Sample records for maximum time uncertainty

  1. Reduction in maximum time uncertainty of paired time signals

    DOEpatents

    Theodosiou, G.E.; Dawson, J.W.

    1983-10-04

    Reduction in the maximum time uncertainty (t_max - t_min) of a series of paired time signals t_1 and t_2 varying between two input terminals and representative of a series of single events, where t_1 ≤ t_2 and t_1 + t_2 equals a constant, is carried out with a circuit utilizing a combination of OR and AND gates as signal selecting means and one or more time delays to increase the minimum value (t_min) of the first signal t_1 closer to t_max and thereby reduce the difference. The circuit may utilize a plurality of stages to reduce the uncertainty by factors of 20-800. 6 figs.
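
    A rough numerical illustration of the selection-plus-delay idea is sketched below (Python). This is a toy model only: the stage rule min(t_1 + d, t_2) and the delay value d are assumptions standing in for the patented OR/AND gate arrangement, not a description of the actual circuit.

      # Toy model of one uncertainty-reduction stage (illustrative assumption,
      # not the patented gate circuit): delay the earlier signal, then select
      # the earlier of the delayed t1 and t2.
      import numpy as np

      C = 100.0                                  # constant sum t1 + t2 (arbitrary units)
      rng = np.random.default_rng(0)
      t1 = rng.uniform(10.0, 50.0, 10000)        # first signal, t1 <= t2 by construction
      t2 = C - t1                                # paired second signal

      d = 40.0                                   # assumed delay: lifts min(t1) up to C/2
      stage_out = np.minimum(t1 + d, t2)         # earliest-of selection after the delay

      print("input spread  t_max - t_min:", t1.max() - t1.min())                # ~40
      print("output spread t_max - t_min:", stage_out.max() - stage_out.min())  # ~20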

  2. Reduction in maximum time uncertainty of paired time signals

    DOEpatents

    Theodosiou, George E.; Dawson, John W.

    1983-01-01

    Reduction in the maximum time uncertainty (t_max - t_min) of a series of paired time signals t_1 and t_2 varying between two input terminals and representative of a series of single events, where t_1 ≤ t_2 and t_1 + t_2 equals a constant, is carried out with a circuit utilizing a combination of OR and AND gates as signal selecting means and one or more time delays to increase the minimum value (t_min) of the first signal t_1 closer to t_max and thereby reduce the difference. The circuit may utilize a plurality of stages to reduce the uncertainty by factors of 20-800.

  3. Underwater passive acoustic localization of Pacific walruses in the northeastern Chukchi Sea.

    PubMed

    Rideout, Brendan P; Dosso, Stan E; Hannay, David E

    2013-09-01

    This paper develops and applies a linearized Bayesian localization algorithm based on acoustic arrival times of marine mammal vocalizations at spatially-separated receivers which provides three-dimensional (3D) location estimates with rigorous uncertainty analysis. To properly account for uncertainty in receiver parameters (3D hydrophone locations and synchronization times) and environmental parameters (water depth and sound-speed correction), these quantities are treated as unknowns constrained by prior estimates and prior uncertainties. Unknown scaling factors on both the prior and arrival-time uncertainties are estimated by minimizing Akaike's Bayesian information criterion (a maximum entropy condition). Maximum a posteriori estimates for sound source locations and times, receiver parameters, and environmental parameters are calculated simultaneously using measurements of arrival times for direct and interface-reflected acoustic paths. Posterior uncertainties for all unknowns incorporate both arrival time and prior uncertainties. Monte Carlo simulation results demonstrate that, for the cases considered here, linearization errors are small and the lack of an accurate sound-speed profile does not cause significant biases in the estimated locations. A sequence of Pacific walrus vocalizations, recorded in the Chukchi Sea northwest of Alaska, is localized using this technique, yielding a track estimate and uncertainties with an estimated speed comparable to normal walrus swim speeds.
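
    For orientation, the sketch below shows the generic core of such an arrival-time inversion: a linearized (Gauss-Newton) solve for source position and emission time given direct-path arrival times. It is not the authors' Bayesian algorithm (which also treats receiver, timing, and environmental parameters as constrained unknowns and propagates prior uncertainties), and the hydrophone layout, sound speed, and source values are made up.

      # Generic linearized arrival-time localization (Gauss-Newton sketch).
      import numpy as np

      def arrival_times(src, t0, receivers, c):
          # direct-path arrival time at each receiver
          return t0 + np.linalg.norm(receivers - src, axis=1) / c

      def localize(t_obs, receivers, c, m0, iters=20):
          # iterate on m = (x, y, z, t0), linearizing t(m) about the current estimate
          m = np.array(m0, dtype=float)
          for _ in range(iters):
              src, t0 = m[:3], m[3]
              d = receivers - src
              r = np.linalg.norm(d, axis=1)
              pred = t0 + r / c
              J = np.hstack([-d / (c * r[:, None]), np.ones((len(r), 1))])  # d(arrival)/dm
              dm, *_ = np.linalg.lstsq(J, t_obs - pred, rcond=None)
              m += dm
          return m

      c = 1480.0                                        # assumed sound speed, m/s
      recs = np.array([[0.0, 0.0, -10.0], [500.0, 0.0, -12.0], [0.0, 500.0, -11.0],
                       [400.0, 450.0, -9.0], [250.0, 250.0, -15.0]])
      true_src, true_t0 = np.array([200.0, 150.0, -30.0]), 0.2
      t_obs = arrival_times(true_src, true_t0, recs, c)
      t_obs += np.random.default_rng(1).normal(0.0, 1e-4, len(recs))        # timing noise
      print(localize(t_obs, recs, c, m0=[100.0, 100.0, -20.0, 0.0]))        # ~[200, 150, -30, 0.2]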

  4. Application of Bayesian Maximum Entropy Filter in parameter calibration of groundwater flow model in PingTung Plain

    NASA Astrophysics Data System (ADS)

    Cheung, Shao-Yong; Lee, Chieh-Han; Yu, Hwa-Lung

    2017-04-01

    Because hydrogeological observation data are limited and carry high levels of uncertainty, parameter estimation for groundwater models remains an important issue. Many estimation methods exist; for example, the Kalman filter provides real-time calibration of parameters from measurements at groundwater monitoring wells, and related methods such as the Extended Kalman Filter and the Ensemble Kalman Filter are widely applied in groundwater research. However, the Kalman filter is limited to linear systems. This study proposes a novel method, Bayesian Maximum Entropy Filtering, which accounts for data uncertainty during parameter estimation. With these two methods, parameters can be estimated from hard (certain) and soft (uncertain) data at the same time. We use Python and QGIS with the MODFLOW groundwater model and implement both the Extended Kalman Filter and Bayesian Maximum Entropy Filtering in Python for parameter estimation, combining a conventional filtering approach with explicit treatment of data uncertainty. The study was conducted as a numerical model experiment that couples the Bayesian maximum entropy filter with a hypothetical MODFLOW groundwater model architecture, using virtual observation wells to simulate and observe the groundwater model periodically. The results show that, by accounting for data uncertainty, the Bayesian maximum entropy filter provides accurate real-time parameter estimation.
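
    For reference, a single generic linear(ized) Kalman update step is sketched below; it is not the Bayesian Maximum Entropy filter described above, and the parameter values, linearized sensitivity matrix, and observation are made-up numbers used only to show how a noisy well measurement corrects a prior parameter estimate.

      # One generic Kalman update of a parameter estimate (illustrative only).
      import numpy as np

      def kalman_update(m, P, y, H, R):
          # prior mean m and covariance P corrected by observation y with
          # (linearized) observation operator H and observation noise R
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          m_post = m + K @ (y - H @ m)
          P_post = (np.eye(len(m)) - K @ H) @ P
          return m_post, P_post

      m = np.array([-4.0, -4.5])                 # e.g., two log-conductivity parameters
      P = np.diag([0.25, 0.25])                  # prior uncertainty
      H = np.array([[0.8, 0.2]])                 # assumed sensitivity of one head observation
      y = np.array([-3.6])                       # noisy observation (hypothetical)
      R = np.array([[0.05]])                     # observation noise variance
      print(kalman_update(m, P, y, H, R))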

  5. Reduction in maximum time uncertainty of paired time signals

    DOEpatents

    Theodosiou, G.E.; Dawson, J.W.

    1981-02-11

    Reduction in the maximum time uncertainty (t_max - t_min) of a series of paired time signals t_1 and t_2 varying between two input terminals and representative of a series of single events, where t_1 ≤ t_2 and t_1 + t_2 equals a constant, is carried out with a circuit utilizing a combination of OR and AND gates as signal selecting means and one or more time delays to increase the minimum value (t_min) of the first signal t_1 closer to t_max and thereby reduce the difference. The circuit may utilize a plurality of stages to reduce the uncertainty by factors of 20 to 800.

  6. Maximum warming occurs about one decade after a carbon dioxide emission

    NASA Astrophysics Data System (ADS)

    Ricke, Katharine L.; Caldeira, Ken

    2014-12-01

    It is known that carbon dioxide emissions cause the Earth to warm, but no previous study has focused on examining how long it takes to reach maximum warming following a particular CO2 emission. Using conjoined results of carbon-cycle and physical-climate model intercomparison projects (Taylor et al 2012, Joos et al 2013), we find the median time between an emission and maximum warming is 10.1 years, with a 90% probability range of 6.6-30.7 years. We evaluate uncertainties in timing and amount of warming, partitioning them into three contributing factors: carbon cycle, climate sensitivity and ocean thermal inertia. If uncertainty in any one factor is reduced to zero without reducing uncertainty in the other factors, the majority of overall uncertainty remains. Thus, narrowing uncertainty in century-scale warming depends on narrowing uncertainty in all contributing factors. Our results indicate that benefit from avoided climate damage from avoided CO2 emissions will be manifested within the lifetimes of people who acted to avoid that emission. While such avoidance could be expected to benefit future generations, there is potential for emissions avoidance to provide substantial benefit to current generations.

  7. A modified ATI technique for nowcasting convective rain volumes over areas. [area-time integrals

    NASA Technical Reports Server (NTRS)

    Makarau, Amos; Johnson, L. Ronald; Doneaud, Andre A.

    1988-01-01

    This paper explores the applicability of the area-time-integral (ATI) technique for the estimation of the growth portion only of a convective storm (while the rain volume is computed using the entire life history of the event) and for nowcasting the total rain volume of a convective system at the stage of its maximum development. For these purposes, the ATIs were computed from the digital radar data (for 1981-1982) from the North Dakota Cloud Modification Project, using the maximum echo area (ATIA) no less than 25 dBz, the maximum reflectivity, and the maximum echo height as the end of the growth portion of the convective event. Linear regression analysis demonstrated that correlations between total rain volume or the maximum rain volume versus ATIA were the strongest. The uncertainties obtained were comparable to the uncertainties which typically occur in rain volume estimates obtained from radar data employing Z-R conversion followed by space and time integration. This demonstrates that the total rain volume of a storm can be nowcasted at its maximum stage of development.

  8. Analysis of actuator delay and its effect on uncertainty quantification for real-time hybrid simulation

    NASA Astrophysics Data System (ADS)

    Chen, Cheng; Xu, Weijie; Guo, Tong; Chen, Kai

    2017-10-01

    Uncertainties in structure properties can result in different responses in hybrid simulations. Quantification of the effect of these uncertainties would enable researchers to estimate the variances of structural responses observed from experiments. This poses challenges for real-time hybrid simulation (RTHS) due to the existence of actuator delay. Polynomial chaos expansion (PCE) projects the model outputs on a basis of orthogonal stochastic polynomials to account for influences of model uncertainties. In this paper, PCE is utilized to evaluate effect of actuator delay on the maximum displacement from real-time hybrid simulation of a single degree of freedom (SDOF) structure when accounting for uncertainties in structural properties. The PCE is first applied for RTHS without delay to determine the order of PCE, the number of sample points as well as the method for coefficients calculation. The PCE is then applied to RTHS with actuator delay. The mean, variance and Sobol indices are compared and discussed to evaluate the effects of actuator delay on uncertainty quantification for RTHS. Results show that the mean and the variance of the maximum displacement increase linearly and exponentially with respect to actuator delay, respectively. Sensitivity analysis through Sobol indices also indicates the influence of the single random variable decreases while the coupling effect increases with the increase of actuator delay.
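
    A minimal non-intrusive PCE sketch is shown below under stated assumptions: a toy response depending on a single standard-normal random parameter, a probabilists' Hermite basis, and coefficients fitted by least squares. It is not the authors' RTHS model, but it shows how PCE coefficients yield the response mean and variance.

      # Non-intrusive polynomial chaos expansion for a toy scalar response.
      import math
      import numpy as np
      from numpy.polynomial import hermite_e as He

      def response(xi):
          # toy "maximum displacement" as a function of one uncertain parameter
          return 1.0 + 0.3 * xi + 0.05 * xi**2 + 0.02 * np.sin(xi)

      deg = 4
      xi = np.random.default_rng(0).standard_normal(2000)     # sample the random input
      A = He.hermevander(xi, deg)                              # He_0..He_deg at the samples
      coef, *_ = np.linalg.lstsq(A, response(xi), rcond=None)  # least-squares PCE coefficients

      mean = coef[0]                                           # orthogonality: E[He_k] = 0 for k > 0
      var = sum(coef[k] ** 2 * math.factorial(k) for k in range(1, deg + 1))  # E[He_k^2] = k!
      print("PCE mean:", mean, " PCE variance:", var)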

  9. Dynamics of entropic uncertainty for atoms immersed in thermal fluctuating massless scalar field

    NASA Astrophysics Data System (ADS)

    Huang, Zhiming

    2018-04-01

    In this article, the dynamics of quantum memory-assisted entropic uncertainty relation for two atoms immersed in a thermal bath of fluctuating massless scalar field is investigated. The master equation that governs the system evolution process is derived. It is found that the mixedness is closely associated with entropic uncertainty. For equilibrium state, the tightness of uncertainty vanishes. For the initial maximum entangled state, the tightness of uncertainty undergoes a slight increase and then declines to zero with evolution time. It is found that temperature can increase the uncertainty, but two-atom separation does not always increase the uncertainty. The uncertainty evolves to different relatively stable values for different temperatures and converges to a fixed value for different two-atom distances with evolution time. Furthermore, weak measurement reversal is employed to control the entropic uncertainty.

  10. Picosecond timing resolution detection of γ-photons utilizing microchannel-plate detectors: experimental tests of quantum nonlocality and photon localization

    NASA Astrophysics Data System (ADS)

    Irby, Victor D.

    2004-09-01

    The concept and subsequent experimental verification of the proportionality between pulse amplitude and detector transit time for microchannel-plate detectors is presented. This discovery has led to considerable improvement in the overall timing resolution for detection of high-energy γ-photons. Utilizing a ²²Na positron source, a full width half maximum (FWHM) timing resolution of 138 ps has been achieved. This FWHM includes detector transit-time spread for both chevron-stack-type detectors, timing spread due to uncertainties in annihilation location, all electronic uncertainty and any remaining quantum mechanical uncertainty. The first measurement of the minimum quantum uncertainty in the time interval between detection of the two annihilation photons is reported. The experimental results give strong evidence against instantaneous spatial localization of γ-photons due to measurement-induced nonlocal quantum wavefunction collapse. The experimental results are also the first that imply momentum is conserved only after the quantum uncertainty in time has elapsed (Yukawa H 1935 Proc. Phys. Math. Soc. Japan 17 48).

  11. Study of consensus-based time synchronization in wireless sensor networks.

    PubMed

    He, Jianping; Li, Hao; Chen, Jiming; Cheng, Peng

    2014-03-01

    Recently, various consensus-based protocols have been developed for time synchronization in wireless sensor networks. However, due to the uncertainties lying in both the hardware fabrication and network communication processes, it is not clear how most of these protocols will perform in real implementations. To help close this gap, this paper investigates whether and how typical consensus-based time synchronization protocols can tolerate the uncertainties in practical sensor networks through extensive testbed experiments. For two typical protocols, i.e., Average Time Synchronization (ATS) and Maximum Time Synchronization (MTS), we first analyze how the time synchronization accuracy is affected by various uncertainties in the system. We then implement both protocols on our sensor network testbed consisting of Micaz nodes and investigate the time synchronization performance and robustness under various network settings. Noticing that the synchronized clocks under MTS may be slightly faster than the desirable clock, by adopting both maximum consensus and minimum consensus we propose a modified protocol, MMTS, which is able to drive the synchronized clocks closer to the desirable clock while maintaining the convergence rate and synchronization accuracy of MTS. © 2013 ISA. Published by ISA. All rights reserved.
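
    The sketch below illustrates plain maximum-consensus synchronization of clock offsets on a small ring network. It is a simplified model (static offsets, ideal message exchange, no clock-skew correction), so it shows only the max-consensus ingredient of MTS, not the full MTS/MMTS protocols.

      # Max-consensus on clock offsets: every node adopts the largest value it hears.
      import numpy as np

      rng = np.random.default_rng(2)
      offsets = rng.uniform(-5.0, 5.0, 8)                             # initial clock offsets (ms)
      neighbors = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}   # ring topology

      clocks = offsets.copy()
      for _ in range(10):                                             # synchronous exchange rounds
          new = clocks.copy()
          for i, nbrs in neighbors.items():
              new[i] = max(clocks[i], *(clocks[j] for j in nbrs))
          clocks = new

      print("offset spread after max-consensus:", clocks.max() - clocks.min())   # -> 0.0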

  12. Are Subject-Specific Musculoskeletal Models Robust to the Uncertainties in Parameter Identification?

    PubMed Central

    Valente, Giordano; Pitto, Lorenzo; Testi, Debora; Seth, Ajay; Delp, Scott L.; Stagni, Rita; Viceconti, Marco; Taddei, Fulvia

    2014-01-01

    Subject-specific musculoskeletal modeling can be applied to study musculoskeletal disorders, allowing inclusion of personalized anatomy and properties. Independent of the tools used for model creation, there are unavoidable uncertainties associated with parameter identification, whose effect on model predictions is still not fully understood. The aim of the present study was to analyze the sensitivity of subject-specific model predictions (i.e., joint angles, joint moments, muscle and joint contact forces) during walking to the uncertainties in the identification of body landmark positions, maximum muscle tension and musculotendon geometry. To this aim, we created an MRI-based musculoskeletal model of the lower limbs, defined as a 7-segment, 10-degree-of-freedom articulated linkage, actuated by 84 musculotendon units. We then performed a Monte-Carlo probabilistic analysis perturbing model parameters according to their uncertainty, and solving a typical inverse dynamics and static optimization problem using 500 models that included the different sets of perturbed variable values. Model creation and gait simulations were performed using freely available software that we developed to standardize the process of model creation, integrate with OpenSim and create probabilistic simulations of movement. The uncertainties in input variables had a moderate effect on model predictions: muscle and joint contact forces showed a maximum standard deviation of 0.3 times body weight and a maximum range of 2.1 times body weight. In addition, the output variables correlated significantly with only a few input variables (up to 7 out of 312) across the gait cycle, including the geometry definition of larger muscles and, in limited portions of the gait cycle, the maximum muscle tension. Although we found the subject-specific models not markedly sensitive to parameter identification, researchers should be aware of the model precision in relation to the intended application. In fact, force predictions could be affected by an uncertainty of the same order of magnitude as their value, although this condition has a low probability of occurring. PMID:25390896

  13. A double-gaussian, percentile-based method for estimating maximum blood flow velocity.

    PubMed

    Marzban, Caren; Illian, Paul R; Morison, David; Mourad, Pierre D

    2013-11-01

    Transcranial Doppler sonography allows for the estimation of blood flow velocity, whose maximum value, especially at systole, is often of clinical interest. Given that observed values of flow velocity are subject to noise, a useful notion of "maximum" requires a criterion for separating the signal from the noise. All commonly used criteria produce a point estimate (ie, a single value) of maximum flow velocity at any time and therefore convey no information on the distribution or uncertainty of flow velocity. This limitation has clinical consequences especially for patients in vasospasm, whose largest flow velocities can be difficult to measure. Therefore, a method for estimating flow velocity and its uncertainty is desirable. A gaussian mixture model is used to separate the noise from the signal distribution. The time series of a given percentile of the latter, then, provides a flow velocity envelope. This means of estimating the flow velocity envelope naturally allows for displaying several percentiles (e.g., 95th and 99th), thereby conveying uncertainty in the highest flow velocity. Such envelopes were computed for 59 patients and were shown to provide reasonable and useful estimates of the largest flow velocities compared to a standard algorithm. Moreover, we found that the commonly used envelope was generally consistent with the 90th percentile of the signal distribution derived via the gaussian mixture model. Separating the observed distribution of flow velocity into a noise component and a signal component, using a double-gaussian mixture model, allows for the percentiles of the latter to provide meaningful measures of the largest flow velocities and their uncertainty.
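
    A minimal sketch of the percentile-envelope idea is given below, assuming synthetic velocity samples and a two-component Gaussian mixture fitted with scikit-learn; the higher-mean component is taken as the signal, and its percentiles give candidate envelope values. A clinical pipeline would repeat this per time frame of the Doppler spectrum.

      # Double-Gaussian separation of noise and signal, then signal percentiles.
      import numpy as np
      from scipy.stats import norm
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(3)
      noise = rng.normal(10.0, 8.0, 4000)        # synthetic noise velocities (cm/s)
      signal = rng.normal(80.0, 15.0, 1000)      # synthetic flow-velocity signal
      v = np.concatenate([noise, signal])[:, None]

      gmm = GaussianMixture(n_components=2, random_state=0).fit(v)
      hi = int(np.argmax(gmm.means_))            # higher-mean component = signal
      mu = gmm.means_[hi, 0]
      sd = float(np.sqrt(gmm.covariances_[hi, 0, 0]))

      for p in (0.90, 0.95, 0.99):               # percentile-based envelope values
          print(f"{int(p * 100)}th percentile velocity: {norm.ppf(p, mu, sd):.1f} cm/s")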

  14. Impacts of tides on tsunami propagation due to potential Nankai Trough earthquakes in the Seto Inland Sea, Japan

    NASA Astrophysics Data System (ADS)

    Lee, Han Soo; Shimoyama, Tomohisa; Popinet, Stéphane

    2015-10-01

    The impacts of tides on extreme tsunami propagation due to potential Nankai Trough earthquakes in the Seto Inland Sea (SIS), Japan, are investigated through numerical experiments. Tsunami experiments are conducted based on five scenarios that consider tides at four different phases: flood, high, ebb, and low tide. Probes selected arbitrarily in the Bungo and Kii Channels show less significant effects of tides on tsunami heights and the arrival times of the first waves than probes that experience large tidal ranges in the inner basins and bays of the SIS. For instance, the maximum tsunami height and the arrival time at Toyomaesi differ by more than 0.5 m and nearly 1 h, respectively, depending on the tidal phase. The uncertainties, defined in terms of the calculated maximum tsunami heights due to tides, illustrate that the calculated maximum tsunami heights in the inner SIS with standing tides have much larger uncertainties than those in the two channels with propagating tides. Particularly in Harima Nada, the uncertainties due to the impacts of tides are greater than 50% of the tsunami heights computed without tidal interaction. The results recommend simulating tsunamis together with tides in shallow-water environments to reduce the uncertainties involved in tsunami modeling and prediction for tsunami hazard preparedness. This article was corrected on 26 OCT 2015. See the end of the full text for details.

  15. County-Level Climate Uncertainty for Risk Assessments: Volume 18 Appendix Q - Historical Maximum Near-Surface Wind Speed.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M.

    This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture and evaporation for the historical period (1976-2005), and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.

  16. County-Level Climate Uncertainty for Risk Assessments: Volume 4 Appendix C - Historical Maximum Near-Surface Air Temperature.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M.

    2017-06-01

    This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture and evaporation for the historical period (1976-2005), and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.

  17. The Effect of Suspension-Line Length on Viking Parachute Inflation Loads

    NASA Technical Reports Server (NTRS)

    Talay, Theodore A.; Poole, Lamont R.; Whitlock, Charles H.

    1971-01-01

    Analytical calculations have considered the effect on maximum load of increasing the suspension-line length on the Viking parachute. Results indicate that unfurling time is increased to 1.85 seconds from 1.45 seconds, and that maximum loads are increased approximately 5 percent with an uncertainty of -4 percent to +3 percent.

  18. Multimodal Likelihoods in Educational Assessment: Will the Real Maximum Likelihood Score Please Stand up?

    ERIC Educational Resources Information Center

    Wothke, Werner; Burket, George; Chen, Li-Sue; Gao, Furong; Shu, Lianghua; Chia, Mike

    2011-01-01

    It has been known for some time that item response theory (IRT) models may exhibit a likelihood function of a respondent's ability which may have multiple modes, flat modes, or both. These conditions, often associated with guessing of multiple-choice (MC) questions, can introduce uncertainty and bias to ability estimation by maximum likelihood…

  19. Assessing market uncertainty by means of a time-varying intermittency parameter for asset price fluctuations

    NASA Astrophysics Data System (ADS)

    Rypdal, Martin; Sirnes, Espen; Løvsletten, Ola; Rypdal, Kristoffer

    2013-08-01

    Maximum likelihood estimation techniques for multifractal processes are applied to high-frequency data in order to quantify intermittency in the fluctuations of asset prices. From time records as short as one month these methods permit extraction of a meaningful intermittency parameter λ characterising the degree of volatility clustering. We can therefore study the time evolution of volatility clustering and test the statistical significance of this variability. By analysing data from the Oslo Stock Exchange, and comparing the results with the investment grade spread, we find that the estimates of λ are lower at times of high market uncertainty.

  20. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of an FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max and, to a lesser extent, the maximum tensile strength σ_n^max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear) E_t largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.
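
    The general emulator-plus-calibration pattern can be sketched as below, using a toy one-parameter "simulator", a Gaussian-process emulator over the training runs, and a made-up experimental value with uncertainty (a flat prior is assumed). This is only an illustration of the workflow, not the FDEM/SHPB study or the authors' statistical model.

      # Emulate a toy simulator from training runs, then calibrate to an "experiment".
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      def simulator(theta):
          # toy stand-in for an expensive code output (e.g., a peak-stress summary)
          return 3.0 * np.tanh(theta / 2.0) + 0.5 * theta

      theta_train = np.linspace(0.0, 5.0, 12)[:, None]            # "training" settings
      y_train = simulator(theta_train).ravel()
      emu = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(1e-6)).fit(theta_train, y_train)

      y_exp, sigma_exp = 3.4, 0.1                                 # experiment value and uncertainty
      theta_grid = np.linspace(0.0, 5.0, 501)[:, None]
      mu, std = emu.predict(theta_grid, return_std=True)

      # unnormalized posterior over theta combines emulator and experimental uncertainty
      logpost = -0.5 * (y_exp - mu) ** 2 / (std ** 2 + sigma_exp ** 2)
      post = np.exp(logpost - logpost.max()); post /= post.sum()
      print("posterior mean of theta:", float(theta_grid.ravel() @ post))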

  1. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE PAGES

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban; ...

    2018-05-01

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of an FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max and, to a lesser extent, the maximum tensile strength σ_n^max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear) E_t largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.

  2. Robust distributed model predictive control of linear systems with structured time-varying uncertainties

    NASA Astrophysics Data System (ADS)

    Zhang, Langwen; Xie, Wei; Wang, Jingcheng

    2017-11-01

    In this work, synthesis of robust distributed model predictive control (MPC) is presented for a class of linear systems subject to structured time-varying uncertainties. By decomposing a global system into smaller dimensional subsystems, a set of distributed MPC controllers, instead of a centralised controller, are designed. To ensure the robust stability of the closed-loop system with respect to model uncertainties, distributed state feedback laws are obtained by solving a min-max optimisation problem. The design of robust distributed MPC is then transformed into solving a minimisation optimisation problem with linear matrix inequality constraints. An iterative online algorithm with adjustable maximum iteration is proposed to coordinate the distributed controllers to achieve a global performance. The simulation results show the effectiveness of the proposed robust distributed MPC algorithm.

  3. County-Level Climate Uncertainty for Risk Assessments: Volume 14 Appendix M - Historical Surface Runoff.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M

    2017-06-01

    This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture and evaporation for the historical period (1976-2005), and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.

  4. County-Level Climate Uncertainty for Risk Assessments: Volume 15 Appendix N - Forecast Surface Runoff.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M.

    2017-05-01

    This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture and evaporation for the historical period (1976-2005), and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.

  5. County-Level Climate Uncertainty for Risk Assessments: Volume 10 Appendix I - Historical Evaporation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M.

    2017-06-01

    This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture and evaporation for the historical period (1976-2005), and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.

  6. County-Level Climate Uncertainty for Risk Assessments: Volume 8 Appendix G - Historical Precipitation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M.

    2017-06-01

    This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture and evaporation for the historical period (1976-2005), and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.

  7. County-Level Climate Uncertainty for Risk Assessments: Volume 12 Appendix K - Historical Rel. Humidity.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M

    2017-06-01

    This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture and evaporation for the historical period (1976-2005), and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.

  8. County-Level Climate Uncertainty for Risk Assessments: Volume 23 Appendix V - Forecast Sea Ice Thickness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M.

    2017-04-01

    This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture and evaporation for the historical period (1976-2005), and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.

  9. County-Level Climate Uncertainty for Risk Assessments: Volume 24 Appendix W - Historical Sea Ice Age.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M

    2017-05-01

    This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture and evaporation for the historical period (1976-2005), and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.

  10. County-Level Climate Uncertainty for Risk Assessments: Volume 22 Appendix U - Historical Sea Ice Thickness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M

    2017-06-01

    This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture and evaporation for the historical period (1976-2005), and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.

  11. County-Level Climate Uncertainty for Risk Assessments: Volume 21 Appendix T - Forecast Sea Ice Area Fraction.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M.

    2017-06-01

    This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture and evaporation for the historical period (1976-2005), and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.

  12. County-Level Climate Uncertainty for Risk Assessments: Volume 25 Appendix X - Forecast Sea Ice Age.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M.

    2017-05-01

    This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture and evaporation for the historical period (1976-2005), and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.

  13. County-Level Climate Uncertainty for Risk Assessments: Volume 27 Appendix Z - Forecast Ridging Rate.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M.

    2017-06-01

    This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture and evaporation for the historical period (1976-2005), and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.

  14. County-Level Climate Uncertainty for Risk Assessments: Volume 17 Appendix P - Forecast Soil Moisture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M.

    This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture and evaporation for the historical period (1976-2005), and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.

  15. County-Level Climate Uncertainty for Risk Assessments: Volume 16 Appendix O - Historical Soil Moisture.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M

    This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture and evaporation for the historical period (1976-2005), and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.

  16. County-Level Climate Uncertainty for Risk Assessments: Volume 6 Appendix E - Historical Minimum Near-Surface Air Temperature.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M.

    2017-06-01

    This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture and evaporation for the historical period (1976-2005), and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.

  17. County-Level Climate Uncertainty for Risk Assessments: Volume 26 Appendix Y - Historical Ridging Rate.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M.

    2017-05-01

    This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture and evaporation for the historical period (1976-2005), and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.

  18. County-Level Climate Uncertainty for Risk Assessments: Volume 2 Appendix A - Historical Near-Surface Air Temperature.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M.

    2017-06-01

    This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture and evaporation for the historical period (1976-2005), and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.

  19. County-Level Climate Uncertainty for Risk Assessments: Volume 1.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M.

    This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture and evaporation for the historical period (1976-2005), and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.

  20. Stimulus uncertainty enhances long-term potentiation-like plasticity in human motor cortex.

    PubMed

    Sale, Martin V; Nydam, Abbey S; Mattingley, Jason B

    2017-03-01

    Plasticity can be induced in human cortex using paired associative stimulation (PAS), which repeatedly and predictably pairs a peripheral electrical stimulus with transcranial magnetic stimulation (TMS) to the contralateral motor region. Many studies have reported small or inconsistent effects of PAS. Given that uncertain stimuli can promote learning, the predictable nature of the stimulation in conventional PAS paradigms might serve to attenuate plasticity induction. Here, we introduced stimulus uncertainty into the PAS paradigm to investigate if it can boost plasticity induction. Across two experimental sessions, participants (n = 28) received a modified PAS paradigm consisting of a random combination of 90 paired stimuli and 90 unpaired (TMS-only) stimuli. Prior to each of these stimuli, participants also received an auditory cue which either reliably predicted whether the upcoming stimulus was paired or unpaired (no uncertainty condition) or did not predict the upcoming stimulus (maximum uncertainty condition). Motor evoked potentials (MEPs) evoked from abductor pollicis brevis (APB) muscle quantified cortical excitability before and after PAS. MEP amplitude increased significantly 15 min following PAS in the maximum uncertainty condition. There was no reliable change in MEP amplitude in the no uncertainty condition, nor between post-PAS MEP amplitudes across the two conditions. These results suggest that stimulus uncertainty may provide a novel means to enhance plasticity induction with the PAS paradigm in human motor cortex. To provide further support to the notion that stimulus uncertainty and prediction error promote plasticity, future studies should further explore the time course of these changes, and investigate what aspects of stimulus uncertainty are critical in boosting plasticity. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty from maximum temperature metric selection.

    PubMed

    DeWeber, Jefferson T; Wagner, Tyler

    2018-06-01

    Predictions of the projected changes in species distributions and potential adaptation action benefits can help guide conservation actions. There is substantial uncertainty in projecting species distributions into an unknown future, however, which can undermine confidence in predictions or misdirect conservation actions if not properly considered. Recent studies have shown that the selection of alternative climate metrics describing very different climatic aspects (e.g., mean air temperature vs. mean precipitation) can be a substantial source of projection uncertainty. It is unclear, however, how much projection uncertainty might stem from selecting among highly correlated, ecologically similar climate metrics (e.g., maximum temperature in July, maximum 30-day temperature) describing the same climatic aspect (e.g., maximum temperatures) known to limit a species' distribution. It is also unclear how projection uncertainty might propagate into predictions of the potential benefits of adaptation actions that might lessen climate change effects. We provide probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty stemming from the selection of four maximum temperature metrics for brook trout (Salvelinus fontinalis), a cold-water salmonid of conservation concern in the eastern United States. Projected losses in suitable stream length varied by as much as 20% among alternative maximum temperature metrics for mid-century climate projections, which was similar to variation among three climate models. Similarly, the regional average predicted increase in brook trout occurrence probability under an adaptation action scenario of full riparian forest restoration varied by as much as .2 among metrics. Our use of Bayesian inference provides probabilistic measures of vulnerability and adaptation action benefits for individual stream reaches that properly address statistical uncertainty and can help guide conservation actions. Our study demonstrates that even relatively small differences in the definitions of climate metrics can result in very different projections and reveal high uncertainty in predicted climate change effects. © 2018 John Wiley & Sons Ltd.

  2. Probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty from maximum temperature metric selection

    USGS Publications Warehouse

    DeWeber, Jefferson T.; Wagner, Tyler

    2018-01-01

    Predictions of the projected changes in species distributions and potential adaptation action benefits can help guide conservation actions. There is substantial uncertainty in projecting species distributions into an unknown future, however, which can undermine confidence in predictions or misdirect conservation actions if not properly considered. Recent studies have shown that the selection of alternative climate metrics describing very different climatic aspects (e.g., mean air temperature vs. mean precipitation) can be a substantial source of projection uncertainty. It is unclear, however, how much projection uncertainty might stem from selecting among highly correlated, ecologically similar climate metrics (e.g., maximum temperature in July, maximum 30‐day temperature) describing the same climatic aspect (e.g., maximum temperatures) known to limit a species’ distribution. It is also unclear how projection uncertainty might propagate into predictions of the potential benefits of adaptation actions that might lessen climate change effects. We provide probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty stemming from the selection of four maximum temperature metrics for brook trout (Salvelinus fontinalis), a cold‐water salmonid of conservation concern in the eastern United States. Projected losses in suitable stream length varied by as much as 20% among alternative maximum temperature metrics for mid‐century climate projections, which was similar to variation among three climate models. Similarly, the regional average predicted increase in brook trout occurrence probability under an adaptation action scenario of full riparian forest restoration varied by as much as .2 among metrics. Our use of Bayesian inference provides probabilistic measures of vulnerability and adaptation action benefits for individual stream reaches that properly address statistical uncertainty and can help guide conservation actions. Our study demonstrates that even relatively small differences in the definitions of climate metrics can result in very different projections and reveal high uncertainty in predicted climate change effects.

  3. Analysis of error in TOMS total ozone as a function of orbit and attitude parameters

    NASA Technical Reports Server (NTRS)

    Gregg, W. W.; Ardanuy, P. E.; Braun, W. C.; Vallette, B. J.; Bhartia, P. K.; Ray, S. N.

    1991-01-01

    Computer simulations of orbital scenarios were performed to examine the effects of orbital altitude, equator crossing time, attitude uncertainty, and orbital eccentricity on ozone observations by future satellites. These effects were assessed by determining changes in solar and viewing geometry and in the loss of earth daytime coverage. The importance of these changes for ozone retrieval was determined by simulating uncertainties in the TOMS ozone retrieval algorithm. The major findings are as follows: (1) Drift of the equator crossing time from local noon would have the largest effect on the quality of ozone derived from TOMS. The most significant effect of this drift is the loss of earth daytime coverage in the winter hemisphere. The loss in coverage increases from 1 degree of latitude for + or - 1 hour from noon, to 6 degrees for + or - 3 hours, to 53 degrees for + or - 6 hours. An additional effect is the increase in ozone retrieval errors due to high solar zenith angles. (2) To maintain contiguous earth coverage, the maximum scan angle of the sensor must be increased with decreasing orbital altitude. The maximum scan angle required for full coverage at the equator varies from 60 degrees at 600 km altitude to 45 degrees at 1200 km. This produces an increase in spacecraft zenith angle, theta, which decreases the ozone retrieval accuracy. The range in theta was approximately 72 degrees at 600 km to approximately 57 degrees at 1200 km. (3) The effect of elliptical orbits is to create gaps in coverage along the subsatellite track. An elliptical orbit with a 200 km perigee and 1200 km apogee produced a maximum earth coverage gap of about 45 km at the perigee at nadir. (4) An attitude uncertainty of 0.1 degree in each axis (pitch, roll, yaw) affected the maximum scan angle required to view the pole and the maximum solar zenith angle.

  4. Robustness Analysis and Optimally Robust Control Design via Sum-of-Squares

    NASA Technical Reports Server (NTRS)

    Dorobantu, Andrei; Crespo, Luis G.; Seiler, Peter J.

    2012-01-01

    A control analysis and design framework is proposed for systems subject to parametric uncertainty. The underlying strategies are based on sum-of-squares (SOS) polynomial analysis and nonlinear optimization to design an optimally robust controller. The approach determines a maximum uncertainty range for which the closed-loop system satisfies a set of stability and performance requirements. These requirements, defined as inequality constraints on several metrics, are restricted to polynomial functions of the uncertainty. To quantify robustness, SOS analysis is used to prove that the closed-loop system complies with the requirements for a given uncertainty range. The maximum uncertainty range, calculated by assessing a sequence of increasingly larger ranges, serves as a robustness metric for the closed-loop system. To optimize the control design, nonlinear optimization is used to enlarge the maximum uncertainty range by tuning the controller gains. Hence, the resulting controller is optimally robust to parametric uncertainty. This approach balances the robustness margins corresponding to each requirement in order to maximize the aggregate system robustness. The proposed framework is applied to a simple linear short-period aircraft model with uncertain aerodynamic coefficients.

  5. Paleodust variability since the Last Glacial Maximum and implications for iron inputs to the ocean

    NASA Astrophysics Data System (ADS)

    Albani, S.; Mahowald, N. M.; Murphy, L. N.; Raiswell, R.; Moore, J. K.; Anderson, R. F.; McGee, D.; Bradtmiller, L. I.; Delmonte, B.; Hesse, P. P.; Mayewski, P. A.

    2016-04-01

    Changing climate conditions affect dust emissions and the global dust cycle, which in turn affects climate and biogeochemistry. In this study we use observationally constrained model reconstructions of the global dust cycle since the Last Glacial Maximum, combined with different simplified assumptions of atmospheric and sea ice processing of dust-borne iron, to provide estimates of soluble iron deposition to the oceans. For different climate conditions, we discuss uncertainties in model-based estimates of atmospheric processing and dust deposition to key oceanic regions, highlighting the large degree of uncertainty of this important variable for ocean biogeochemistry and the global carbon cycle. We also show the role of sea ice acting as a time buffer and processing agent, which results in a delayed and pulse-like soluble iron release into the ocean during the melting season, with monthly peaks up to ~17 Gg/month released into the Southern Oceans during the Last Glacial Maximum (LGM).

  6. Flood Hazard Mapping by Applying Fuzzy TOPSIS Method

    NASA Astrophysics Data System (ADS)

    Han, K. Y.; Lee, J. Y.; Keum, H.; Kim, B. J.; Kim, T. H.

    2017-12-01

    Many technical methods exist for integrating multiple factors into a flood hazard map. The purpose of this study is to propose a methodology for integrated flood hazard mapping using MCDM (Multi-Criteria Decision Making). MCDM problems involve a set of alternatives that are evaluated on the basis of conflicting and incommensurate criteria. In this study, MCDM is applied to flood risk assessment with maximum flood depth, maximum velocity, and maximum travel time as the criteria and each spatial element as an alternative. Finding the alternative closest to an ideal value is an appropriate way to assess the flood risk of many element units (alternatives) based on several flood indices. Therefore, TOPSIS, the most commonly used MCDM scheme, is adopted to create the flood hazard map. The indices used for flood hazard mapping (maximum flood depth, maximum velocity, and maximum travel time) carry uncertainty because the simulated values vary with the flood scenario and topographic conditions, and this ambiguity in the indices can propagate into the flood hazard map. To account for the ambiguity and uncertainty of the criteria, fuzzy logic, which can handle ambiguous expressions, is introduced. In this paper, we produced a flood hazard map for levee-breach overflow using the fuzzy TOPSIS technique. We identified the areas with the highest hazard grade in the resulting integrated flood hazard map and compared them with those indicated in existing flood risk maps. We also expect that, if the methodology suggested here is applied to producing current flood risk maps, it will be possible to create new flood hazard maps that also consider priorities among hazard areas and contain more varied and useful information than before. Keywords: flood hazard map; levee breach analysis; 2D analysis; MCDM; fuzzy TOPSIS. Acknowledgement: This research was supported by a grant (17AWMP-B079625-04) from the Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
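
    The ranking step that TOPSIS performs can be illustrated with a short sketch. The example below is a crisp (non-fuzzy) TOPSIS ranking in Python with made-up criterion weights and cell values for maximum depth, maximum velocity and maximum travel time; the study itself uses fuzzy numbers for the criteria, so this is only a simplified illustration of the closeness-to-ideal idea, not the authors' implementation.

    import numpy as np

    def topsis(decision_matrix, weights, benefit):
        """Rank alternatives by closeness to the ideal solution (crisp TOPSIS).

        decision_matrix: (n_alternatives, n_criteria) array
        weights: criterion weights
        benefit: True where larger values are better, False where smaller are better
        """
        X = np.asarray(decision_matrix, dtype=float)
        # Vector-normalize each criterion column, then apply the weights.
        V = (X / np.linalg.norm(X, axis=0)) * weights
        # Ideal (best) and anti-ideal (worst) values per criterion.
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_plus = np.linalg.norm(V - ideal, axis=1)
        d_minus = np.linalg.norm(V - anti, axis=1)
        return d_minus / (d_plus + d_minus)   # closeness coefficient in [0, 1]

    # Hypothetical grid cells scored on (max depth [m], max velocity [m/s], max travel time [h]).
    # Depth and velocity increase hazard (treated as "benefit" for hazard ranking);
    # a longer travel time lowers hazard (benefit=False).
    cells = [[2.1, 1.5, 3.0],
             [0.4, 0.3, 8.0],
             [1.2, 0.9, 5.0]]
    closeness = topsis(cells, weights=[0.5, 0.3, 0.2], benefit=[True, True, False])
    print(closeness)  # higher value = closer to the worst-case (most hazardous) condition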

  7. Uncertainties in real-world decisions on medical technologies.

    PubMed

    Lu, C Y

    2014-08-01

    Patients, clinicians, payers and policy makers face substantial uncertainties in their respective healthcare decisions as they attempt to achieve maximum value, or the greatest level of benefit possible at a given cost. Uncertainties largely come from incomplete information at the time that decisions must be made. This is true in all areas of medicine because evidence from clinical trials is often incongruent with real-world patient care. This article highlights key uncertainties around the (comparative) benefits and harms of medical technologies. Initiatives and strategies such as comparative effectiveness research and coverage with evidence development may help to generate reliable and relevant evidence for decisions on coverage and treatment. These efforts could result in better decisions that improve patient outcomes and better use of scarce medical resources. © 2014 John Wiley & Sons Ltd.

  8. Maximum warming occurs about one decade after carbon dioxide emission

    NASA Astrophysics Data System (ADS)

    Ricke, K.; Caldeira, K.

    2014-12-01

    There has been a long tradition of estimating the amount of climate change that would result from various carbon dioxide emission or concentration scenarios, but there has been relatively little quantitative analysis of how long it takes to feel the consequences of an individual carbon dioxide emission. Using conjoined results of recent carbon-cycle and physical-climate model intercomparison projects, we find the median time between an emission and maximum warming is 10.1 years, with a 90% probability range of 6.6 to 30.7 years. We evaluate uncertainties in timing and amount of warming, partitioning them into three contributing factors: carbon cycle, climate sensitivity and ocean thermal inertia. To characterize the carbon cycle uncertainty associated with the global temperature response to a carbon dioxide emission today, we use fits to the time series of carbon dioxide concentrations from the 15 ensemble members of a CO2 impulse response function model intercomparison project (1). To characterize both the uncertainty in climate sensitivity and in the thermal inertia of the climate system, we use fits to the time series of global temperature change from the 20 ensemble members of the Coupled Model Intercomparison Project phase 5 (CMIP5; 2) abrupt4xco2 experiment, separating the effects of each uncertainty factor using one of two simple physical models for each CMIP5 climate model. This yields 6,000 possible combinations of these three factors using a standard convolution integral approach. Our results indicate that the benefits of avoided climate damage from avoided CO2 emissions will be manifested within the lifetimes of people who acted to avoid that emission. While the relevant time lags imposed by the climate system are substantially shorter than a human lifetime, they are substantially longer than the typical political election cycle, making the delay and its associated uncertainties both economically and politically significant. References: 1. Joos F et al. (2013) Carbon dioxide and climate impulse response functions for the computation of greenhouse gas metrics: a multi-model analysis. Atmos Chem Phys 13:2793-2825. 2. Taylor KE, Stouffer RJ, Meehl GA (2011) An Overview of CMIP5 and the Experiment Design. Bull Am Meteorol Soc 93:485-498.
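
    The convolution integral approach mentioned above can be sketched in a few lines: convolve the airborne-fraction decay of a CO2 pulse with the derivative of a step-response temperature function and locate the maximum of the result. The impulse-response coefficients and time scales below are illustrative placeholders, not the fitted CMIP5 or carbon-cycle ensemble parameters used in the study; with these placeholders the maximum falls on the order of a decade after the pulse.

    import numpy as np

    t = np.arange(0.0, 200.0, 0.1)          # years after a pulse emission

    def co2_fraction_remaining(t):
        # Airborne fraction of a CO2 pulse: a constant term plus two decaying modes
        # (placeholder coefficients, not the Joos et al. fit).
        return 0.22 + 0.30 * np.exp(-t / 300.0) + 0.48 * np.exp(-t / 30.0)

    def temperature_response(t):
        # Temperature response to a step in radiative forcing (two-box ocean):
        # a fast surface mode and a slow deep-ocean mode (illustrative time scales).
        return 0.6 * (1 - np.exp(-t / 4.0)) + 0.4 * (1 - np.exp(-t / 250.0))

    # Warming from a single pulse = convolution of the CO2 decay curve with the
    # derivative of the temperature step response (discretized convolution integral).
    dt = t[1] - t[0]
    step_deriv = np.gradient(temperature_response(t), dt)
    warming = np.convolve(co2_fraction_remaining(t), step_deriv)[: len(t)] * dt

    print("time of maximum warming: %.1f years" % t[np.argmax(warming)])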

  9. Verification on the Dose Profile Variation of a 3-D—NIPAM Polymer Gel Dosimeter

    NASA Astrophysics Data System (ADS)

    Hsieh, Bor-Tsung; Wu, Jay; Chang, Yuan-Jen

    2013-04-01

    A gel dosimeter is a three-dimensional (3-D) device that is used in radiotherapy. It is more efficient than traditional one-dimensional and two-dimensional dosimeters because it can be used in complicated radiation therapy applications. However, achieving temporal and spatial stability with gel dosimeters remains challenging in clinical applications because the fabrication process affects the polymerization reaction during irradiation. This study investigated the dose profile variation of an N-isopropyl acrylamide (NIPAM) polymer gel dosimeter by using the 3-D optical computed tomography scanner OCTOPUS™ 10X (MGS Research Inc.). Two acrylic containers (diameter = 10 cm, height = 10 cm; and diameter = 15 cm, height = 15 cm) filled with polymer gel (gelatin: 5%, NIPAM: 5%, Bis: 3%, THPC: 5 mM) were irradiated by using intensity-modulated radiotherapy (SIEMENS Oncor Impression, 6 MV photon beam). The treatment field was a 3 cm × 3 cm square field, and the prescribed dose was 5 Gy. The results of the reconstructed line profile showed that the uncertainty of the non-irradiated gel is less than 1.3% when the container with a 10 cm diameter was cooled in a refrigerator with a water bath. The maximum uncertainties of the irradiated gel at 24 h, 48 h, and 72 h post-irradiation were 2.9%, 2.9%, and 3.1%, respectively. However, the maximum uncertainty of the non-irradiated gel dosimeter increased to 3% when the container with a 15 cm diameter was cooled in the same refrigerator. After irradiation, the maximum uncertainties of the irradiated gel at 24 h, 48 h, and 72 h post-irradiation were 13.1%, 13.7%, and 12.95%, respectively. The uncertainty differences for gels in different container sizes were attributed to the different cooling rates experienced by the gels. The time required for the large gel container to cool in the refrigerator was more than 10 h, whereas the cooling process only took 4.2 h for the gel in the small container. This time difference produced different temperature histories for the gels and may result in changes in gel sensitivity. Given the thermally induced pre-radiation polymerization, the time difference resulted in a deviation in dose profiles. This study shows that thermal control during gel preparation should be carefully performed for clinical applications to achieve a more accurate dose distribution in 3-D image reconstruction.

  10. Intercomparison and Uncertainty Assessment of Nine Evapotranspiration Estimates Over South America

    NASA Astrophysics Data System (ADS)

    Sörensson, Anna A.; Ruscica, Romina C.

    2018-04-01

    This study examines the uncertainties and the representations of anomalies of a set of evapotranspiration products over climatologically distinct regions of South America. The products, coming from land surface models, reanalysis, and remote sensing, are chosen from sources that are readily available to the community of users. The results show that the spatial patterns of maximum uncertainty differ among metrics, with dry regions showing maximum relative uncertainties of annual mean evapotranspiration, while energy-limited regions present maximum uncertainties in the representation of the annual cycle and monsoon regions in the representation of anomalous conditions. Furthermore, it is found that land surface models driven by observed atmospheric fields detect meteorological and agricultural droughts in dry regions unequivocally. The remote sensing products employed do not distinguish all agricultural droughts and this could be attributed to the forcing net radiation. The study also highlights important characteristics of individual data sets and recommends users to include assessments of sensitivity to evapotranspiration data sets in their studies, depending on region and nature of study to be conducted.

  11. Effects of variability in probable maximum precipitation patterns on flood losses

    NASA Astrophysics Data System (ADS)

    Zischg, Andreas Paul; Felder, Guido; Weingartner, Rolf; Quinn, Niall; Coxon, Gemma; Neal, Jeffrey; Freer, Jim; Bates, Paul

    2018-05-01

    The assessment of the impacts of extreme floods is important for dealing with residual risk, particularly for critical infrastructure management and for insurance purposes. Thus, modelling of the probable maximum flood (PMF) from probable maximum precipitation (PMP) by coupling hydrological and hydraulic models has gained interest in recent years. Herein, we examine whether variability in precipitation patterns exceeds or is below selected uncertainty factors in flood loss estimation and if the flood losses within a river basin are related to the probable maximum discharge at the basin outlet. We developed a model experiment with an ensemble of probable maximum precipitation scenarios created by Monte Carlo simulations. For each rainfall pattern, we computed the flood losses with a model chain and benchmarked the effects of variability in rainfall distribution with other model uncertainties. The results show that flood losses vary considerably within the river basin and depend on the timing and superimposition of the flood peaks from the basin's sub-catchments. In addition to the flood hazard component, the other components of flood risk, exposure, and vulnerability contribute remarkably to the overall variability. This leads to the conclusion that the estimation of the probable maximum expectable flood losses in a river basin should not be based exclusively on the PMF. Consequently, the basin-specific sensitivities to different precipitation patterns and the spatial organization of the settlements within the river basin need to be considered in the analyses of probable maximum flood losses.

  12. Calibrated intercepts for solar radiometers used in remote sensor calibration

    NASA Technical Reports Server (NTRS)

    Gellman, David I.; Biggar, Stuart F.; Slater, Philip N.; Bruegge, Carol J.

    1991-01-01

    Calibrated solar radiometer intercepts allow spectral optical depths to be determined for days with intermittently clear skies. This is of particular importance on satellite sensor calibration days that are cloudy except at the time of image acquisition. This paper describes the calibration of four solar radiometers using the Langley-Bouguer technique for data collected on days with a clear, stable atmosphere. Intercepts are determined with an uncertainty of less than six percent, corresponding to a maximum uncertainty of 0.06 in optical depth. The spread of voltage intercepts calculated in this process is carried through three methods of radiometric calibration of satellite sensors to yield an uncertainty in radiance at the top of the atmosphere of less than one percent associated with the uncertainty in solar radiometer intercepts for a range of ground reflectances.
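
    The Langley-Bouguer technique mentioned above rests on ln V = ln V0 - m*tau, where m is the airmass and V0 the extraterrestrial voltage intercept; a straight-line fit on a clear, stable day gives the calibrated intercept, after which a single observation yields the optical depth. A minimal sketch, with hypothetical airmass-voltage data and a synthetic optical depth of 0.12, is shown below; it is not the instrument-specific processing used in the paper.

    import numpy as np

    def langley_calibration(airmass, voltage):
        """Fit ln(V) = ln(V0) - m*tau to clear-day data (Langley-Bouguer method).

        Returns the extraterrestrial intercept V0, the optical depth tau, and
        the 1-sigma uncertainty of ln(V0) from the least-squares fit.
        """
        m = np.asarray(airmass, dtype=float)
        lnV = np.log(np.asarray(voltage, dtype=float))
        (slope, intercept), cov = np.polyfit(m, lnV, 1, cov=True)
        return np.exp(intercept), -slope, np.sqrt(cov[1, 1])

    # Hypothetical clear-morning Langley data (airmass 2-6, arbitrary voltage units).
    rng = np.random.default_rng(5)
    airmass = np.array([2.0, 2.5, 3.0, 3.5, 4.0, 5.0, 6.0])
    voltage = 9.8 * np.exp(-0.12 * airmass) * (1 + 0.005 * rng.standard_normal(airmass.size))
    V0, tau, sigma_lnV0 = langley_calibration(airmass, voltage)

    # With a calibrated intercept, a single observation on another day gives tau:
    tau_single = (np.log(V0) - np.log(7.0)) / 3.2   # hypothetical reading V = 7.0 at airmass 3.2
    print(V0, tau, sigma_lnV0, tau_single)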

  13. Bayesian Monte Carlo and Maximum Likelihood Approach for Uncertainty Estimation and Risk Management: Application to Lake Oxygen Recovery Model

    EPA Science Inventory

    Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...

  14. Two-point method uncertainty during control and measurement of cylindrical element diameters

    NASA Astrophysics Data System (ADS)

    Glukhov, V. I.; Shalay, V. V.; Radev, H.

    2018-04-01

    The article addresses the pressing problem of the reliability of measurements of the geometric specifications of technical products. Its purpose is to improve the quality of control of part linear sizes by the two-point measurement method, and its task is to investigate the methodical extended uncertainties in measuring the linear sizes of cylindrical elements. The investigation method is geometric modeling of the shape and location deviations of the element surfaces in a rectangular coordinate system. The studies were carried out for elements of various service uses, taking into account their informativeness, which corresponds to the kinematic pair classes of theoretical mechanics and to the number of degrees of freedom constrained by the datum element function. Cylindrical elements with informativeness of 4, 2, 1 and 0 (zero) were investigated. The uncertainties of two-point measurements were estimated by comparing the results of linear dimension measurements with the maximum and minimum functional diameters of the element material. Methodical uncertainty arises when cylindrical elements with maximum informativeness have shape deviations of the cut and curvature types, and it also arises when measuring the average size of an element for all types of shape deviations. The two-point measurement method cannot take into account the location deviations of a dimensional element, so its use for elements with less than maximum informativeness creates unacceptable methodical uncertainties in measurements of the maximum, minimum and mean linear dimensions. Similar methodical uncertainties also arise in the arbitration control of the linear dimensions of cylindrical elements with limit two-point gauges.

  15. In-flight thrust determination on a real-time basis

    NASA Technical Reports Server (NTRS)

    Ray, R. J.; Carpenter, T.; Sandlin, T.

    1984-01-01

    A real-time computer program was implemented on an F-15 jet fighter to monitor the in-flight engine performance of a Digital Electronic Engine Control (DEEC) F-100 engine. The application of two gas generator methods to calculate in-flight thrust in real time is described. A comparison was made between the actual results and those predicted by an engine model simulation. The percent difference between the two methods was compared to the predicted uncertainty based on instrumentation and model uncertainty, and agreed closely with the results found during altitude facility testing. Data were obtained from acceleration runs at various altitudes at maximum power settings with and without afterburner. Real-time in-flight thrust measurement was a major advance in flight test productivity and was accomplished with no loss in accuracy over previous post-flight methods.

  16. Brownian motion surviving in the unstable cubic potential and the role of Maxwell's demon

    NASA Astrophysics Data System (ADS)

    Ornigotti, Luca; Ryabov, Artem; Holubec, Viktor; Filip, Radim

    2018-03-01

    The trajectories of an overdamped particle in a highly unstable potential diverge so rapidly that the variance of position grows much faster than its mean. A description of the dynamics by moments is therefore not informative. Instead, we propose and analyze local, directly measurable characteristics, which overcome this limitation. We discuss the most probable particle position (the position of the maximum of the probability density) and the local uncertainty in an unstable cubic potential, V(x) ~ x^3, both in the transient regime and in the long-time limit. The maximum shifts against the acting force as a function of time and temperature. Simultaneously, the local uncertainty does not increase faster than the observable shift. In the long-time limit, the probability density naturally attains a quasistationary form. We interpret this process as a stabilization via the measurement-feedback mechanism, the Maxwell demon, which works as an entropy pump. The rules for measurement and feedback naturally arise from the basic properties of the unstable dynamics. All reported effects are inherent in any unstable system. Their detailed understanding will stimulate the development of stochastic engines and amplifiers and, later, their quantum counterparts.
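
    The behaviour described above can be reproduced qualitatively with a stochastic simulation: integrate the overdamped Langevin equation dx = -3x^2 dt + sqrt(2D) dW with the Euler-Maruyama scheme, discard trajectories that have escaped toward minus infinity, and read the most probable position off a histogram of the survivors. The parameter values, the escape threshold and the histogram binning below are arbitrary choices for illustration, not those of the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Overdamped Langevin dynamics in the unstable cubic potential V(x) = x^3:
    #   dx = -V'(x) dt + sqrt(2*D) dW,  with V'(x) = 3*x^2.
    D, dt, n_steps, n_traj = 0.5, 1e-3, 3000, 20000
    x = np.zeros(n_traj)                     # all particles start at x = 0
    alive = np.ones(n_traj, dtype=bool)      # trajectories that have not yet diverged

    for _ in range(n_steps):
        noise = rng.standard_normal(n_traj)
        x[alive] += (-3.0 * x[alive] ** 2) * dt + np.sqrt(2 * D * dt) * noise[alive]
        alive &= np.abs(x) < 50.0            # freeze trajectories escaping to -infinity

    # Most probable position = location of the histogram maximum over surviving paths,
    # a local characteristic that stays informative even though the variance blows up.
    hist, edges = np.histogram(x[alive], bins=200)
    x_mp = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
    print("surviving fraction:", alive.mean(), " most probable position:", x_mp)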

  17. Uncertainty quantification and sensitivity analysis with CASL Core Simulator VERA-CS

    DOE PAGES

    Brown, C. S.; Zhang, Hongbin

    2016-05-24

    Uncertainty quantification and sensitivity analysis are important for nuclear reactor safety design and analysis. A 2x2 fuel assembly core design was developed and simulated by the Virtual Environment for Reactor Applications, Core Simulator (VERA-CS), a coupled neutronics and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed, and a new toolkit was created to perform uncertainty quantification and sensitivity analysis with fourteen uncertain input parameters. The minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in the sensitivity analysis, and coolant inlet temperature was consistently the most influential parameter. The parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.
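
    A correlation-based sensitivity analysis of the kind described above can be sketched as follows: sample the uncertain inputs, evaluate a figure of merit for each sample, and compute Pearson and Spearman coefficients between each input and the output. The toy response function and the three stand-in parameters below are hypothetical and do not represent VERA-CS or the W-3 correlation.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 500

    # Hypothetical uncertain inputs (stand-ins for the fourteen VERA-CS parameters).
    inlet_temp = rng.normal(565.0, 2.0, n)       # K
    mass_flow = rng.normal(0.30, 0.01, n)        # kg/s per channel
    power = rng.normal(1.00, 0.02, n)            # relative assembly power

    # Toy figure of merit standing in for MDNBR (not a real thermal-hydraulic correlation).
    mdnbr = 3.5 - 0.02 * (inlet_temp - 565.0) - 1.5 * (power - 1.0) + 2.0 * (mass_flow - 0.30)
    mdnbr += rng.normal(0.0, 0.01, n)

    for name, values in [("inlet temperature", inlet_temp),
                         ("mass flow", mass_flow),
                         ("power", power)]:
        pearson = stats.pearsonr(values, mdnbr)[0]
        spearman = stats.spearmanr(values, mdnbr)[0]
        print(f"{name:18s}  Pearson {pearson:+.2f}  Spearman {spearman:+.2f}")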

  18. Uncertainties in estimating heart doses from 2D-tangential breast cancer radiotherapy.

    PubMed

    Lorenzen, Ebbe L; Brink, Carsten; Taylor, Carolyn W; Darby, Sarah C; Ewertz, Marianne

    2016-04-01

    We evaluated the accuracy of three methods of estimating radiation dose to the heart from two-dimensional tangential radiotherapy for breast cancer, as used in Denmark during 1982-2002. Three tangential radiotherapy regimens were reconstructed using CT-based planning scans for 40 patients with left-sided and 10 with right-sided breast cancer. Setup errors and organ motion were simulated using estimated uncertainties. For left-sided patients, mean heart dose was related to maximum heart distance in the medial field. For left-sided breast cancer, mean heart dose estimated from individual CT scans varied from <1 Gy to >8 Gy, and maximum dose from 5 to 50 Gy, for all three regimens, so that estimates based only on regimen had substantial uncertainty. When maximum heart distance was taken into account, the uncertainty was reduced and was comparable to the uncertainty of estimates based on individual CT scans. For right-sided breast cancer patients, mean heart dose based on individual CT scans was always <1 Gy and maximum dose always <5 Gy for all three regimens. The use of stored individual simulator films provides a method for estimating heart doses in left-tangential radiotherapy for breast cancer that is almost as accurate as estimates based on individual CT scans. Copyright © 2016. Published by Elsevier Ireland Ltd.

  19. A Global Sensitivity Analysis Method on Maximum Tsunami Wave Heights to Potential Seismic Source Parameters

    NASA Astrophysics Data System (ADS)

    Ren, Luchuan

    2015-04-01

    It is obvious that the uncertainties of the maximum tsunami wave heights in offshore areas stem partly from uncertainties in the potential seismic tsunami source parameters. A global sensitivity analysis method of the maximum tsunami wave heights to the potential seismic source parameters is put forward in this paper. The tsunami wave heights are calculated by COMCOT (the Cornell Multi-grid Coupled Tsunami Model), on the assumption that an earthquake with magnitude Mw 8.0 occurred at the northern fault segment along the Manila Trench and triggered a tsunami in the South China Sea. We select the simulated maximum tsunami wave heights at specific offshore sites to verify the validity of the method proposed in this paper. To rank the importance of the uncertainties of the potential seismic source parameters (the earthquake magnitude, focal depth, strike angle, dip angle, slip angle, etc.) in generating uncertainties of the maximum tsunami wave heights, we chose the Morris method to analyze the sensitivity of the maximum tsunami wave heights to these parameters and give several qualitative descriptions of their nonlinear or linear effects on the maximum tsunami wave heights. We then quantitatively analyze the sensitivity of the maximum tsunami wave heights to these parameters, and the interaction effects among the parameters, by means of the extended FAST method. The results show that the maximum tsunami wave heights are very sensitive to the earthquake magnitude, followed successively by the epicenter location, the strike angle and the dip angle; the interaction effects between the sensitive parameters are very pronounced at specific offshore sites; and, for the same group of parameters, the importance ordering in generating uncertainties of the maximum tsunami wave heights differs among offshore sites. These results are helpful for a deeper understanding of the relationship between the tsunami wave heights and the seismic tsunami source parameters. Keywords: global sensitivity analysis; tsunami wave height; potential seismic tsunami source parameter; Morris method; extended FAST method

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M

    This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture and evaporation for the historical period (1976-2005), and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation, because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.

  1. Parameter uncertainty analysis of a biokinetic model of caesium

    DOE PAGES

    Li, W. B.; Klein, W.; Blanchardon, Eric; ...

    2014-04-17

    Parameter uncertainties for the biokinetic model of caesium (Cs) developed by Leggett et al. were inventoried and evaluated. The methods of parameter uncertainty analysis were used to assess the uncertainties of model predictions under assumed model parameter uncertainties and distributions. Furthermore, the importance of individual model parameters was assessed by means of sensitivity analysis. The calculated uncertainties of model predictions were compared with human data of Cs measured in blood and in the whole body. It was found that propagating the derived uncertainties in model parameter values reproduced the range of bioassay data observed in human subjects at different times after intake. The maximum ranges, expressed as uncertainty factors (UFs) (defined as the square root of the ratio between the 97.5th and 2.5th percentiles), of blood clearance, whole-body retention and urinary excretion of Cs predicted at early times after intake were, respectively: 1.5, 1.0 and 2.5 at the first day; 1.8, 1.1 and 2.4 at Day 10; and 1.8, 2.0 and 1.8 at Day 100. For late times (1000 d) after intake, the UFs increased to 43, 24 and 31, respectively. The model parameters of the transfer rates between kidneys and blood and between muscle and blood, and the rate of transfer from kidneys to urinary bladder content, are the most influential for the blood clearance and the whole-body retention of Cs. For the urinary excretion, the parameters of the transfer rates from urinary bladder content to urine and from kidneys to urinary bladder content have the greatest impact. The implication and effect on the estimated equivalent and effective doses of the larger uncertainty of 43 in whole-body retention at later times (after Day 500) will be explored in subsequent work in the framework of EURADOS.
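
    The uncertainty factor used above, UF = sqrt(P97.5 / P2.5), is easy to compute from Monte Carlo output. The sketch below applies it to hypothetical lognormal samples of whole-body retention at two times after intake; the spreads are invented for illustration and are not the propagated Leggett-model distributions.

    import numpy as np

    def uncertainty_factor(samples):
        """UF = sqrt(97.5th percentile / 2.5th percentile), as defined in the abstract."""
        p2_5, p97_5 = np.percentile(samples, [2.5, 97.5])
        return np.sqrt(p97_5 / p2_5)

    # Hypothetical Monte Carlo predictions of whole-body retention at two times
    # after intake (lognormal spreads chosen for illustration only).
    rng = np.random.default_rng(42)
    retention_day10 = rng.lognormal(mean=np.log(0.8), sigma=0.05, size=10000)
    retention_day1000 = rng.lognormal(mean=np.log(0.01), sigma=1.6, size=10000)

    print("UF at day 10:  ", round(uncertainty_factor(retention_day10), 1))
    print("UF at day 1000:", round(uncertainty_factor(retention_day1000), 1))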

  2. Determining the Uncertainty of X-Ray Absorption Measurements

    PubMed Central

    Wojcik, Gary S.

    2004-01-01

    X-ray absorption (or more properly, x-ray attenuation) techniques have been applied to study the moisture movement in and moisture content of materials like cement paste, mortar, and wood. An increase in the number of x-ray counts with time at a location in a specimen may indicate a decrease in moisture content. The uncertainty of measurements from an x-ray absorption system, which must be known to properly interpret the data, is often assumed to be the square root of the number of counts, as in a Poisson process. No detailed studies have heretofore been conducted to determine the uncertainty of x-ray absorption measurements or the effect of averaging data on the uncertainty. In this study, the Poisson estimate was found to adequately approximate normalized root mean square errors (a measure of uncertainty) of counts for point measurements and profile measurements of water specimens. The Poisson estimate, however, was not reliable in approximating the magnitude of the uncertainty when averaging data from paste and mortar specimens. Changes in uncertainty from differing averaging procedures were well-approximated by a Poisson process. The normalized root mean square errors decreased when the x-ray source intensity, integration time, collimator size, and number of scanning repetitions increased. Uncertainties in mean paste and mortar count profiles were kept below 2 % by averaging vertical profiles at horizontal spacings of 1 mm or larger with counts per point above 4000. Maximum normalized root mean square errors did not exceed 10 % in any of the tests conducted. PMID:27366627
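
    The Poisson estimate discussed above can be checked numerically: simulate repeated counts at a fixed true mean, compute the normalized root mean square error, and compare it with 1/sqrt(N). The sketch below uses an arbitrary mean of 4000 counts and 200 repetitions; it mimics the comparison in spirit only and is not the authors' analysis of measured x-ray data.

    import numpy as np

    rng = np.random.default_rng(7)

    def normalized_rms_error(counts):
        """Root-mean-square deviation from the mean, relative to the mean."""
        counts = np.asarray(counts, dtype=float)
        return np.sqrt(np.mean((counts - counts.mean()) ** 2)) / counts.mean()

    # Repeated measurements at one location with a true mean of 4000 counts.
    true_mean = 4000
    counts = rng.poisson(true_mean, size=200)

    measured = normalized_rms_error(counts)
    poisson_estimate = 1.0 / np.sqrt(true_mean)   # sqrt(N)/N for a Poisson process
    print(f"measured: {measured:.4f}   Poisson estimate: {poisson_estimate:.4f}")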

  3. Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoneking, M.R.; Den Hartog, D.J.

    1996-06-01

    The fitting of data by χ²-minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal level (e.g., a Thomson scattering diagnostic), the uncertainties follow a Poisson distribution. The authors have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function, using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. They compare this method with a χ²-minimization routine applied to both simulated and real data. Differences in the returned fits are greater at low signal level (less than approximately 20 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers.
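
    The contrast between the two fitting criteria can be made concrete with a small sketch: fit a Gaussian-plus-background model to simulated low-count data once by minimizing the Poisson negative log-likelihood and once by minimizing a conventional chi-square with sigma = sqrt(N). The model, the simulated counts and the optimizer choice below are illustrative assumptions, not the authors' algorithm.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)
    x = np.linspace(-5, 5, 40)

    def model(params, x):
        amp, center, width, background = params
        return background + amp * np.exp(-0.5 * ((x - center) / width) ** 2)

    # Simulated low-count spectrum (a few counts per bin).
    true = (6.0, 0.5, 1.2, 1.0)
    data = rng.poisson(model(true, x))

    def neg_log_likelihood(params):
        mu = model(params, x)
        if not np.all(np.isfinite(mu)) or np.any(mu <= 0):
            return np.inf
        # Poisson log-likelihood up to a constant: sum(d*ln(mu) - mu)
        return np.sum(mu - data * np.log(mu))

    def chi_square(params):
        mu = model(params, x)
        # Conventional chi-square with sigma^2 = max(d, 1)
        return np.sum((data - mu) ** 2 / np.maximum(data, 1))

    start = (5.0, 0.0, 1.0, 1.0)
    fit_ml = minimize(neg_log_likelihood, start, method="Nelder-Mead")
    fit_chi2 = minimize(chi_square, start, method="Nelder-Mead")
    print("max-likelihood:", np.round(fit_ml.x, 2))
    print("chi-square:    ", np.round(fit_chi2.x, 2))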

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luis, Alfredo

    We show within a very simple framework that different measures of fluctuations lead to uncertainty relations resulting in contradictory conclusions. More specifically we focus on Tsallis and Renyi entropic uncertainty relations and we get that the minimum joint uncertainty states for some fluctuation measures are the maximum joint uncertainty states of other fluctuation measures, and vice versa.

  5. Estimating rate uncertainty with maximum likelihood: differences between power-law and flicker–random-walk models

    USGS Publications Warehouse

    Langbein, John O.

    2012-01-01

    Recent studies have documented that global positioning system (GPS) time series of position estimates have temporal correlations which have been modeled as a combination of power-law and white noise processes. When estimating quantities such as a constant rate from GPS time series data, the estimated uncertainties on these quantities are more realistic when using a noise model that includes temporal correlations than when simply assuming temporally uncorrelated noise. However, the choice of the specific representation of correlated noise can affect the estimate of uncertainty. For many GPS time series, the background noise can be represented either (1) as a sum of flicker and random-walk noise or (2) as a power-law noise model that represents an average of the flicker and random-walk noise. For instance, if the underlying noise model is a combination of flicker and random-walk noise, then incorrectly choosing the power-law model could underestimate the rate uncertainty by a factor of two. Distinguishing between the two alternate noise models is difficult since the flicker component can dominate the assessment of the noise properties because it is spread over a significant portion of the measurable frequency band. Yet, although not necessarily detectable, the random-walk component can be a major constituent of the estimated rate uncertainty. Nonetheless, it is possible to determine the upper bound on the random-walk noise.

  6. Pretest uncertainty analysis for chemical rocket engine tests

    NASA Technical Reports Server (NTRS)

    Davidian, Kenneth J.

    1987-01-01

    A parametric pretest uncertainty analysis has been performed for a chemical rocket engine test at a unique 1000:1 area ratio altitude test facility. Results from the parametric study provide the error limits required in order to maintain a maximum uncertainty of 1 percent on specific impulse. Equations used in the uncertainty analysis are presented.
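
    For specific impulse, Isp = F/(m_dot * g0), the pretest uncertainty budget follows from standard propagation of independent relative errors, (dIsp/Isp)^2 = (dF/F)^2 + (dm_dot/m_dot)^2. The sketch below scans placeholder error limits on thrust and mass flow against a 1 percent Isp target; the numbers are not those of the referenced facility analysis.

    import numpy as np

    # Specific impulse Isp = F / (mdot * g0). For independent relative uncertainties,
    # (dIsp/Isp)^2 = (dF/F)^2 + (dmdot/mdot)^2, so the error limits on thrust and
    # mass flow must be budgeted to keep dIsp/Isp below the target.
    def isp_relative_uncertainty(rel_thrust, rel_mass_flow):
        return np.sqrt(rel_thrust ** 2 + rel_mass_flow ** 2)

    # Placeholder error limits (fractions), scanned parametrically:
    for rel_f in (0.004, 0.006, 0.008):
        for rel_m in (0.004, 0.006, 0.008):
            u = isp_relative_uncertainty(rel_f, rel_m)
            flag = "ok" if u <= 0.01 else "exceeds 1% target"
            print(f"dF/F={rel_f:.3f}  dm/m={rel_m:.3f}  dIsp/Isp={u:.4f}  {flag}")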

  7. An ensemble-based dynamic Bayesian averaging approach for discharge simulations using multiple global precipitation products and hydrological models

    NASA Astrophysics Data System (ADS)

    Qi, Wei; Liu, Junguo; Yang, Hong; Sweetapple, Chris

    2018-03-01

    Global precipitation products are very important datasets in flow simulations, especially in poorly gauged regions. Uncertainties resulting from precipitation products, hydrological models and their combinations vary with time and data magnitude, and undermine their application to flow simulations. However, previous studies have not quantified these uncertainties individually and explicitly. This study developed an ensemble-based dynamic Bayesian averaging approach (e-Bay) for deterministic discharge simulations using multiple global precipitation products and hydrological models. In this approach, the joint probability of precipitation products and hydrological models being correct is quantified based on uncertainties in maximum and mean estimation, posterior probability is quantified as functions of the magnitude and timing of discharges, and the law of total probability is implemented to calculate expected discharges. Six global fine-resolution precipitation products and two hydrological models of different complexities are included in an illustrative application. e-Bay can effectively quantify uncertainties and therefore generate better deterministic discharges than traditional approaches (weighted average methods with equal and varying weights and maximum likelihood approach). The mean Nash-Sutcliffe Efficiency values of e-Bay are up to 0.97 and 0.85 in training and validation periods respectively, which are at least 0.06 and 0.13 higher than traditional approaches. In addition, with increased training data, assessment criteria values of e-Bay show smaller fluctuations than traditional approaches and its performance becomes outstanding. The proposed e-Bay approach bridges the gap between global precipitation products and their pragmatic applications to discharge simulations, and is beneficial to water resources management in ungauged or poorly gauged regions across the world.

  8. Predicting the magnetic vectors within coronal mass ejections arriving at Earth: 2. Geomagnetic response

    NASA Astrophysics Data System (ADS)

    Savani, N. P.; Vourlidas, A.; Richardson, I. G.; Szabo, A.; Thompson, B. J.; Pulkkinen, A.; Mays, M. L.; Nieves-Chinchilla, T.; Bothmer, V.

    2017-02-01

    This is a companion to Savani et al. (2015) that discussed how a first-order prediction of the internal magnetic field of a coronal mass ejection (CME) may be made from observations of its initial state at the Sun for space weather forecasting purposes (Bothmer-Schwenn scheme (BSS) model). For eight CME events, we investigate how uncertainties in their predicted magnetic structure influence predictions of the geomagnetic activity. We use an empirical relationship between the solar wind plasma drivers and Kp index together with the inferred magnetic vectors, to make a prediction of the time variation of Kp (Kp(BSS)). We find a 2σ uncertainty range on the magnetic field magnitude (|B|) provides a practical and convenient solution for predicting the uncertainty in geomagnetic storm strength. We also find the estimated CME velocity is a major source of error in the predicted maximum Kp. The time variation of Kp(BSS) is important for predicting periods of enhanced and maximum geomagnetic activity, driven by southerly directed magnetic fields, and periods of lower activity driven by northerly directed magnetic field. We compare the skill score of our model to a number of other forecasting models, including the NOAA/Space Weather Prediction Center (SWPC) and Community Coordinated Modeling Center (CCMC)/SWRC estimates. The BSS model was the most unbiased prediction model, while the other models predominately tended to significantly overforecast. The True skill score of the BSS prediction model (TSS = 0.43 ± 0.06) exceeds the results of two baseline models and the NOAA/SWPC forecast. The BSS model prediction performed equally with CCMC/SWRC predictions while demonstrating a lower uncertainty.
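
    The True Skill Score quoted above is computed from a 2x2 contingency table of forecast versus observed events. The sketch below shows the calculation with invented counts for a hypothetical storm / no-storm threshold; the actual verification in the paper uses the observed Kp record.

    def true_skill_score(hits, misses, false_alarms, correct_negatives):
        """TSS = probability of detection - probability of false detection."""
        pod = hits / (hits + misses)
        pofd = false_alarms / (false_alarms + correct_negatives)
        return pod - pofd

    # Hypothetical contingency table for "Kp >= 6 within the forecast window".
    print(true_skill_score(hits=9, misses=5, false_alarms=3, correct_negatives=25))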

  9. A measure of uncertainty regarding the interval constraint of normal mean elicited by two stages of a prior hierarchy.

    PubMed

    Kim, Hea-Jung

    2014-01-01

    This paper considers a hierarchical screened Gaussian model (HSGM) for Bayesian inference of normal models when an interval constraint in the mean parameter space needs to be incorporated in the modeling but when such a restriction is uncertain. An objective measure of the uncertainty, regarding the interval constraint, accounted for by using the HSGM is proposed for the Bayesian inference. For this purpose, we derive a maximum entropy prior of the normal mean, eliciting the uncertainty regarding the interval constraint, and then obtain the uncertainty measure by considering the relationship between the maximum entropy prior and the marginal prior of the normal mean in the HSGM. A Bayesian estimation procedure for the HSGM is developed, and two numerical illustrations pertaining to the properties of the uncertainty measure are provided.

  10. How does uncertainty shape patient experience in advanced illness? A secondary analysis of qualitative data.

    PubMed

    Etkind, Simon Noah; Bristowe, Katherine; Bailey, Katharine; Selman, Lucy Ellen; Murtagh, Fliss Em

    2017-02-01

    Uncertainty is common in advanced illness but is infrequently studied in this context. If poorly addressed, uncertainty can lead to adverse patient outcomes. We aimed to understand patient experiences of uncertainty in advanced illness and develop a typology of patients' responses and preferences to inform practice. Secondary analysis of qualitative interview transcripts. Studies were assessed for inclusion and interviews were sampled using maximum-variation sampling. Analysis used a thematic approach with 10% of coding cross-checked to enhance reliability. Qualitative interviews from six studies including patients with heart failure, chronic obstructive pulmonary disease, renal disease, cancer and liver failure. A total of 30 transcripts were analysed. Median age was 75 (range, 43-95), 12 patients were women. The impact of uncertainty was frequently discussed: the main related themes were engagement with illness, information needs, patient priorities and the period of time that patients mainly focused their attention on (temporal focus). A typology of patient responses to uncertainty was developed from these themes. Uncertainty influences patient experience in advanced illness through affecting patients' information needs, preferences and future priorities for care. Our typology aids understanding of how patients with advanced illness respond to uncertainty. Assessment of these three factors may be a useful starting point to guide clinical assessment and shared decision making.

  11. Uncertainty quantification of CO₂ saturation estimated from electrical resistance tomography data at the Cranfield site

    DOE PAGES

    Yang, Xianjin; Chen, Xiao; Carrigan, Charles R.; ...

    2014-06-03

    A parametric bootstrap approach is presented for uncertainty quantification (UQ) of CO₂ saturation derived from electrical resistance tomography (ERT) data collected at the Cranfield, Mississippi (USA) carbon sequestration site. There are many sources of uncertainty in ERT-derived CO₂ saturation, but we focus on how the ERT observation errors propagate to the estimated CO₂ saturation in a nonlinear inversion process. Our UQ approach consists of three steps. We first estimated the observational errors from a large number of reciprocal ERT measurements. The second step was to invert the pre-injection baseline data, and the resulting resistivity tomograph was used as the prior information for nonlinear inversion of time-lapse data. We assigned a 3% random noise to the baseline model. Finally, we used a parametric bootstrap method to obtain bootstrap CO₂ saturation samples by deterministically solving a nonlinear inverse problem many times with resampled data and resampled baseline models. Then the mean and standard deviation of CO₂ saturation were calculated from the bootstrap samples. We found that the maximum standard deviation of CO₂ saturation was around 6%, with a corresponding maximum saturation of 30%, for a data set collected 100 days after injection began. There was no apparent spatial correlation between the mean and standard deviation of CO₂ saturation, but the standard deviation values increased with time as the saturation increased. The uncertainty in CO₂ saturation also depends on the ERT reciprocal error threshold used to identify and remove noisy data and on inversion constraints such as temporal roughness. Five hundred realizations requiring 3.5 h on a single 12-core node were needed for the nonlinear Monte Carlo inversion to arrive at stationary variances, while the Markov Chain Monte Carlo (MCMC) stochastic inverse approach may require days for a global search. This indicates that UQ of 2D or 3D ERT inverse problems can be performed on a laptop or desktop PC.
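
    The parametric bootstrap loop itself is generic and can be sketched independently of ERT: perturb the observed data with the estimated noise level, re-run the deterministic inversion, and summarize the spread of the recovered parameters. The toy exponential forward model and noise level below are placeholders standing in for the ERT inversion chain; they are not the site-specific workflow.

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(11)

    def forward(m, x):
        # Toy nonlinear forward model standing in for the ERT/saturation chain.
        return m[0] * np.exp(-m[1] * x)

    x = np.linspace(0, 5, 30)
    m_true = np.array([1.0, 0.6])
    sigma = 0.03                                   # estimated observational error
    d_obs = forward(m_true, x) + rng.normal(0, sigma, x.size)

    def invert(d):
        # Deterministic nonlinear inversion (least squares) of one data realization.
        res = least_squares(lambda m: forward(m, x) - d, x0=[0.5, 0.5])
        return res.x

    n_boot = 500
    boot = np.empty((n_boot, 2))
    for i in range(n_boot):
        d_resampled = d_obs + rng.normal(0, sigma, x.size)   # parametric resampling
        boot[i] = invert(d_resampled)

    print("bootstrap mean:", boot.mean(axis=0))
    print("bootstrap std: ", boot.std(axis=0))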

  12. Verification of the Uncertainty Principle by Using Diffraction of Light Waves

    ERIC Educational Resources Information Center

    Nikolic, D.; Nesic, Lj

    2011-01-01

    We described a simple idea for experimental verification of the uncertainty principle for light waves. We used single-slit diffraction of a laser beam to measure the angular width of the zero-order diffraction maximum and obtained the corresponding wave-number uncertainty. The uncertainty in position is assumed to be the slit width. For the…

  13. Critical Analysis of Dual-Probe Heat-Pulse Technique Applied to Measuring Thermal Diffusivity

    NASA Astrophysics Data System (ADS)

    Bovesecchi, G.; Coppa, P.; Corasaniti, S.; Potenza, M.

    2018-07-01

    The paper presents an analysis of the experimental parameters involved in applying the dual-probe heat-pulse technique, followed by a critical review of methods for processing thermal response data (e.g., maximum detection and nonlinear least-squares regression) and the consequently obtainable uncertainty. Glycerol was selected as the testing liquid, and its thermal diffusivity was evaluated over the temperature range from -20 °C to 60 °C. In addition, Monte Carlo simulation was used to assess the uncertainty propagation for maximum detection. It was concluded that the maximum-detection approach to processing thermal response data gives the results closest to the reference data, inasmuch as the nonlinear regression results are affected by major uncertainties due to partial correlation between the evaluated parameters. Moreover, interpolating the temperature data with a polynomial to find the maximum leads to a systematic difference between measured and reference data, as evidenced by the Monte Carlo simulations; through its correction, this systematic error can be reduced to a negligible value of about 0.8 %.

  14. Bayesian source tracking via focalization and marginalization in an uncertain Mediterranean Sea environment.

    PubMed

    Dosso, Stan E; Wilmut, Michael J; Nielsen, Peter L

    2010-07-01

    This paper applies Bayesian source tracking in an uncertain environment to Mediterranean Sea data, and investigates the resulting tracks and track uncertainties as a function of data information content (number of data time-segments, number of frequencies, and signal-to-noise ratio) and of prior information (environmental uncertainties and source-velocity constraints). To track low-level sources, acoustic data recorded for multiple time segments (corresponding to multiple source positions along the track) are inverted simultaneously. Environmental uncertainty is addressed by including unknown water-column and seabed properties as nuisance parameters in an augmented inversion. Two approaches are considered: Focalization-tracking maximizes the posterior probability density (PPD) over the unknown source and environmental parameters. Marginalization-tracking integrates the PPD over environmental parameters to obtain a sequence of joint marginal probability distributions over source coordinates, from which the most-probable track and track uncertainties can be extracted. Both approaches apply track constraints on the maximum allowable vertical and radial source velocity. The two approaches are applied for towed-source acoustic data recorded at a vertical line array at a shallow-water test site in the Mediterranean Sea where previous geoacoustic studies have been carried out.

  15. Characterizing the marker-dye correction for Gafchromic(®) EBT2 film: a comparison of three analysis methods.

    PubMed

    McCaw, Travis J; Micka, John A; Dewerd, Larry A

    2011-10-01

    Gafchromic(®) EBT2 film has a yellow marker dye incorporated into the active layer of the film that can be used to correct the film response for small variations in thickness. This work characterizes the effect of the marker-dye correction on the uniformity and uncertainty of dose measurements with EBT2 film. The effect of variations in time postexposure on the uniformity of EBT2 is also investigated. EBT2 films were used to measure the flatness of a (60)Co field to provide a high-spatial resolution evaluation of the film uniformity. As a reference, the flatness of the (60)Co field was also measured with Kodak EDR2 films. The EBT2 films were digitized with a flatbed document scanner 24, 48, and 72 h postexposure, and the images were analyzed using three methods: (1) the manufacturer-recommended marker-dye correction, (2) an in-house marker-dye correction, and (3) a net optical density (OD) measurement in the red color channel. The field flatness was calculated from orthogonal profiles through the center of the field using each analysis method, and the results were compared with the EDR2 measurements. Uncertainty was propagated through a dose calculation for each analysis method. The change in the measured field flatness for increasing times postexposure was also determined. Both marker-dye correction methods improved the field flatness measured with EBT2 film relative to the net OD method, with a maximum improvement of 1% using the manufacturer-recommended correction. However, the manufacturer-recommended correction also resulted in a dose uncertainty an order of magnitude greater than the other two methods. The in-house marker-dye correction lowered the dose uncertainty relative to the net OD method. The measured field flatness did not exhibit any unidirectional change with increasing time postexposure and showed a maximum change of 0.3%. The marker dye in EBT2 can be used to improve the response uniformity of the film. Depending on the film analysis method used, however, application of a marker-dye correction can improve or degrade the dose uncertainty relative to the net OD method. The uniformity of EBT2 was found to be independent of the time postexposure.

  16. The Impact of Reconstruction Methods, Phylogenetic Uncertainty and Branch Lengths on Inference of Chromosome Number Evolution in American Daisies (Melampodium, Asteraceae)

    PubMed Central

    McCann, Jamie; Stuessy, Tod F.; Villaseñor, Jose L.; Weiss-Schneeweiss, Hanna

    2016-01-01

    Chromosome number change (polyploidy and dysploidy) plays an important role in plant diversification and speciation. Investigating chromosome number evolution commonly entails ancestral state reconstruction performed within a phylogenetic framework, which is, however, prone to uncertainty, whose effects on evolutionary inferences are insufficiently understood. Using the chromosomally diverse plant genus Melampodium (Asteraceae) as model group, we assess the impact of reconstruction method (maximum parsimony, maximum likelihood, Bayesian methods), branch length model (phylograms versus chronograms) and phylogenetic uncertainty (topological and branch length uncertainty) on the inference of chromosome number evolution. We also address the suitability of the maximum clade credibility (MCC) tree as single representative topology for chromosome number reconstruction. Each of the listed factors causes considerable incongruence among chromosome number reconstructions. Discrepancies between inferences on the MCC tree from those made by integrating over a set of trees are moderate for ancestral chromosome numbers, but severe for the difference of chromosome gains and losses, a measure of the directionality of dysploidy. Therefore, reliance on single trees, such as the MCC tree, is strongly discouraged and model averaging, taking both phylogenetic and model uncertainty into account, is recommended. For studying chromosome number evolution, dedicated models implemented in the program ChromEvol and ordered maximum parsimony may be most appropriate. Chromosome number evolution in Melampodium follows a pattern of bidirectional dysploidy (starting from x = 11 to x = 9 and x = 14, respectively) with no prevailing direction. PMID:27611687

  17. The Impact of Reconstruction Methods, Phylogenetic Uncertainty and Branch Lengths on Inference of Chromosome Number Evolution in American Daisies (Melampodium, Asteraceae).

    PubMed

    McCann, Jamie; Schneeweiss, Gerald M; Stuessy, Tod F; Villaseñor, Jose L; Weiss-Schneeweiss, Hanna

    2016-01-01

    Chromosome number change (polyploidy and dysploidy) plays an important role in plant diversification and speciation. Investigating chromosome number evolution commonly entails ancestral state reconstruction performed within a phylogenetic framework, which is, however, prone to uncertainty, whose effects on evolutionary inferences are insufficiently understood. Using the chromosomally diverse plant genus Melampodium (Asteraceae) as model group, we assess the impact of reconstruction method (maximum parsimony, maximum likelihood, Bayesian methods), branch length model (phylograms versus chronograms) and phylogenetic uncertainty (topological and branch length uncertainty) on the inference of chromosome number evolution. We also address the suitability of the maximum clade credibility (MCC) tree as single representative topology for chromosome number reconstruction. Each of the listed factors causes considerable incongruence among chromosome number reconstructions. Discrepancies between inferences on the MCC tree from those made by integrating over a set of trees are moderate for ancestral chromosome numbers, but severe for the difference of chromosome gains and losses, a measure of the directionality of dysploidy. Therefore, reliance on single trees, such as the MCC tree, is strongly discouraged and model averaging, taking both phylogenetic and model uncertainty into account, is recommended. For studying chromosome number evolution, dedicated models implemented in the program ChromEvol and ordered maximum parsimony may be most appropriate. Chromosome number evolution in Melampodium follows a pattern of bidirectional dysploidy (starting from x = 11 to x = 9 and x = 14, respectively) with no prevailing direction.

  18. Maximum Entropy/Optimal Projection (MEOP) control design synthesis: Optimal quantification of the major design tradeoffs

    NASA Technical Reports Server (NTRS)

    Hyland, D. C.; Bernstein, D. S.

    1987-01-01

    The underlying philosophy and motivation of the optimal projection/maximum entropy (OP/ME) stochastic modeling and reduced control design methodology for high order systems with parameter uncertainties are discussed. The OP/ME design equations for reduced-order dynamic compensation including the effect of parameter uncertainties are reviewed. The application of the methodology to several Large Space Structures (LSS) problems of representative complexity is illustrated.

  19. Role of turbulence fluctuations on uncertainties of acoustic Doppler current profiler discharge measurements

    USGS Publications Warehouse

    Tarrab, Leticia; Garcia, Carlos M.; Cantero, Mariano I.; Oberg, Kevin

    2012-01-01

    This work presents a systematic analysis quantifying the role of the presence of turbulence fluctuations on uncertainties (random errors) of acoustic Doppler current profiler (ADCP) discharge measurements from moving platforms. Data sets of three-dimensional flow velocities with high temporal and spatial resolution were generated from direct numerical simulation (DNS) of turbulent open channel flow. Dimensionless functions relating parameters quantifying the uncertainty in discharge measurements due to flow turbulence (relative variance and relative maximum random error) to sampling configuration were developed from the DNS simulations and then validated with field-scale discharge measurements. The validated functions were used to evaluate the role of the presence of flow turbulence fluctuations on uncertainties in ADCP discharge measurements. The results of this work indicate that random errors due to the flow turbulence are significant when: (a) a low number of transects is used for a discharge measurement, and (b) measurements are made in shallow rivers using high boat velocity (short time for the boat to cross a flow turbulence structure).

  20. Probabilistic models in human sensorimotor control

    PubMed Central

    Wolpert, Daniel M.

    2009-01-01

    Sensory and motor uncertainty form a fundamental constraint on human sensorimotor control. Bayesian decision theory (BDT) has emerged as a unifying framework to understand how the central nervous system performs optimal estimation and control in the face of such uncertainty. BDT has two components: Bayesian statistics and decision theory. Here we review Bayesian statistics and show how it applies to estimating the state of the world and our own body. Recent results suggest that when learning novel tasks we are able to learn the statistical properties of both the world and our own sensory apparatus so as to perform estimation using Bayesian statistics. We review studies which suggest that humans can combine multiple sources of information to form maximum likelihood estimates, can incorporate prior beliefs about possible states of the world so as to generate maximum a posteriori estimates and can use Kalman filter-based processes to estimate time-varying states. Finally, we review Bayesian decision theory in motor control and how the central nervous system processes errors to determine loss functions and optimal actions. We review results that suggest we plan movements based on statistics of our actions that result from signal-dependent noise on our motor outputs. Taken together these studies provide a statistical framework for how the motor system performs in the presence of uncertainty. PMID:17628731
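
    The optimal cue-combination result described above has a simple closed form for independent Gaussian cues: the maximum likelihood estimate is the inverse-variance weighted mean. The sketch below illustrates this with made-up numbers; it is not tied to any specific experiment in the review.

```python
import numpy as np

def combine_cues(estimates, variances):
    """Maximum likelihood fusion of independent Gaussian cues.

    Each cue i provides an estimate x_i with noise variance s_i^2; the ML
    combination weights each cue by its inverse variance (its reliability).
    """
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    x_ml = np.sum(weights * estimates) / np.sum(weights)
    var_ml = 1.0 / np.sum(weights)  # fused variance is never larger than any single cue's
    return x_ml, var_ml

# Illustrative example: visual and proprioceptive estimates of hand position (arbitrary units)
x_ml, var_ml = combine_cues([10.0, 12.0], [1.0, 4.0])
print(x_ml, var_ml)  # 10.4, 0.8 -> pulled toward the more reliable cue
```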

  1. Spatiotemporal analysis and mapping of oral cancer risk in changhua county (taiwan): an application of generalized bayesian maximum entropy method.

    PubMed

    Yu, Hwa-Lung; Chiang, Chi-Ting; Lin, Shu-De; Chang, Tsun-Kuo

    2010-02-01

    Incidence rate of oral cancer in Changhua County was the highest among the 23 counties of Taiwan in 2001. However, in health data analysis, crude or adjusted incidence rates of a rare event (e.g., cancer) for small populations often exhibit high variances and are, thus, less reliable. We proposed a generalized Bayesian Maximum Entropy (GBME) analysis for spatiotemporal disease mapping under conditions of considerable data uncertainty. GBME was used to study the oral cancer population incidence in Changhua County (Taiwan). Methodologically, GBME is based on an epistemic principles framework and generates spatiotemporal estimates of oral cancer incidence rates. It accounts for the multi-sourced uncertainty of the rates, including small-population effects, and for the composite space-time dependence of rare events in terms of an extended Poisson-based semivariogram. The results showed that the GBME analysis alleviates the noise in the oral cancer data arising from the small-population effect. Compared to the raw incidence data, maps of the GBME estimates can identify high-risk oral cancer regions in Changhua County, where the prevalence of betel quid chewing and cigarette smoking is relatively higher than in the rest of the county. The GBME method is a valuable tool for spatiotemporal disease mapping under conditions of uncertainty. 2010 Elsevier Inc. All rights reserved.

  2. Developing an Event-Tree Probabilistic Tsunami Inundation Model for NE Atlantic Coasts: Application to a Case Study

    NASA Astrophysics Data System (ADS)

    Omira, R.; Matias, L.; Baptista, M. A.

    2016-12-01

    This study constitutes a preliminary assessment of probabilistic tsunami inundation in the NE Atlantic region. We developed an event-tree approach to calculate the likelihood of tsunami flood occurrence and exceedance of a specific near-shore wave height for a given exposure time. Only tsunamis of tectonic origin are considered here, taking into account local, regional, and far-field sources. The approach used here consists of an event-tree method that gathers probability models for seismic sources, tsunami numerical modeling, and statistical methods. It also includes a treatment of aleatoric uncertainties related to source location and tidal stage. Epistemic uncertainties are not addressed in this study. The methodology is applied to the coastal test-site of Sines located in the NE Atlantic coast of Portugal. We derive probabilistic high-resolution maximum wave amplitudes and flood distributions for the study test-site considering 100- and 500-year exposure times. We find that the probability that maximum wave amplitude exceeds 1 m somewhere along the Sines coasts reaches about 60 % for an exposure time of 100 years and is up to 97 % for an exposure time of 500 years. The probability of inundation occurrence (flow depth >0 m) varies between 10 % and 57 %, and from 20 % up to 95 % for 100- and 500-year exposure times, respectively. No validation has been performed here with historical tsunamis. This paper illustrates a methodology through a case study, which is not an operational assessment.
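
    The exceedance probabilities quoted above can be related to an aggregated annual rate under the common assumption of Poissonian tsunami occurrence. The sketch below shows that conversion only; the event-tree aggregation over sources and tidal stages is not reproduced, and the rate value is purely illustrative.

```python
import numpy as np

def prob_exceedance(annual_rate, exposure_years):
    """P(at least one event exceeding the threshold during the exposure window),
    assuming occurrences follow a homogeneous Poisson process."""
    return 1.0 - np.exp(-annual_rate * exposure_years)

# Hypothetical aggregated rate of "maximum wave amplitude > 1 m" events per year
rate = 0.009
for T in (100, 500):
    print(T, "yr:", round(prob_exceedance(rate, T), 2))
# Modest annual rates translate into large exceedance probabilities
# over 100- and 500-year exposure times (~0.59 and ~0.99 here).
```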

  3. What drives uncertainty in model diagnoses of carbon dynamics in southern US forests: climate, vegetation, disturbance, or model parameters?

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Gu, H.; Williams, C. A.

    2017-12-01

    Results from terrestrial carbon cycle models have multiple sources of uncertainty, each with its own behavior and range. Their relative importance and how they combine have received little attention. This study investigates how various sources of uncertainty propagate, temporally and spatially, in CASA-Disturbance (CASA-D). CASA-D simulates the impact of climatic forcing and disturbance legacies on forest carbon dynamics in the following steps. First, we infer annual growth and mortality rates from measured biomass stocks (FIA) over time and from disturbance records (e.g., fire, harvest, bark beetle) to represent annual post-disturbance carbon flux trajectories across forest types and site productivity settings. Then, annual carbon fluxes are estimated from these trajectories using time since disturbance, which is inferred from biomass (NBCD 2000) and disturbance maps (NAFD, MTBS and ADS). Finally, we apply monthly climatic scalars derived from default CASA to distribute the annual carbon fluxes to each month. This study assesses carbon flux uncertainty from two sources: driving data, including climatic and forest biomass inputs, and the three most sensitive parameters in CASA-D, namely maximum light use efficiency, temperature sensitivity of soil respiration (Q10) and optimum temperature, identified using EFAST (Extended Fourier Amplitude Sensitivity Testing). We quantify model uncertainties from each source and report their relative importance in estimating the forest carbon sink/source in the southeastern United States from 2003 to 2010.

  4. Thermospheric mass density model error variance as a function of time scale

    NASA Astrophysics Data System (ADS)

    Emmert, J. T.; Sutton, E. K.

    2017-12-01

    In the increasingly crowded low-Earth orbit environment, accurate estimation of orbit prediction uncertainties is essential for collision avoidance. Poor characterization of such uncertainty can result in unnecessary and costly avoidance maneuvers (false positives) or disregard of a collision risk (false negatives). Atmospheric drag is a major source of orbit prediction uncertainty, and is particularly challenging to account for because it exerts a cumulative influence on orbital trajectories and is therefore not amenable to representation by a single uncertainty parameter. To address this challenge, we examine the variance of measured accelerometer-derived and orbit-derived mass densities with respect to predictions by thermospheric empirical models, using the data-minus-model variance as a proxy for model uncertainty. Our analysis focuses mainly on the power spectrum of the residuals, and we construct an empirical model of the variance as a function of time scale (from 1 hour to 10 years), altitude, and solar activity. We find that the power spectral density approximately follows a power-law process but with an enhancement near the 27-day solar rotation period. The residual variance increases monotonically with altitude between 250 and 550 km. There are two components to the variance dependence on solar activity: one component is 180 degrees out of phase (largest variance at solar minimum), and the other component lags 2 years behind solar maximum (largest variance in the descending phase of the solar cycle).

  5. Cosmic rays at the ankle: Composition studies using the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Younk, Patrick William

    The ankle is a flattening of the cosmic ray energy spectrum at approximately 10^18.5 eV. Its origin is unknown. This thesis investigates the nature of cosmic rays with energy near 10^18.5 eV, and it evaluates two phenomenological models for the ankle feature. Data from the Pierre Auger Observatory is used. Two important calibration studies for the Pierre Auger Observatory are presented: (1) A measurement of the time offset between the surface detector and the fluorescence detector, and (2) A measurement of the fluorescence telescope alignment. The uncertainty on the time offset measurement is 20 ns and the uncertainty on the fluorescence telescope alignment is 0.14°; both uncertainties are within the design specifications of the observatory. Studies to determine the cosmic ray composition mixture near the ankle are presented. Measurements of the average depth of shower maximum suggest that the average particle mass is gradually decreasing between 10^17.8 and 10^18.4 eV and that the average particle mass is steady or slightly increasing between 10^18.5 and 10^19.0 eV. Measurements of the average depth of shower maximum also suggest that the fractional abundance of intermediate weight nuclei such as carbon steadily increases from 10^18 to 10^19 eV. Between 10^18.5 and 10^19.0 eV, the correlation between the depth of shower maximum and the ground level muon density is consistent with a significant fractional abundance of both protons and intermediate weight nuclei. Two popular phenomenological models for the ankle are compared with the above composition results. The first model is that the ankle marks the intersection between a soft galactic spectrum and a hard extragalactic spectrum. The second model is that the ankle is part of a dip in the cosmic ray spectrum (the pair production dip) caused by the attenuation of protons as they travel through intergalactic space. It is demonstrated that the experimental results favor the first model.

  6. A fast algorithm to compute precise type-2 centroids for real-time control applications.

    PubMed

    Chakraborty, Sumantra; Konar, Amit; Ralescu, Anca; Pal, Nikhil R

    2015-02-01

    An interval type-2 fuzzy set (IT2 FS) is characterized by its upper and lower membership functions, which contain all possible embedded fuzzy sets and together are referred to as the footprint of uncertainty (FOU). The FOU results in a span of uncertainty measured in the defuzzified space and is determined by the positional difference of the centroids of all the embedded fuzzy sets taken together. This paper provides a closed-form formula to evaluate the span of uncertainty of an IT2 FS. The closed-form formula offers a precise measurement of the degree of uncertainty in an IT2 FS with a runtime complexity less than that of the classical iterative Karnik-Mendel algorithm and of other formulations employing the iterative Newton-Raphson algorithm. This paper also demonstrates a real-time control application using the proposed closed-form formula for centroids, achieving lower root mean square error and computational overhead than existing methods. Computer simulations for this real-time control application indicate that parallel realization of the IT2 defuzzification outperforms its competitors with respect to maximum overshoot even at high sampling rates. Furthermore, in the presence of measurement noise in the system (plant) states, the proposed IT2 FS based scheme outperforms its type-1 counterpart with respect to peak overshoot and root mean square error in the plant response.
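
    As a point of reference for the quantity being bounded, the sketch below computes the centroid interval [y_l, y_r] of a discretized IT2 FS by enumerating switch points, a brute-force alternative to the iterative Karnik-Mendel procedure; it is not the closed-form formula proposed in the paper, and the membership functions are invented for illustration.

```python
import numpy as np

def it2_centroid_bounds(x, lower, upper):
    """Centroid interval [y_l, y_r] of a discretized interval type-2 fuzzy set.

    x: sorted domain samples; lower/upper: lower and upper membership grades.
    Enumerates every switch point; the extreme embedded-set centroids are
    known to occur at one of these configurations.
    """
    x = np.asarray(x, float)
    lo = np.asarray(lower, float)
    up = np.asarray(upper, float)
    n = len(x)
    y_l, y_r = np.inf, -np.inf
    for k in range(n + 1):
        # y_l candidate: upper grades left of the switch point, lower grades to the right
        w_l = np.concatenate([up[:k], lo[k:]])
        # y_r candidate: lower grades left of the switch point, upper grades to the right
        w_r = np.concatenate([lo[:k], up[k:]])
        y_l = min(y_l, np.dot(x, w_l) / np.sum(w_l))
        y_r = max(y_r, np.dot(x, w_r) / np.sum(w_r))
    return y_l, y_r

x = np.linspace(0.0, 10.0, 11)
upper = np.exp(-0.5 * ((x - 5.0) / 2.5) ** 2)        # upper membership function
lower = 0.6 * np.exp(-0.5 * ((x - 5.0) / 1.5) ** 2)  # lower membership function
y_l, y_r = it2_centroid_bounds(x, lower, upper)
print(y_l, y_r, y_r - y_l)  # the span y_r - y_l quantifies the FOU uncertainty
```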

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, C. S.; Zhang, Hongbin

    Uncertainty quantification and sensitivity analysis are important for nuclear reactor safety design and analysis. A 2x2 fuel assembly core design was developed and simulated by the Virtual Environment for Reactor Applications, Core Simulator (VERA-CS) coupled neutronics and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed and a new toolkit was created to perform uncertainty quantification and sensitivity analysis with fourteen uncertain input parameters. Furthermore, the minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in the sensitivity analysis, and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.
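
    A minimal sketch of the kind of sampling-based correlation analysis described, using scipy's Pearson and Spearman coefficients on a toy response; the input names and the linear toy model merely stand in for the fourteen VERA-CS parameters and the MDNBR.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500

# Hypothetical uncertain inputs (stand-ins for the real VERA-CS parameters)
inlet_temp = rng.normal(565.0, 2.0, n)   # coolant inlet temperature [K]
power = rng.normal(1.0, 0.02, n)         # relative core power
flow = rng.normal(1.0, 0.03, n)          # relative coolant flow

# Toy figure of merit standing in for MDNBR (not a real correlation)
mdnbr = (2.0 - 0.05 * (inlet_temp - 565.0) - 1.0 * (power - 1.0) + 0.5 * (flow - 1.0)
         + rng.normal(0.0, 0.02, n))

for name, x in [("inlet_temp", inlet_temp), ("power", power), ("flow", flow)]:
    r, _ = stats.pearsonr(x, mdnbr)
    rho, _ = stats.spearmanr(x, mdnbr)
    print(f"{name:10s} Pearson={r:+.2f} Spearman={rho:+.2f}")
```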

  8. Uncertainty, Sensitivity Analysis, and Causal Identification in the Arctic using a Perturbed Parameter Ensemble of the HiLAT Climate Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunke, Elizabeth Clare; Urrego Blanco, Jorge Rolando; Urban, Nathan Mark

    Coupled climate models have a large number of input parameters that can affect output uncertainty. We conducted a sensitivity analysis of sea ice properties and Arctic-related climate variables to 5 parameters in the HiLAT climate model: air-ocean turbulent exchange parameter (C), conversion of water vapor to clouds (cldfrc_rhminl) and of ice crystals to snow (micro_mg_dcs), snow thermal conductivity (ksno), and maximum snow grain size (rsnw_mlt). We used an elementary effect (EE) approach to rank their importance for output uncertainty. EE is an extension of one-at-a-time sensitivity analyses, but it is more efficient in sampling multi-dimensional parameter spaces. We looked for emerging relationships among climate variables across the model ensemble, and used causal discovery algorithms to establish potential pathways for those relationships.
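
    A minimal sketch of the elementary-effect idea, assuming the simple randomized one-at-a-time form (the study's exact sampling design may differ); the toy function stands in for the climate model and the parameter bounds are arbitrary.

```python
import numpy as np

def elementary_effects(model, bounds, n_trajectories=20, delta=0.1, seed=0):
    """Randomized one-at-a-time elementary effects on the unit hypercube.

    bounds: (k, 2) array of parameter ranges; inputs are rescaled internally.
    Returns mu* (mean absolute elementary effect) per parameter, a common
    screening measure: larger mu* means a more influential parameter.
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, float)
    k = len(bounds)
    ee = np.zeros((n_trajectories, k))
    for t in range(n_trajectories):
        u = rng.uniform(0.0, 1.0 - delta, size=k)  # random base point in the unit cube
        f0 = model(bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0]))
        for i in range(k):
            u_i = u.copy()
            u_i[i] += delta                        # perturb one input at a time
            f1 = model(bounds[:, 0] + u_i * (bounds[:, 1] - bounds[:, 0]))
            ee[t, i] = (f1 - f0) / delta
    return np.abs(ee).mean(axis=0)

# Toy stand-in for a climate-model output that responds to some of 5 parameters
toy = lambda p: 2.0 * p[0] + 0.5 * p[1] ** 2 + 0.1 * p[2] * p[3]
print(elementary_effects(toy, bounds=[[0, 1]] * 5))
```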

  9. Interhemispheric ice-sheet synchronicity during the last glacial maximum

    USGS Publications Warehouse

    Weber, Michael E.; Clark, Peter U.; Ricken, Werner; Mitrovica, Jerry X.; Hostetler, Steven W.; Kuhn, Gerhard

    2011-01-01

    The timing of the last maximum extent of the Antarctic ice sheets relative to those in the Northern Hemisphere remains poorly understood. We develop a chronology for the Weddell Sea sector of the East Antarctic Ice Sheet that, combined with ages from other Antarctic ice-sheet sectors, indicates that the advance to and retreat from their maximum extent was, within dating uncertainties, synchronous with most sectors of Northern Hemisphere ice sheets. Surface climate forcing of Antarctic mass balance would probably cause an opposite response, whereby a warming climate would increase accumulation but not surface melting. Our new data support teleconnections involving sea-level forcing from Northern Hemisphere ice sheets and changes in North Atlantic deep-water formation and attendant heat flux to Antarctic grounding lines to synchronize the hemispheric ice sheets.

  10. Interhemispheric ice-sheet synchronicity during the Last Glacial Maximum.

    PubMed

    Weber, Michael E; Clark, Peter U; Ricken, Werner; Mitrovica, Jerry X; Hostetler, Steven W; Kuhn, Gerhard

    2011-12-02

    The timing of the last maximum extent of the Antarctic ice sheets relative to those in the Northern Hemisphere remains poorly understood. We develop a chronology for the Weddell Sea sector of the East Antarctic Ice Sheet that, combined with ages from other Antarctic ice-sheet sectors, indicates that the advance to and retreat from their maximum extent was, within dating uncertainties, synchronous with most sectors of Northern Hemisphere ice sheets. Surface climate forcing of Antarctic mass balance would probably cause an opposite response, whereby a warming climate would increase accumulation but not surface melting. Our new data support teleconnections involving sea-level forcing from Northern Hemisphere ice sheets and changes in North Atlantic deep-water formation and attendant heat flux to Antarctic grounding lines to synchronize the hemispheric ice sheets.

  11. Generalized uncertainty principle and the maximum mass of ideal white dwarfs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rashidi, Reza, E-mail: reza.rashidi@srttu.edu

    The effects of a generalized uncertainty principle on the structure of an ideal white dwarf star are investigated. The equation describing the equilibrium configuration of the star is a generalized form of the Lane–Emden equation. It is proved that the star always has a finite size. It is then argued that the maximum mass of such an ideal white dwarf tends to infinity, as opposed to the conventional case where it has a finite value.

  12. Phase correction and error estimation in InSAR time series analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Fattahi, H.; Amelung, F.

    2017-12-01

    During the last decade several InSAR time series approaches have been developed in response to the non-ideal acquisition strategies of SAR satellites, such as large spatial and temporal baselines and irregular acquisitions. The small baseline tubes and regular acquisitions of new SAR satellites such as Sentinel-1 allow us to form fully connected networks of interferograms and simplify the time series analysis into a weighted least squares inversion of an over-determined system. Such a robust inversion allows us to focus more on understanding the different components in InSAR time series and their uncertainties. We present an open-source Python-based package for InSAR time series analysis, called PySAR (https://yunjunz.github.io/PySAR/), with unique functionalities for obtaining unbiased ground displacement time series, geometrical and atmospheric correction of InSAR data, and quantification of InSAR uncertainty. Our implemented strategy contains several features, including: 1) improved spatial coverage using a coherence-based network of interferograms, 2) unwrapping error correction using phase closure or bridging, 3) tropospheric delay correction using weather models and empirical approaches, 4) DEM error correction, 5) optimal selection of the reference date and automatic outlier detection, 6) InSAR uncertainty due to the residual tropospheric delay, decorrelation and residual DEM error, and 7) the variance-covariance matrix of final products for geodetic inversion. We demonstrate the performance using SAR datasets acquired by Cosmo-SkyMed, TerraSAR-X, Sentinel-1 and ALOS/ALOS-2, with application to the highly non-linear volcanic deformation in Japan and Ecuador (figure 1). Our results show precursory deformation before the 2015 eruptions of Cotopaxi volcano, with a maximum uplift of 3.4 cm on the western flank (fig. 1b) and a standard deviation of 0.9 cm (fig. 1a), supporting the finding by Morales-Rivera et al. (2017, GRL), and a post-eruptive subsidence in the same area, with a maximum of -3 +/- 0.9 cm (fig. 1c). The time-series displacement map (fig. 2) shows highly non-linear deformation behavior, indicating a complicated magma propagation process during this eruption cycle.
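
    The core inversion step referred to above, turning a connected network of interferograms into a displacement time series by weighted least squares, can be sketched in a few lines; this is a generic illustration with synthetic data, not PySAR's implementation or API.

```python
import numpy as np

# Acquisition dates (indices) and the interferogram pairs connecting them
n_dates = 5
pairs = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 2), (1, 3), (2, 4)]

# Design matrix: each interferogram is the difference between two dates
# (date 0 is the reference, so we solve for dates 1..n-1)
A = np.zeros((len(pairs), n_dates - 1))
for k, (i, j) in enumerate(pairs):
    if j > 0: A[k, j - 1] = 1.0
    if i > 0: A[k, i - 1] = -1.0

# Synthetic truth and noisy observations with per-interferogram weights (e.g. coherence-based)
truth = np.array([0.0, 0.3, 0.8, 1.1, 1.6])  # cumulative displacement [cm]
rng = np.random.default_rng(1)
sigma = rng.uniform(0.02, 0.1, len(pairs))
obs = np.array([truth[j] - truth[i] for i, j in pairs]) + rng.normal(0, sigma)

# Weighted least squares: x = (A^T W A)^-1 A^T W d, with covariance (A^T W A)^-1
W = np.diag(1.0 / sigma**2)
N = A.T @ W @ A
x = np.linalg.solve(N, A.T @ W @ obs)
cov = np.linalg.inv(N)
print("estimated series:   ", np.concatenate([[0.0], x]))
print("1-sigma uncertainty:", np.concatenate([[0.0], np.sqrt(np.diag(cov))]))
```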

  13. Deciphering the evolution of the last Eurasian ice sheets

    NASA Astrophysics Data System (ADS)

    Hughes, Anna; Gyllencreutz, Richard; Mangerud, Jan; Svendsen, John Inge

    2016-04-01

    Glacial geologists need ice sheet-scale chronological reconstructions of former ice extent to set individual records in a wider context and to compare interpretations of ice sheet response to records of past environmental changes. Ice sheet modellers require empirical reconstructions of the size and volume of past ice sheets that are fully documented, specified in time and include uncertainty estimates for model validation or constraints. Motivated by these demands, in 2005 we started a project (Database of the Eurasian Deglaciation, DATED) to compile and archive all published dates relevant to constraining the build-up and retreat of the last Eurasian ice sheets, including the British-Irish, Scandinavian and Svalbard-Barents-Kara Seas ice sheets (BIIS, SIS and SBKIS respectively). Over 5000 dates were assessed for reliability and used together with published ice-sheet margin positions to reconstruct time-slice maps of the ice sheets' extent, with uncertainty bounds, every 1000 years between 25 and 10 kyr ago and at four additional periods back to 40 kyr ago. Ten years after the idea for a database was conceived, the first version of results (DATED-1) has now been released (Hughes et al. 2016). We observe that: i) both the BIIS and SBKIS achieved maximum extent, and commenced retreat, earlier than the larger SIS; ii) the eastern terrestrial margin of the SIS reached its maximum extent up to 7000 years later than the westernmost marine margin; iii) the combined maximum ice volume (~24 m sea-level equivalent) was reached c. 21 ka; iv) large uncertainties exist, predominantly across marine sectors (e.g. the timing of coalescence and separation of the SIS and BKIS) but also in well-studied areas due to conflicting yet equally robust data. In just three years since the DATED-1 census (1 January 2013), the volume of new information (from both dates and mapped glacial geomorphology) has grown significantly (~1000 new dates). Here, we present the DATED-1 results in the context of the climatic changes of the last glacial, discuss the implications of emerging post-census data, and describe plans for the next version of the database, DATED-2. Hughes, A. L. C., Gyllencreutz, R., Lohne, Ø. S., Mangerud, J., Svendsen, J. I. 2016: The last Eurasian ice sheets - a chronological database and time-slice reconstruction, DATED-1. Boreas, 45, 1-45. 10.1111/bor.12142

  14. Validation and uncertainty analysis of a pre-treatment 2D dose prediction model

    NASA Astrophysics Data System (ADS)

    Baeza, Jose A.; Wolfs, Cecile J. A.; Nijsten, Sebastiaan M. J. J. G.; Verhaegen, Frank

    2018-02-01

    Independent verification of complex treatment delivery with megavolt photon beam radiotherapy (RT) has been effectively used to detect and prevent errors. This work presents the validation and uncertainty analysis of a model that predicts 2D portal dose images (PDIs) without a patient or phantom in the beam. The prediction model is based on an exponential point dose model with separable primary and secondary photon fluence components. The model includes a scatter kernel, off-axis ratio map, transmission values and penumbra kernels for beam-delimiting components. These parameters were derived through a model fitting procedure supplied with point dose and dose profile measurements of radiation fields. The model was validated against a treatment planning system (TPS; Eclipse) and radiochromic film measurements for complex clinical scenarios, including volumetric modulated arc therapy (VMAT). Confidence limits on fitted model parameters were calculated based on simulated measurements. A sensitivity analysis was performed to evaluate the effect of the parameter uncertainties on the model output. For the maximum uncertainty, the maximum deviating measurement sets were propagated through the fitting procedure and the model. The overall uncertainty was assessed using all simulated measurements. The validation of the prediction model against the TPS and the film showed a good agreement, with on average 90.8% and 90.5% of pixels passing a (2%,2 mm) global gamma analysis respectively, with a low dose threshold of 10%. The maximum and overall uncertainty of the model is dependent on the type of clinical plan used as input. The results can be used to study the robustness of the model. A model for predicting accurate 2D pre-treatment PDIs in complex RT scenarios can be used clinically and its uncertainties can be taken into account.

  15. Connotative Meaning of Military Chat Communications

    DTIC Science & Technology

    2009-09-01

    humans recognize connotative cues expressing uncertainty, perception of personal threat, and urgency; formulate linguistic and non-linguistic means for...built a matrix of speech “cues” representative of uncertainty, perception of personal threat, and urgency, but also applied maximum entropy analysis...results. This project proposed to: (1) conduct a study of how humans recognize connotative cues expressing uncertainty, perception of personal

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Heng, E-mail: hengli@mdanderson.org; Zhu, X. Ronald; Zhang, Xiaodong

    Purpose: To develop and validate a novel delivery strategy for reducing the respiratory motion–induced dose uncertainty of spot-scanning proton therapy. Methods and Materials: The spot delivery sequence was optimized to reduce dose uncertainty. The effectiveness of the delivery sequence optimization was evaluated using measurements and patient simulation. One hundred ninety-one 2-dimensional measurements using different delivery sequences of a single-layer uniform pattern were obtained with a detector array on a 1-dimensional moving platform. Intensity modulated proton therapy plans were generated for 10 lung cancer patients, and dose uncertainties for different delivery sequences were evaluated by simulation. Results: Without delivery sequence optimization, the maximum absolute dose error can be up to 97.2% in a single measurement, whereas the optimized delivery sequence results in a maximum absolute dose error of ≤11.8%. In patient simulation, the optimized delivery sequence reduces the mean of fractional maximum absolute dose error compared with the regular delivery sequence by 3.3% to 10.6% (32.5-68.0% relative reduction) for different patients. Conclusions: Optimizing the delivery sequence can reduce dose uncertainty due to respiratory motion in spot-scanning proton therapy, assuming the 4-dimensional CT is a true representation of the patients' breathing patterns.

  17. Measurements for liquid rocket engine performance code verification

    NASA Technical Reports Server (NTRS)

    Praharaj, Sarat C.; Palko, Richard L.

    1986-01-01

    The goal of the rocket engine performance code verification tests is to obtain the specific impulse (Isp) with an accuracy of 0.25% or less. This needs to be done during the sequence of four related tests (two reactive and two hot gas simulation) to best utilize the loss separation technique recommended in this study. In addition to Isp, measurements of the input and output parameters for the codes are needed. This study has shown two things in regard to obtaining the Isp uncertainty within the 0.25% target. First, this target is generally not being realized at the present time, and second, the instrumentation and testing technology does exist to obtain this 0.25% uncertainty goal. However, achieving this goal will require carefully planned, designed, and conducted testing. In addition, the test-stand (or system) dynamics must be evaluated in the pre-test and post-test phases of the design of the experiment and the data analysis, respectively, always keeping in mind that a 0.25% overall uncertainty in Isp is targeted. A table gives the maximum allowable uncertainty required for obtaining Isp with 0.25% uncertainty, the currently-quoted instrument specification, and the present test uncertainty for the parameters. In general, it appears that measurement of the mass flow parameter within the required uncertainty may be the most difficult.

  18. Repeated decompositions reveal the stability of infomax decomposition of fMRI data

    PubMed Central

    Duann, Jeng-Ren; Jung, Tzyy-Ping; Sejnowski, Terrence J.; Makeig, Scott

    2010-01-01

    In this study, we decomposed 12 fMRI data sets from six subjects, each 101 times, using the infomax algorithm. The first decomposition was taken as a reference decomposition; the others were used to form a component matrix of 100 by 100 components. Equivalence relations between components in this matrix, defined as maximum spatial correlations to the components of the reference decomposition, were found by the Hungarian sorting method and used to form 100 equivalence classes for each data set. We then tested the reproducibility of the matched components in the equivalence classes using uncertainty measures based on component distributions, time courses, and ROC curves. Infomax ICA rarely failed to derive nearly the same components in different decompositions. Very few components per data set were poorly reproduced, even using vector angle uncertainty measures stricter than correlation and detection theory measures. PMID:17281453
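
    The matching step described above, pairing components across decompositions by their maximum absolute spatial correlation with a Hungarian assignment, can be sketched as follows; random data stand in for the fMRI component maps.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_comp, n_vox = 10, 2000

# Reference decomposition and a "repeat" that permutes, sign-flips and perturbs it
ref = rng.standard_normal((n_comp, n_vox))
perm = rng.permutation(n_comp)
rep = ref[perm] * rng.choice([-1, 1], size=(n_comp, 1)) + 0.2 * rng.standard_normal((n_comp, n_vox))

# Absolute spatial correlation between every reference/repeat component pair
ref_z = (ref - ref.mean(1, keepdims=True)) / ref.std(1, keepdims=True)
rep_z = (rep - rep.mean(1, keepdims=True)) / rep.std(1, keepdims=True)
corr = np.abs(ref_z @ rep_z.T) / n_vox

# Hungarian assignment: maximize total correlation (minimize its negative)
row, col = linear_sum_assignment(-corr)
print("recovered permutation correct:", np.array_equal(col, np.argsort(perm)))
print("matched |r| values:", np.round(corr[row, col], 2))
```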

  19. Image restoration, uncertainty, and information.

    PubMed

    Yu, F T

    1969-01-01

    Some of the physical interpretations of image restoration are discussed. From the theory of information, the unrealizability of an inverse filter can be explained by degradation of information, which is due to distortion of the recorded image. Image restoration is a time and space problem, which can be recognized from the theory of relativity (the problem of image restoration is related to Heisenberg's uncertainty principle in quantum mechanics). A detailed discussion of the relationship between information and energy is given. Two general results may be stated: (1) the restoration of the image from the distorted signal is possible only if it satisfies the detectability condition; however, the restored image can, at best, only approach the maximum allowable time criterion. (2) The restoration of an image by superimposing the distorted signal (due to smearing) is a physically unrealizable method; however, this restoration procedure may be achieved by the expenditure of an infinite amount of energy.

  20. Heuristic algorithms for the minmax regret flow-shop problem with interval processing times.

    PubMed

    Ćwik, Michał; Józefczyk, Jerzy

    2018-01-01

    An uncertain version of the permutation flow-shop problem with unlimited buffers and the makespan as a criterion is considered. The investigated parametric uncertainty is represented by given interval-valued processing times. The maximum regret is used for the evaluation of uncertainty. Consequently, the minmax regret discrete optimization problem is solved. Due to its high complexity, two relaxations are applied to simplify the optimization procedure. First of all, a greedy procedure is used for calculating the criterion's value, as this calculation is an NP-hard problem itself. Moreover, the lower bound is used instead of solving the internal deterministic flow-shop problem. A constructive heuristic algorithm is applied to the relaxed optimization problem. The algorithm is compared with previously elaborated heuristic algorithms based on the evolutionary and middle-interval approaches. The computational experiments showed the advantage of the constructive heuristic algorithm with regard to both the criterion and the computation time. The Wilcoxon paired-rank statistical test confirmed this conclusion.
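
    A minimal sketch of two building blocks mentioned above: the permutation flow-shop makespan recursion and a crude sampled estimate of the maximum regret over interval processing times (using exhaustive enumeration for the scenario optimum, viable only for tiny instances); the paper's constructive, evolutionary and middle-interval heuristics are not reproduced.

```python
import numpy as np
from itertools import permutations

def makespan(p, order):
    """Completion-time recursion for a permutation flow-shop.
    p: (n_jobs, n_machines) processing times; order: job sequence."""
    n_m = p.shape[1]
    c = np.zeros(n_m)
    for j in order:
        for m in range(n_m):
            c[m] = max(c[m], c[m - 1] if m > 0 else 0.0) + p[j, m]
    return c[-1]

def sampled_max_regret(order, lo, hi, n_scenarios=200, seed=0):
    """Estimate the maximum regret of `order` over interval processing times
    [lo, hi] by sampling scenarios and comparing against the scenario-optimal
    schedule found by enumeration."""
    rng = np.random.default_rng(seed)
    n_jobs = lo.shape[0]
    worst = 0.0
    for _ in range(n_scenarios):
        p = rng.uniform(lo, hi)
        best = min(makespan(p, perm) for perm in permutations(range(n_jobs)))
        worst = max(worst, makespan(p, order) - best)
    return worst

lo = np.array([[2, 3], [4, 1], [3, 3.0]])         # interval lower bounds (3 jobs, 2 machines)
hi = lo + np.array([[1, 2], [2, 1], [1, 1.0]])    # interval upper bounds
print(sampled_max_regret(order=[0, 1, 2], lo=lo, hi=hi))
```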

  1. Maximum likelihood resampling of noisy, spatially correlated data

    NASA Astrophysics Data System (ADS)

    Goff, J.; Jenkins, C.

    2005-12-01

    In any geologic application, noisy data are sources of consternation for researchers, inhibiting interpretability and marring images with unsightly and unrealistic artifacts. Filtering is the typical solution to dealing with noisy data. However, filtering commonly suffers from ad hoc (i.e., uncalibrated, ungoverned) application, which runs the risk of erasing high-variability components of the field in addition to the noise components. We present here an alternative to filtering: a newly developed methodology for correcting noise in data by finding the "best" value given the data value, its uncertainty, and the data values and uncertainties at proximal locations. The motivating rationale is that data points that are close to each other in space cannot differ by "too much", where how much is "too much" is governed by the field correlation properties. Data with large uncertainties will frequently violate this condition, and in such cases need to be corrected, or "resampled." The best solution for resampling is determined by the maximum of the likelihood function defined by the intersection of two probability density functions (pdf): (1) the data pdf, with mean and variance determined by the data value and squared uncertainty, respectively, and (2) the geostatistical pdf, whose mean and variance are determined by the kriging algorithm applied to proximal data values. A Monte Carlo sampling of the data probability space eliminates non-uniqueness, and weights the solution toward data values with lower uncertainties. A test with a synthetic data set sampled from a known field demonstrates quantitatively and qualitatively the improvement provided by the maximum likelihood resampling algorithm. The method is also applied to three marine geology/geophysics data examples: (1) three generations of bathymetric data on the New Jersey shelf with disparate data uncertainties; (2) mean grain size data from the Adriatic Sea, which are a combination of both analytic (low uncertainty) and word-based (higher uncertainty) sources; and (3) sidescan backscatter data from the Martha's Vineyard Coastal Observatory which are, as is typical for such data, affected by speckle noise.
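
    For Gaussian data and geostatistical pdfs the maximum of their product is the inverse-variance weighted mean; the sketch below approximates it with the Monte Carlo weighting described above, with a simple neighbourhood prediction standing in for the kriging step and invented numbers throughout.

```python
import numpy as np

def resample_value(d, sd_d, mu_geo, sd_geo, n_draws=20000, seed=0):
    """Correct a noisy datum by intersecting its pdf with a geostatistical pdf.

    Draws candidates from the data pdf and weights them by the geostatistical
    pdf (a Monte Carlo form of the maximum-likelihood intersection); for two
    Gaussians the result approaches the inverse-variance weighted mean.
    """
    rng = np.random.default_rng(seed)
    draws = rng.normal(d, sd_d, n_draws)
    w = np.exp(-0.5 * ((draws - mu_geo) / sd_geo) ** 2)
    return np.sum(w * draws) / np.sum(w)

# Noisy bathymetry datum vs. a prediction from proximal soundings
# (the 'kriged' mean and sd here are simply illustrative numbers)
d, sd_d = -52.0, 3.0          # measured depth [m], large uncertainty
mu_geo, sd_geo = -48.0, 1.0   # neighbourhood prediction, small uncertainty
print(resample_value(d, sd_d, mu_geo, sd_geo))
# ~ -48.4 m: pulled strongly toward the well-constrained geostatistical estimate
```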

  2. Time-dependent seismic hazard analysis for the Greater Tehran and surrounding areas

    NASA Astrophysics Data System (ADS)

    Jalalalhosseini, Seyed Mostafa; Zafarani, Hamid; Zare, Mehdi

    2018-01-01

    This study presents a time-dependent approach to seismic hazard in Tehran and surrounding areas. Hazard is evaluated by combining background seismic activity with larger earthquakes that may emanate from fault segments. Using available historical and paleoseismological data or empirical relations, the recurrence times and maximum magnitudes of characteristic earthquakes on the major faults have been explored. The Brownian passage time (BPT) distribution has been used to calculate an equivalent fictitious seismicity rate for the major faults in the region. To include ground motion uncertainty, a logic tree and five ground motion prediction equations have been selected based on their applicability in the region. Finally, hazard maps have been presented.
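
    The time-dependent ingredient, the conditional probability that a fault ruptures in the next interval given the time elapsed since its last characteristic earthquake under a BPT renewal model, can be sketched as below; the BPT is an inverse Gaussian distribution, and the fault parameters used are illustrative, not values from the study.

```python
from scipy.stats import invgauss

def bpt_conditional_prob(mean_rt, alpha, t_elapsed, dt):
    """P(rupture in (t, t+dt] | no rupture by t) under a BPT renewal model.

    A BPT distribution with mean m and aperiodicity alpha is an inverse
    Gaussian with mean m and shape lambda = m / alpha**2; in scipy this maps
    to invgauss(mu=alpha**2, scale=m/alpha**2).
    """
    dist = invgauss(mu=alpha**2, scale=mean_rt / alpha**2)
    return (dist.cdf(t_elapsed + dt) - dist.cdf(t_elapsed)) / dist.sf(t_elapsed)

# Illustrative fault: 1000-yr mean recurrence, aperiodicity 0.5,
# 700 yr elapsed since the last characteristic earthquake, 50-yr forecast window
print(bpt_conditional_prob(1000.0, 0.5, 700.0, 50.0))
```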

  3. Quantifying uncertainty in coral Sr/Ca-based SST estimates from Orbicella faveolata: A basis for multi-colony SST reconstructions

    NASA Astrophysics Data System (ADS)

    Richey, J. N.; Flannery, J. A.; Toth, L. T.; Kuffner, I. B.; Poore, R. Z.

    2017-12-01

    The Sr/Ca in massive corals can be used as a proxy for sea surface temperature (SST) in shallow tropical to sub-tropical regions; however, the relationship between Sr/Ca and SST varies throughout the ocean, between different species of coral, and often between different colonies of the same species. We aimed to quantify the uncertainty associated with the Sr/Ca-SST proxy due to sample handling (e.g., micro-drilling or analytical error), vital effects (e.g., among-colony differences in coral growth), and local-scale variability in microhabitat. We examine the intra- and inter-colony reproducibility of Sr/Ca records extracted from five modern Orbicella faveolata colonies growing in the Dry Tortugas, Florida, USA. The average intra-colony absolute difference (AD) in Sr/Ca of the five colonies during an overlapping interval (1997-2008) was 0.055 ± 0.044 mmol mol-1 (0.96 ºC) and the average inter-colony Sr/Ca AD was 0.039 ± 0.01 mmol mol-1 (0.51 ºC). All available Sr/Ca-SST data pairs from 1997-2008 were combined and regressed against the HadISST1 gridded SST data set (24 ºN and 82 ºW) to produce a calibration equation that could be applied to O. faveolata specimens from throughout the Gulf of Mexico/Caribbean/Atlantic region after accounting for the potential uncertainties in Sr/Ca-derived SSTs. We quantified a combined error term for O. faveolata using the root-sum-square (RMS) of the analytical, intra-, and inter-colony uncertainties and suggest that an overall uncertainty of 0.046 mmol mol-1 (0.81 ºC, 1σ), should be used to interpret Sr/Ca records from O. faveolata specimens of unknown age or origin to reconstruct SST. We also explored how uncertainty is affected by the number of corals used in a reconstruction by iteratively calculating the RMS error for composite coral time-series using two, three, four, and five overlapping coral colonies. Our results indicate that maximum RMS error at the 95% confidence interval on mean annual SST estimates is 1.4 ºC when a composite record is made from only two overlapping coral Sr/Ca records. The uncertainty decreases as additional coral Sr/Ca data are added, with a maximum RMS error of 0.5 ºC on mean annual SST for a five-colony composite. To reduce uncertainty to under 1 ºC, it is best to use Sr/Ca from three or more coral colonies from the same geographic location and time period.
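
    A minimal sketch of the error bookkeeping described above: root-sum-square combination of independent error terms and the shrinking of the composite error with the number of overlapping colonies (assuming independent colony errors). The sigma values and the Sr/Ca-SST slope below are illustrative stand-ins, not the calibrated values from the study.

```python
import numpy as np

# Illustrative 1-sigma error terms (mmol/mol); not the calibrated values from the study
sigma_analytical = 0.015
sigma_intra = 0.030   # within-colony
sigma_inter = 0.025   # between-colony

# Root-sum-square combination of independent error sources
sigma_total = np.sqrt(sigma_analytical**2 + sigma_intra**2 + sigma_inter**2)

# Convert to SST using an assumed Sr/Ca-SST slope magnitude (~0.06 mmol/mol per deg C
# is a typical order of magnitude for corals; purely illustrative here)
slope = 0.06
print("combined Sr/Ca error:", round(sigma_total, 3), "mmol/mol ->",
      round(sigma_total / slope, 2), "deg C")

# Error of a composite record built from n independent, overlapping colonies
for n in range(1, 6):
    print(n, "colonies:", round(sigma_total / np.sqrt(n) / slope, 2), "deg C")
```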

  4. Estimation of Uncertainties in the Global Distance Test (GDT_TS) for CASP Models.

    PubMed

    Li, Wenlin; Schaeffer, R Dustin; Otwinowski, Zbyszek; Grishin, Nick V

    2016-01-01

    The Critical Assessment of techniques for protein Structure Prediction (or CASP) is a community-wide blind test experiment to reveal the best accomplishments of structure modeling. Assessors have been using the Global Distance Test (GDT_TS) measure to quantify prediction performance since CASP3 in 1998. However, identifying significant score differences between close models is difficult because of the lack of uncertainty estimations for this measure. Here, we utilized the atomic fluctuations caused by structure flexibility to estimate the uncertainty of GDT_TS scores. Structures determined by nuclear magnetic resonance are deposited as ensembles of alternative conformers that reflect the structural flexibility, whereas standard X-ray refinement produces the static structure averaged over time and space for the dynamic ensembles. To recapitulate the structural heterogeneous ensemble in the crystal lattice, we performed time-averaged refinement for X-ray datasets to generate structural ensembles for our GDT_TS uncertainty analysis. Using those generated ensembles, our study demonstrates that the time-averaged refinements produced structure ensembles with better agreement with the experimental datasets than the averaged X-ray structures with B-factors. The uncertainty of the GDT_TS scores, quantified by their standard deviations (SDs), increases for scores lower than 50 and 70, with maximum SDs of 0.3 and 1.23 for X-ray and NMR structures, respectively. We also applied our procedure to the high accuracy version of GDT-based score and produced similar results with slightly higher SDs. To facilitate score comparisons by the community, we developed a user-friendly web server that produces structure ensembles for NMR and X-ray structures and is accessible at http://prodata.swmed.edu/SEnCS. Our work helps to identify the significance of GDT_TS score differences, as well as to provide structure ensembles for estimating SDs of any scores.
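
    For context, the score whose uncertainty is being estimated can be written down directly; the sketch below uses the standard 1, 2, 4, 8 Å cutoffs but, for simplicity, a single fixed superposition rather than the per-cutoff superposition search of the full GDT procedure.

```python
import numpy as np

def gdt_ts(dists):
    """Simplified GDT_TS from per-residue model-target C-alpha distances (angstroms),
    assuming one superposition has already been applied to the model."""
    dists = np.asarray(dists, float)
    return 100.0 * np.mean([(dists <= c).mean() for c in (1.0, 2.0, 4.0, 8.0)])

# Example: distances for a hypothetical 10-residue model
print(gdt_ts([0.4, 0.9, 1.5, 2.2, 3.0, 4.5, 6.0, 7.5, 9.0, 12.0]))  # 45.0
```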

  5. Estimation of submarine mass failure probability from a sequence of deposits with age dates

    USGS Publications Warehouse

    Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.

    2013-01-01

    The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
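
    A minimal sketch of how the open intervals enter the likelihood in the simplest (exponential/Poisson) case: closed inter-event times contribute the density and the open intervals the survival function, giving a closed-form rate estimate. The interval values are invented, and the age-dating uncertainty treated in the study is ignored here.

```python
import numpy as np

# Hypothetical inter-event times between dated mass-transport deposits (kyr)
closed_intervals = np.array([12.0, 7.5, 20.0, 9.0])
# Open intervals: record start -> first event, last event -> present (right-censored)
open_intervals = np.array([5.0, 15.0])

# Exponential (Poisson-process) likelihood: closed intervals contribute the pdf,
# open intervals the survival function; the MLE of the rate has a closed form.
lam_hat = len(closed_intervals) / (closed_intervals.sum() + open_intervals.sum())
print("rate:", round(lam_hat, 4), "per kyr   mean return time:", round(1.0 / lam_hat, 1), "kyr")

# Ignoring the open intervals biases the mean return time low:
print("naive mean return time:", closed_intervals.mean(), "kyr")
```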

  6. Methods and Tools for Evaluating Uncertainty in Ecological Models: A Survey

    EPA Science Inventory

    Poster presented at the Ecological Society of America Meeting. Ecologists are familiar with a variety of uncertainty techniques, particularly in the intersection of maximum likelihood parameter estimation and Monte Carlo analysis techniques, as well as a recent increase in Baye...

  7. Scaling of Precipitation Extremes Modelled by Generalized Pareto Distribution

    NASA Astrophysics Data System (ADS)

    Rajulapati, C. R.; Mujumdar, P. P.

    2017-12-01

    Precipitation extremes are often modelled with data from annual maximum series or peaks over threshold series. The Generalized Pareto Distribution (GPD) is commonly used to fit the peaks over threshold series. Scaling of precipitation extremes from larger time scales to smaller time scales when the extremes are modelled with the GPD is burdened with difficulties arising from varying thresholds for different durations. In this study, the scale invariance theory is used to develop a disaggregation model for precipitation extremes exceeding specified thresholds. A scaling relationship is developed for a range of thresholds obtained from a set of quantiles of non-zero precipitation of different durations. The GPD parameters and exceedance rate parameters are modelled by the Bayesian approach and the uncertainty in scaling exponent is quantified. A quantile based modification in the scaling relationship is proposed for obtaining the varying thresholds and exceedance rate parameters for shorter durations. The disaggregation model is applied to precipitation datasets of Berlin City, Germany and Bangalore City, India. From both the applications, it is observed that the uncertainty in the scaling exponent has a considerable effect on uncertainty in scaled parameters and return levels of shorter durations.
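
    A minimal sketch of the peaks-over-threshold step with a quantile-based threshold, using scipy's GPD fit on synthetic data; the Bayesian estimation and the scaling relationship across durations developed in the study are not reproduced.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)

# Synthetic hourly precipitation: mostly dry, wet-hour depths roughly exponential (mm)
precip = np.where(rng.random(20000) < 0.1, rng.exponential(2.0, 20000), 0.0)

# Threshold = a high quantile of the non-zero values, as in the quantile-based approach
wet = precip[precip > 0]
u = np.quantile(wet, 0.95)
exceedances = wet[wet > u] - u

# Fit the GPD to the exceedances (location fixed at 0)
shape, loc, scale = genpareto.fit(exceedances, floc=0)
rate = exceedances.size / precip.size          # exceedance rate per time step
print(f"threshold={u:.2f} mm  shape={shape:.2f}  scale={scale:.2f}  rate={rate:.4f}")

# Return level for a T-step return period: u + (scale/shape)*((rate*T)**shape - 1)
T = 10000
rl = u + scale / shape * ((rate * T) ** shape - 1) if abs(shape) > 1e-6 else u + scale * np.log(rate * T)
print("approx. return level:", round(rl, 1), "mm")
```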

  8. TH-AB-BRA-04: Dosimetric Evaluation of MR-Guided HDR Brachytherapy Planning for Cervical Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamio, Y; Barkati, M; Beliveau-Nadeau, D

    2016-06-15

    Purpose: To perform a retrospective study on 16 patients that had both CT and T2-weighted MR scans done at first fraction using the Utrecht CT/MR applicator (Elekta Brachytherapy) in order to evaluate uncertainties associated with an MR-only planning workflow. Methods: MR-workflow uncertainties were classified in three categories: reconstruction, registration and contouring. A systematic comparison of the CT and MR contouring, manual reconstruction and optimization process was performed to evaluate the impact of these uncertainties on the recommended GEC ESTRO DVH parameters: D90% and V100% for the HR-CTV as well as D2cc for the bladder, rectum, sigmoid colon and small bowel. This comparison was done using the following four steps: 1. Catheter reconstruction done on MR images with original CT-plan contours and dwell times. 2. OAR contours adjusted on MR images with original CT-plan reconstruction and dwell times. 3. Both reconstruction and contours done on MR images with original CT-plan dwell times. 4. Entire MR-based workflow optimized dwell times reimported to the original CT-plan. Results: The MR-based reconstruction process showed average D2cc deviations of 4.5 ± 3.0%, 1.5 ± 2.0%, 2.5 ± 2.0% and 2.0 ± 1.0% for the bladder, rectum, sigmoid colon and small bowel respectively, with maxima of 10%, 6%, 6% and 4%. The HR-CTV's D90% and V100% average deviations were found to be 4.0 ± 3.0% and 2.0 ± 2.0% respectively, with maxima of 10% and 6%. Adjusting contours on MR images was found to have a similar impact. Finally, the optimized MR-based workflow dwell times were found to still give acceptable plans when re-imported to the original CT-plan, which validated the entire workflow. Conclusion: This work illustrates a systematic validation method for centers wanting to move towards an MR-only workflow. This work will be expanded to model-based reconstruction, PD-weighted images and other types of applicators.

  9. A review of the generalized uncertainty principle.

    PubMed

    Tawfik, Abdel Nasser; Diab, Abdel Magied

    2015-12-01

    Based on string theory, black hole physics, doubly special relativity and some 'thought' experiments, minimal distance and/or maximum momentum are proposed. As alternatives to the generalized uncertainty principle (GUP), the modified dispersion relation, the space noncommutativity, the Lorentz invariance violation, and the quantum-gravity-induced birefringence effects are summarized. The origin of minimal measurable quantities and the different GUP approaches are reviewed and the corresponding observations are analysed. Bounds on the GUP parameter are discussed and implemented in the understanding of recent PLANCK observations of cosmic inflation. The higher-order GUP approaches predict minimal length uncertainty with and without maximum momenta. Possible arguments against the GUP are discussed; for instance, the concern about its compatibility with the equivalence principles, the universality of gravitational redshift and the free fall and law of reciprocal action are addressed.

  10. On the logistic equation subject to uncertainties in the environmental carrying capacity and initial population density

    NASA Astrophysics Data System (ADS)

    Dorini, F. A.; Cecconello, M. S.; Dorini, L. B.

    2016-04-01

    It is recognized that handling uncertainty is essential to obtain more reliable results in modeling and computer simulation. This paper aims to discuss the logistic equation subject to uncertainties in two parameters: the environmental carrying capacity, K, and the initial population density, N0. We first provide the closed-form results for the first probability density function of time-population density, N(t), and its inflection point, t*. We then use the Maximum Entropy Principle to determine both K and N0 density functions, treating such parameters as independent random variables and considering fluctuations of their values for a situation that commonly occurs in practice. Finally, closed-form results for the density functions and statistical moments of N(t), for a fixed t > 0, and of t* are provided, considering the uniform distribution case. We carried out numerical experiments to validate the theoretical results and compared them against that obtained using Monte Carlo simulation.
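
    A quick Monte Carlo check of the kind of statistics discussed above: sample K and N0 from uniform (maximum-entropy) distributions on bounded ranges, evaluate the logistic solution and the inflection time t* = ln((K - N0)/N0)/r, and summarize their spread; the growth rate and parameter ranges are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
r = 0.5                                  # fixed intrinsic growth rate
K = rng.uniform(80.0, 120.0, 100000)     # uncertain carrying capacity
N0 = rng.uniform(5.0, 15.0, 100000)      # uncertain initial density (uniform = max-entropy on a bounded range)

def logistic(t, r, K, N0):
    """Closed-form solution of dN/dt = r N (1 - N/K) with N(0) = N0."""
    return K * N0 * np.exp(r * t) / (K + N0 * (np.exp(r * t) - 1.0))

# Population density at a fixed time, and the inflection point t* = ln((K - N0)/N0)/r
N_t5 = logistic(5.0, r, K, N0)
t_star = np.log((K - N0) / N0) / r       # well defined here since N0 < K/2 always holds

print("N(5): mean %.1f  sd %.1f" % (N_t5.mean(), N_t5.std()))
print("t*:   mean %.2f  sd %.2f" % (t_star.mean(), t_star.std()))
```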

  11. Trajectory Dispersed Vehicle Process for Space Launch System

    NASA Technical Reports Server (NTRS)

    Statham, Tamara; Thompson, Seth

    2017-01-01

    The Space Launch System (SLS) vehicle is part of NASA's deep space exploration plans that include manned missions to Mars. Manufacturing uncertainties in design parameters are key considerations throughout SLS development, as they have significant effects on focus parameters such as lift-off thrust-to-weight, vehicle payload, maximum dynamic pressure, and compression loads. This presentation discusses how the SLS program captures these uncertainties by utilizing a 3 degree-of-freedom (DOF) process called Trajectory Dispersed (TD) analysis. This analysis biases nominal trajectories to identify extremes in the design parameters for various potential SLS configurations and missions. The process utilizes a Design of Experiments (DOE) and response surface methodologies (RSM) to statistically sample uncertainties, and develops the resulting vehicles using a Maximum Likelihood Estimate (MLE) process for targeting uncertainty biases. These vehicles represent various missions and configurations which are used as key inputs into a variety of analyses in the SLS design process, including 6 DOF dispersions, separation clearances, and engine-out failure studies.

  12. Capturing total chronological and spatial uncertainties in palaeo-ice sheet reconstructions: the DATED example

    NASA Astrophysics Data System (ADS)

    Hughes, Anna; Gyllencreutz, Richard; Mangerud, Jan; Svendsen, John Inge

    2017-04-01

    Glacial geologists generate empirical reconstructions of former ice-sheet dynamics by combining evidence from the preserved record of glacial landforms (e.g. end moraines, lineations) and sediments with chronological evidence (mainly numerical dates derived predominantly from radiocarbon, exposure and luminescence techniques). However, the geomorphological and sedimentological footprints and chronological data are both incomplete records in both space and time, and all have multiple types of uncertainty associated with them. To understand ice sheets' response to climate we need numerical models of ice-sheet dynamics based on physical principles. To test and/or constrain such models, empirical reconstructions of past ice sheets that capture and acknowledge all uncertainties are required. In 2005 we started a project (Database of the Eurasian Deglaciation, DATED) to produce an empirical reconstruction of the evolution of the last Eurasian ice sheets (including the British-Irish, Scandinavian and Svalbard-Barents-Kara Seas ice sheets) that is fully documented, specified in time, and includes uncertainty estimates. Over 5000 dates relevant to constraining ice build-up and retreat were assessed for reliability and used together with published ice-sheet margin positions based on glacial geomorphology to reconstruct time-slice maps of the ice sheets' extent. The DATED maps show synchronous ice margins with maximum-minimum uncertainty bounds for every 1000 years between 25 and 10 kyr ago. In the first version of results (DATED-1; Hughes et al. 2016) all uncertainties (both quantitative and qualitative, e.g. precision and accuracy of numerical dates, correlation of moraines, stratigraphic interpretations) were combined based on our best glaciological-geological assessment and expressed in terms of distance as a 'fuzzy' margin. Large uncertainties (>100 km) exist, predominantly across marine sectors and other locations where there are spatial gaps in the dating record (e.g. the timing of coalescence and separation of the Scandinavian and Svalbard-Barents-Kara ice sheets) but also in well-studied areas due to conflicting yet apparently equally robust data. In the four years since the DATED-1 census (1 January 2013), the volume of new information (from both dates and mapped glacial geomorphology) has grown significantly (~1000 new dates). Here, we present work towards the updated version of results, DATED-2, that attempts to further reduce and explicitly report all uncertainties inherent in ice sheet reconstructions. Hughes, A. L. C., Gyllencreutz, R., Lohne, Ø. S., Mangerud, J., Svendsen, J. I. 2016: The last Eurasian ice sheets - a chronological database and time-slice reconstruction, DATED-1. Boreas, 45, 1-45. 10.1111/bor.12142

  13. Effect of Fault Parameter Uncertainties on PSHA explored by Monte Carlo Simulations: A case study for southern Apennines, Italy

    NASA Astrophysics Data System (ADS)

    Akinci, A.; Pace, B.

    2017-12-01

    In this study, we discuss the seismic hazard variability of peak ground acceleration (PGA) at the 475-year return period in the Southern Apennines of Italy. The uncertainty and parametric sensitivity are presented to quantify the impact of several fault parameters on ground motion predictions for 10% exceedance in 50-year hazard. A time-independent PSHA model is constructed based on the long-term recurrence behavior of seismogenic faults, adopting the characteristic earthquake model for those sources capable of rupturing the entire fault segment with a single maximum magnitude. The fault-based source model uses the dimensions and slip rates of mapped faults to develop magnitude-frequency estimates for characteristic earthquakes. Variability of each selected fault parameter is represented by a truncated normal distribution defined by a standard deviation about a mean value. A Monte Carlo approach, based on random balanced sampling of a logic tree, is used in order to capture the uncertainty in the seismic hazard calculations. For generating both uncertainty and sensitivity maps, we perform 200 simulations for each of the fault parameters. The results are synthesized both in the frequency-magnitude distributions of the modeled faults and in different maps: the overall uncertainty maps provide a confidence interval for the PGA values, and the parameter uncertainty maps determine the sensitivity of the hazard assessment to the variability of every logic tree branch. The logic tree branches analyzed through the Monte Carlo approach are maximum magnitude, fault length, fault width, fault dip and slip rate. The overall variability is determined by varying these parameters simultaneously in the hazard calculations, while the sensitivity to each parameter is determined by varying that parameter while fixing the others. However, in this study we do not investigate the sensitivity of the mean hazard results to the consideration of different GMPEs. The distribution of possible seismic hazard results is illustrated by a 95% confidence factor map, which indicates the dispersion about the mean value, and a coefficient of variation map, which shows the percent variability. The results of our study clearly illustrate the influence of active fault parameters on probabilistic seismic hazard maps.

  14. Predicting critical transitions in dynamical systems from time series using nonstationary probability density modeling.

    PubMed

    Kwasniok, Frank

    2013-11-01

    A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.

  15. Core power and decay time limits for a disabled LOFT ECCS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Atkinson, S.A.

    1978-01-09

    An analysis was done to determine at what LOFT total core power (nuclear plus decay power) the ECCS could be inoperable. The criterion used for the analysis was that the maximum fuel clad temperature should not exceed 1650°F given a loss of coolant. Calculations for natural convection cooling of the fuel by air with an inlet temperature of 580°F determined that the limiting core power is 25 kW (discounted by 15 percent to 20 percent for potential uncertainties). Shutdown times are listed for when the LOFT ECCS can be safely bypassed or disabled.

  16. Uncertainty analysis of thermocouple measurements used in normal and abnormal thermal environment experiments at Sandia's Radiant Heat Facility and Lurance Canyon Burn Site.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakos, James Thomas

    2004-04-01

    It would not be possible to confidently qualify weapon systems performance or validate computer codes without knowing the uncertainty of the experimental data used. This report provides uncertainty estimates associated with thermocouple data for temperature measurements from two of Sandia's large-scale thermal facilities. These two facilities (the Radiant Heat Facility (RHF) and the Lurance Canyon Burn Site (LCBS)) routinely gather data from normal and abnormal thermal environment experiments. They are managed by Fire Science & Technology Department 09132. Uncertainty analyses were performed for several thermocouple (TC) data acquisition systems (DASs) used at the RHF and LCBS. These analyses apply to Type K, chromel-alumel thermocouples of various types: fiberglass sheathed TC wire, mineral-insulated, metal-sheathed (MIMS) TC assemblies, and are easily extended to other TC materials (e.g., copper-constantan). Several DASs were analyzed: (1) A Hewlett-Packard (HP) 3852A system, and (2) several National Instrument (NI) systems. The uncertainty analyses were performed on the entire system from the TC to the DAS output file. Uncertainty sources include TC mounting errors, ANSI standard calibration uncertainty for Type K TC wire, potential errors due to temperature gradients inside connectors, extension wire uncertainty, DAS hardware uncertainties including noise, common mode rejection ratio, digital voltmeter accuracy, mV to temperature conversion, analog to digital conversion, and other possible sources. Typical results for 'normal' environments (e.g., maximum of 300-400 K) showed the total uncertainty to be about ±1% of the reading in absolute temperature. In high temperature or high heat flux ('abnormal') thermal environments, total uncertainties range up to ±2-3% of the reading (maximum of 1300 K). The higher uncertainties in abnormal thermal environments are caused by increased errors due to the effects of imperfect TC attachment to the test item. 'Best practices' are provided in Section 9 to help the user to obtain the best measurements possible.
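
    As a hedged illustration of how such elemental uncertainty sources are typically combined, the sketch below uses a root-sum-square combination of independent standard uncertainties expressed in percent of reading; the component names and values are hypothetical placeholders, not the budget tabulated in the report.

```python
# Root-sum-square (RSS) combination of independent uncertainty components into
# a total measurement uncertainty, in percent of reading. Values are hypothetical.
import math

components_pct = {
    "TC wire calibration (ANSI Type K)": 0.4,
    "mounting / attachment error":       0.6,
    "extension wire":                    0.2,
    "DAS voltage measurement":           0.3,
    "mV-to-temperature conversion":      0.2,
    "analog-to-digital conversion":      0.1,
}

total_pct = math.sqrt(sum(u**2 for u in components_pct.values()))
print(f"combined standard uncertainty: ±{total_pct:.2f}% of reading")
```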

  17. Application of Nonlinear Seismic Soil-Structure Interaction Analysis for Identification of Seismic Margins at Nuclear Power Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Varma, Amit H.; Seo, Jungil; Coleman, Justin Leigh

    2015-11-01

    Seismic probabilistic risk assessment (SPRA) methods and approaches at nuclear power plants (NPP) were first developed in the 1970s and aspects of them have matured over time as they were applied and incrementally improved. SPRA provides information on risk and risk insights and allows for some accounting for uncertainty and variability. As a result, SPRA is now used as an important basis for risk-informed decision making for both new and operating NPPs in the US and in an increasing number of countries globally. SPRAs are intended to provide best estimates of the various combinations of structural and equipment failures that can lead to a seismic induced core damage event. However, in some instances the current SPRA approach contains large uncertainties, and potentially masks other important events (for instance, it was not the seismic motions that caused the Fukushima core melt events, but the tsunami ingress into the facility). INL has an advanced SPRA research and development (R&D) activity that will identify areas in the calculation process that contain significant uncertainties. One current area of focus is the use of nonlinear soil-structure interaction (NLSSI) analysis methods to accurately capture: 1) nonlinear soil behavior and 2) gapping and sliding between the NPP and soil. The goal of this study is to compare numerical NLSSI analysis results with recorded earthquake ground motions at Fukushima Daiichi (Great Tohoku Earthquake) and evaluate the sources of nonlinearity contributing to the observed reduction in peak acceleration. Comparisons are made using recorded data in the free-field (soil column with no structural influence) and recorded data on the NPP basemat (in-structure response). Results presented in this study should identify areas of focus for future R&D activities with the goal of minimizing uncertainty in SPRA calculations. This is not a validation activity since there are too many sources of uncertainty that a numerical analysis would need to consider (variability in soil material properties, structural material properties, etc.). Rather, the report determines whether the NLSSI calculations follow trends similar to those observed in the recorded data (i.e. reductions in maximum acceleration between the free-field and basemat). Numerical NLSSI results presented show that maximum accelerations between the free field and basemat were reduced in the EW and NS directions. The maximum acceleration in the UD direction increased slightly. The largest reduction in maximum accelerations between the modeled free-field and the NPP basemat was nearly 50%. The reduction of numerical maximum accelerations in the EW and NS directions follows the trend observed in the recorded data. The maximum reductions observed in these NLSSI studies were due to soil nonlinearities, not gapping and sliding (although additional R&D is needed to develop an appropriate approach to model gapping and sliding). This exploratory study highlights the need for additional R&D on developing: (i) improved modeling of soil nonlinearities (soil constitutive models that appropriately capture cyclic soil behavior), (ii) improved modeling of gapping and sliding at the soil-structure interface (to appropriately capture the dissipation of energy at this interface), and (iii) experimental laboratory test data to calibrate items (i) and (ii).

  18. A stochastic method to characterize model uncertainty for a Nutrient TMDL

    USDA-ARS?s Scientific Manuscript database

    The U.S. EPA’s Total Maximum Daily Load (TMDL) program has encountered resistances in its implementation partly because of its strong dependence on mathematical models to set limitations on the release of impairing substances. The uncertainty associated with predictions of such models is often not s...

  19. A technique to minimize uncertainties in load duration curves (LDCs) for water quality-impaired ungauged sites

    EPA Science Inventory

    For many water quality-impaired stream segments, streamflow and water quality monitoring sites are not available. Lack of available streamflow data at impaired ungauged sites leads to uncertainties in total maximum daily load (TMDL) estimation. We developed a technique to minimiz...

  20. Diversity Dynamics in Nymphalidae Butterflies: Effect of Phylogenetic Uncertainty on Diversification Rate Shift Estimates

    PubMed Central

    Peña, Carlos; Espeland, Marianne

    2015-01-01

    The species rich butterfly family Nymphalidae has been used to study evolutionary interactions between plants and insects. Theories of insect-hostplant dynamics predict accelerated diversification due to key innovations. In evolutionary biology, analysis of maximum credibility trees in the software MEDUSA (modelling evolutionary diversity using stepwise AIC) is a popular method for estimation of shifts in diversification rates. We investigated whether phylogenetic uncertainty can produce different results by extending the method across a random sample of trees from the posterior distribution of a Bayesian run. Using the MultiMEDUSA approach, we found that phylogenetic uncertainty greatly affects diversification rate estimates. Different trees produced diversification rates ranging from high values to almost zero for the same clade, and both significant rate increase and decrease in some clades. Only four out of 18 significant shifts found on the maximum clade credibility tree were consistent across most of the sampled trees. Among these, we found accelerated diversification for Ithomiini butterflies. We used the binary speciation and extinction model (BiSSE) and found that a hostplant shift to Solanaceae is correlated with increased net diversification rates in Ithomiini, congruent with the diffuse cospeciation hypothesis. Our results show that taking phylogenetic uncertainty into account when estimating net diversification rate shifts is of great importance, as very different results can be obtained when using the maximum clade credibility tree and other trees from the posterior distribution. PMID:25830910

  1. Diversity dynamics in Nymphalidae butterflies: effect of phylogenetic uncertainty on diversification rate shift estimates.

    PubMed

    Peña, Carlos; Espeland, Marianne

    2015-01-01

    The species rich butterfly family Nymphalidae has been used to study evolutionary interactions between plants and insects. Theories of insect-hostplant dynamics predict accelerated diversification due to key innovations. In evolutionary biology, analysis of maximum credibility trees in the software MEDUSA (modelling evolutionary diversity using stepwise AIC) is a popular method for estimation of shifts in diversification rates. We investigated whether phylogenetic uncertainty can produce different results by extending the method across a random sample of trees from the posterior distribution of a Bayesian run. Using the MultiMEDUSA approach, we found that phylogenetic uncertainty greatly affects diversification rate estimates. Different trees produced diversification rates ranging from high values to almost zero for the same clade, and both significant rate increase and decrease in some clades. Only four out of 18 significant shifts found on the maximum clade credibility tree were consistent across most of the sampled trees. Among these, we found accelerated diversification for Ithomiini butterflies. We used the binary speciation and extinction model (BiSSE) and found that a hostplant shift to Solanaceae is correlated with increased net diversification rates in Ithomiini, congruent with the diffuse cospeciation hypothesis. Our results show that taking phylogenetic uncertainty into account when estimating net diversification rate shifts is of great importance, as very different results can be obtained when using the maximum clade credibility tree and other trees from the posterior distribution.

  2. Incorporating uncertainty in RADTRAN 6.0 input files.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dennis, Matthew L.; Weiner, Ruth F.; Heames, Terence John

    Uncertainty may be introduced into RADTRAN analyses by distributing input parameters. The MELCOR Uncertainty Engine (Gauntt and Erickson, 2004) has been adapted for use in RADTRAN to determine the parameter shape and minimum and maximum of the distribution, to sample on the distribution, and to create an appropriate RADTRAN batch file. Coupling input parameters is not possible in this initial application. It is recommended that the analyst be very familiar with RADTRAN and able to edit or create a RADTRAN input file using a text editor before implementing the RADTRAN Uncertainty Analysis Module. Installation of the MELCOR Uncertainty Engine is required for incorporation of uncertainty into RADTRAN. Gauntt and Erickson (2004) provides installation instructions as well as a description and user guide for the uncertainty engine.

  3. Optimizing Monitoring Designs under Alternative Objectives

    DOE PAGES

    Gastelum, Jason A.; Porter, Ellen A.; ...

    2014-12-31

    This paper describes an approach to identify monitoring designs that optimize detection of CO2 leakage from a carbon capture and sequestration (CCS) reservoir and compares the results generated under two alternative objective functions. The first objective function minimizes the expected time to first detection of CO2 leakage; the second, more conservative, objective function minimizes the maximum time to leakage detection across the set of realizations. The approach applies a simulated annealing algorithm that searches the solution space by iteratively mutating the incumbent monitoring design. The approach takes into account uncertainty by evaluating the performance of potential monitoring designs across a set of simulated leakage realizations. The approach relies on a flexible two-tiered signature to infer that CO2 leakage has occurred. This research is part of the National Risk Assessment Partnership, a U.S. Department of Energy (DOE) project tasked with conducting risk and uncertainty analysis in the areas of reservoir performance, natural leakage pathways, wellbore integrity, groundwater protection, monitoring, and systems level modeling.

  4. SU-E-J-159: Analysis of Total Imaging Uncertainty in Respiratory-Gated Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suzuki, J; Okuda, T; Sakaino, S

    Purpose: In respiratory-gated radiotherapy, the gating phase during treatment delivery needs to coincide with the corresponding phase determined during the treatment plan. However, because radiotherapy is performed based on the image obtained for the treatment plan, the time delay, motion artifact, volume effect, and resolution in the images are uncertain. Thus, imaging uncertainty is the most basic factor that affects the localization accuracy. Therefore, these uncertainties should be analyzed. This study aims to analyze the total imaging uncertainty in respiratory-gated radiotherapy. Methods: Two factors of imaging uncertainty related to respiratory-gated radiotherapy were analyzed. First, a CT image was used to determine the target volume and 4D treatment planning for the Varian Realtime Position Management (RPM) system. Second, an X-ray image was acquired for image-guided radiotherapy (IGRT) for the BrainLAB ExacTrac system. These factors were measured using a respiratory gating phantom. The conditions applied during phantom operation were as follows: respiratory wave form, sine curve; respiratory cycle, 4 s; phantom target motion amplitude, 10, 20, and 29 mm (the maximum phantom longitudinal motion). The coverage of the target and the cylindrical marker implanted in the phantom was measured on the CT images and compared with the coverage calculated theoretically from the phantom motion. The theoretical position of the cylindrical marker implanted in the phantom was compared with that acquired from the X-ray image. The total imaging uncertainty was analyzed from these two factors. Results: In the CT images, the uncertainty between the actual coverage of the target and the cylindrical marker and their coverage in the CT images was 1.19 mm and 2.50 mm, respectively. In the X-ray image, the uncertainty was 0.39 mm. The total imaging uncertainty from the two factors was 1.62 mm. Conclusion: The total imaging uncertainty in respiratory-gated radiotherapy was clinically acceptable. However, an internal margin should be added to account for the total imaging uncertainty.

  5. Probabilistic description of probable maximum precipitation

    NASA Astrophysics Data System (ADS)

    Ben Alaya, Mohamed Ali; Zwiers, Francis W.; Zhang, Xuebin

    2017-04-01

    Probable Maximum Precipitation (PMP) is the key parameter used to estimate the Probable Maximum Flood (PMF). PMP and PMF are important for dam safety and civil engineering purposes. Even if the current knowledge of storm mechanisms remains insufficient to properly evaluate limiting values of extreme precipitation, PMP estimation methods are still based on deterministic considerations and give only single values. This study aims to provide a probabilistic description of the PMP based on the commonly used method, the so-called moisture maximization. To this end, a probabilistic bivariate extreme values model is proposed to address the limitations of traditional PMP estimates via moisture maximization, namely: (i) the inability to evaluate uncertainty and to provide a range of PMP values, (ii) the interpretation of a maximum of a data series as a physical upper limit, and (iii) the assumption that a PMP event has maximum moisture availability. Results from simulation outputs of the Canadian Regional Climate Model CanRCM4 over North America reveal the high uncertainties inherent in PMP estimates and the non-validity of the assumption that PMP events have maximum moisture availability. This latter assumption leads to overestimation of the PMP by an average of about 15% over North America, which may have serious implications for engineering design.
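
    The following sketch contrasts the single-valued moisture-maximization estimate with a simple resampling view that yields a range of PMP values; it is only meant to illustrate the move from a deterministic number to a distribution and is not the bivariate extreme-value model proposed in the paper. All numbers are synthetic.

```python
# Moisture maximization: PMP = storm precipitation * (maximum precipitable water / storm
# precipitable water). Deterministic estimate vs. a bootstrap-based range (illustrative).
import numpy as np

rng = np.random.default_rng(1)

storm_precip = 180.0   # mm, observed storm total (hypothetical)
storm_pw = 35.0        # mm, precipitable water during the storm (hypothetical)
pw_annual_maxima = rng.gumbel(loc=48.0, scale=4.0, size=50)  # synthetic PW maxima record

# Deterministic estimate: maximize with the single largest PW on record
pmp_deterministic = storm_precip * pw_annual_maxima.max() / storm_pw

# Probabilistic view: bootstrap the PW maxima record and recompute each time
boot_max = np.array([rng.choice(pw_annual_maxima, size=50, replace=True).max()
                     for _ in range(2000)])
pmp_samples = storm_precip * boot_max / storm_pw

print(f"deterministic PMP: {pmp_deterministic:.0f} mm")
print(f"PMP 5th-95th percentile: {np.percentile(pmp_samples, [5, 95]).round(0)} mm")
```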

  6. Maximum a posteriori resampling of noisy, spatially correlated data

    NASA Astrophysics Data System (ADS)

    Goff, John A.; Jenkins, Chris; Calder, Brian

    2006-08-01

    In any geologic application, noisy data are sources of consternation for researchers, inhibiting interpretability and marring images with unsightly and unrealistic artifacts. Filtering is the typical solution to dealing with noisy data. However, filtering commonly suffers from ad hoc (i.e., uncalibrated, ungoverned) application. We present here an alternative to filtering: a newly developed method for correcting noise in data by finding the "best" value given available information. The motivating rationale is that data points that are close to each other in space cannot differ by "too much," where "too much" is governed by the field covariance. Data with large uncertainties will frequently violate this condition and therefore ought to be corrected, or "resampled." Our solution for resampling is determined by the maximum of the a posteriori density function defined by the intersection of (1) the data error probability density function (pdf) and (2) the conditional pdf, determined by the geostatistical kriging algorithm applied to proximal data values. A maximum a posteriori solution can be computed sequentially going through all the data, but the solution depends on the order in which the data are examined. We approximate the global a posteriori solution by randomizing this order and taking the average. A test with a synthetic data set sampled from a known field demonstrates quantitatively and qualitatively the improvement provided by the maximum a posteriori resampling algorithm. The method is also applied to three marine geology/geophysics data examples, demonstrating the viability of the method for diverse applications: (1) three generations of bathymetric data on the New Jersey shelf with disparate data uncertainties; (2) mean grain size data from the Adriatic Sea, which is a combination of both analytic (low uncertainty) and word-based (higher uncertainty) sources; and (3) side-scan backscatter data from the Martha's Vineyard Coastal Observatory which are, as is typical for such data, affected by speckle noise. Compared to filtering, maximum a posteriori resampling provides an objective and optimal method for reducing noise, and better preservation of the statistical properties of the sampled field. The primary disadvantage is that maximum a posteriori resampling is a computationally expensive procedure.
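
    The core update can be sketched compactly: with Gaussian data-error and kriging-conditional densities, the maximum a posteriori value is the precision-weighted mean of the observation and the kriging estimate. The kriging mean and variance are hard-coded below; a real application would derive them from the field covariance and proximal data values.

```python
# Maximum a posteriori resampling of a single noisy value: the resampled value
# maximizes the product of the Gaussian data-error pdf and the Gaussian
# conditional pdf from kriging of neighboring data (precision-weighted mean).
def map_resample(obs, obs_var, krig_mean, krig_var):
    """Precision-weighted combination of an observation and a kriging estimate."""
    w_obs, w_krig = 1.0 / obs_var, 1.0 / krig_var
    post_mean = (w_obs * obs + w_krig * krig_mean) / (w_obs + w_krig)
    post_var = 1.0 / (w_obs + w_krig)
    return post_mean, post_var

# A noisy bathymetry sounding (high uncertainty) vs. the kriging prediction from
# nearby, better-constrained data (hypothetical values, metres)
value, variance = map_resample(obs=-42.0, obs_var=4.0, krig_mean=-45.5, krig_var=1.0)
print(f"resampled depth: {value:.2f} m (posterior std {variance**0.5:.2f} m)")
```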

  7. Cross-Sectional And Longitudinal Uncertainty Propagation In Drinking Water Risk Assessment

    NASA Astrophysics Data System (ADS)

    Tesfamichael, A. A.; Jagath, K. J.

    2004-12-01

    Pesticide residues in drinking water can vary significantly from day to day. However, drinking water quality monitoring performed under the Safe Drinking Water Act (SDWA) at most community water systems (CWSs) is typically limited to four data points per year over a few years. Due to limited sampling, likely maximum residues may be underestimated in risk assessment. In this work, a statistical methodology is proposed to study the cross-sectional and longitudinal uncertainties in observed samples and their propagated effect in risk estimates. The methodology will be demonstrated using data from 16 CWSs across the US that have three independent databases of atrazine residue to estimate the uncertainty of risk in infants and children. The results showed that in 85% of the CWSs, chronic risks predicted with the proposed approach may be two- to fourfold higher than those predicted with the current approach, while intermediate risks may be two- to threefold higher in 50% of the CWSs. In 12% of the CWSs, however, the proposed methodology showed a lower intermediate risk. A closed-form solution of propagated uncertainty will be developed to calculate the number of years (seasons) of water quality data and sampling frequency needed to reduce the uncertainty in risk estimates. In general, this methodology provided good insight into the importance of addressing uncertainty of observed water quality data and the need to predict likely maximum residues in risk assessment by considering propagation of uncertainties.

  8. Monte-Carlo based Uncertainty Analysis For CO2 Laser Microchanneling Model

    NASA Astrophysics Data System (ADS)

    Prakash, Shashi; Kumar, Nitish; Kumar, Subrata

    2016-09-01

    CO2 laser microchanneling has emerged as a potential technique for the fabrication of microfluidic devices on PMMA (poly(methyl methacrylate)). PMMA directly vaporizes when subjected to a high-intensity focused CO2 laser beam. This process results in a clean cut and an acceptable surface finish on microchannel walls. Overall, the CO2 laser microchanneling process is cost effective and easy to implement. While fabricating microchannels on PMMA using a CO2 laser, the maximum depth of the fabricated microchannel is the key feature. A few analytical models are available to predict the maximum depth of the microchannels and the cut channel profile on a PMMA substrate using a CO2 laser. These models depend upon the values of the thermophysical properties of PMMA and the laser beam parameters. A number of variants of transparent PMMA are available in the market with different values of thermophysical properties. Therefore, for applying such analytical models, the values of these thermophysical properties are required to be known exactly. Although the values of the laser beam parameters are readily available, extensive experiments are required to determine the values of the thermophysical properties of PMMA. The unavailability of exact values of these property parameters restricts proper control over the microchannel dimensions for a given power and scanning speed of the laser beam. In order to have dimensional control over the maximum depth of fabricated microchannels, it is necessary to have an idea of the uncertainty associated with the predicted microchannel depth. In this research work, the uncertainty associated with the maximum depth dimension has been determined using the Monte Carlo method (MCM). The propagation of uncertainty with different power and scanning speed has been predicted. The relative impact of each thermophysical property has been determined using sensitivity analysis.
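
    A hedged sketch of the Monte Carlo propagation step: thermophysical properties are sampled from assumed normal distributions and pushed through a simplified energy-balance depth model. Both the model form and the property values below are illustrative assumptions, not the specific analytical model or data used in the study.

```python
# Monte Carlo propagation of thermophysical-property uncertainty into a predicted
# microchannel depth (simplified energy-balance depth model; values illustrative).
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

# Laser parameters (assumed known exactly here)
P = 10.0        # W, laser power
v = 0.1         # m/s, scanning speed
w = 200e-6      # m, beam width at the surface

# Thermophysical properties of PMMA with assumed uncertainties
rho = rng.normal(1180.0, 30.0, N)    # kg/m^3, density
cp = rng.normal(1466.0, 70.0, N)     # J/(kg K), specific heat
dT = rng.normal(340.0, 20.0, N)      # K, temperature rise to vaporization
Hv = rng.normal(1.0e6, 1.0e5, N)     # J/kg, heat of vaporization

# Depth = absorbed energy per unit length / volumetric energy to vaporize
depth = P / (v * w * rho * (cp * dT + Hv))   # m

print(f"mean depth: {depth.mean() * 1e6:.0f} um")
print(f"95% interval: {(np.percentile(depth, [2.5, 97.5]) * 1e6).round(0)} um")
```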

  9. Uncertainties in Estimates of the Risks of Late Effects from Space Radiation

    NASA Technical Reports Server (NTRS)

    Cucinotta, F. A.; Schimmerling, W.; Wilson, J. W.; Peterson, L. E.; Saganti, P.; Dicelli, J. F.

    2002-01-01

    The health risks faced by astronauts from space radiation include cancer, cataracts, hereditary effects, and non-cancer morbidity and mortality risks related to diseases of old age. Methods used to project risks in low-Earth orbit are of questionable merit for exploration missions because of the limited radiobiology data and knowledge of galactic cosmic ray (GCR) heavy ions, which causes estimates of the risk of late effects to be highly uncertain. Risk projections involve a product of many biological and physical factors, each of which has a differential range of uncertainty due to lack of data and knowledge. Within the linear-additivity model, we use Monte Carlo sampling from subjective uncertainty distributions in each factor to obtain a maximum likelihood estimate of the overall uncertainty in risk projections. The resulting methodology is applied to several human space exploration mission scenarios including ISS, lunar station, deep space outpost, and Mars missions of 360, 660, and 1000 days duration. The major results are the quantification of the uncertainties in current risk estimates, the identification of factors that dominate risk projection uncertainties, and the development of a method to quantify candidate approaches to reduce uncertainties or mitigate risks. The large uncertainties in GCR risk projections lead to probability distributions of risk that mask any potential risk reduction using the "optimization" of shielding materials or configurations. In contrast, the design of shielding optimization approaches for solar particle events and trapped protons can be made at this time, and promising technologies can be shown to have merit using our approach. The methods used also make it possible to express risk management objectives in terms of quantitative objectives, i.e., the number of days in space without exceeding a given risk level within well-defined confidence limits.
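
    The propagation idea can be sketched as a product of multiplicative uncertainty factors, each sampled from a subjective (here lognormal) distribution; the factor list, spreads and baseline risk below are illustrative stand-ins rather than the actual NASA projection model.

```python
# Monte Carlo propagation of a risk projection formed as a product of several
# uncertain factors (subjective lognormal distributions; values hypothetical).
import numpy as np

rng = np.random.default_rng(3)
N = 50_000

point_risk = 0.032  # baseline point estimate of lifetime risk (hypothetical)

# Multiplicative uncertainty factors (median 1, assumed geometric std deviations)
factor_gsd = {"dose/physics": 1.2, "quality factor": 1.8,
              "dose-rate effectiveness": 1.5, "epidemiology transfer": 1.4}

samples = np.full(N, point_risk)
for gsd in factor_gsd.values():
    samples *= rng.lognormal(mean=0.0, sigma=np.log(gsd), size=N)

lo, med, hi = np.percentile(samples, [2.5, 50, 97.5])
print(f"risk median {med:.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")
```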

  10. Uncertainties in Projecting Risks of Late Effects from Space Radiation

    NASA Astrophysics Data System (ADS)

    Cucinotta, F.; Schimmerling, W.; Peterson, L.; Wilson, J.; Saganti, P.; Dicello, J.

    The health risks faced by astronauts from space radiation include cancer, cataracts, hereditary effects, CNS risks, and non-cancer morbidity and mortality risks related to diseases of old age. Methods used to project risks in low-Earth orbit are of questionable merit for exploration missions because of the limited radiobiology data and knowledge of galactic cosmic ray (GCR) heavy ions, which causes estimates of the risk of late effects to be highly uncertain. Risk projections involve a product of many biological and physical factors, each of which has a differential range of uncertainty due to lack of data and knowledge. Within the linear-additivity model, we use Monte Carlo sampling from subjective uncertainty distributions in each factor to obtain a maximum likelihood estimate of the overall uncertainty in risk projections. The resulting methodology is applied to several human space exploration mission scenarios including ISS, lunar station, deep space outpost, and Mars missions of 360, 660, and 1000 days duration. The major results are the quantification of the uncertainties in current risk estimates, the identification of the primary factors that dominate risk projection uncertainties, and the development of a method to quantify candidate approaches to reduce uncertainties or mitigate risks. The large uncertainties in GCR risk projections lead to probability distributions of risk that mask any potential risk reduction using the "optimization" of shielding materials or configurations. In contrast, the design of shielding optimization approaches for solar particle events and trapped protons can be made at this time, and promising technologies can be shown to have merit using our approach. The methods used also make it possible to express risk management objectives in terms of quantitative objectives, i.e., the number of days in space without exceeding a given risk level within well-defined confidence limits.

  11. Interval Estimation of Seismic Hazard Parameters

    NASA Astrophysics Data System (ADS)

    Orlecka-Sikora, Beata; Lasocki, Stanislaw

    2017-03-01

    The paper considers Poisson temporal occurrence of earthquakes and presents a way to integrate uncertainties of the estimates of mean activity rate and magnitude cumulative distribution function in the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when nonparametric estimation is in use. When the Gutenberg-Richter model of magnitude distribution is used, the interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When the nonparametric kernel estimation of magnitude distribution is used, we propose the iterated bias-corrected and accelerated method for interval estimation based on the smoothed bootstrap and second-order bootstrap samples. The changes resulting from the integrated approach to the interval estimation of the seismic hazard functions, with respect to the approach that neglects the uncertainty of the mean activity rate estimates, have been studied using Monte Carlo simulations and two real dataset examples. The results indicate that the uncertainty of the mean activity rate significantly affects the interval estimates of the hazard functions only when the product of the activity rate and the time period for which the hazard is estimated is no more than 5.0. When this product becomes greater than 5.0, the impact of the uncertainty of the cumulative distribution function of magnitude dominates the impact of the uncertainty of the mean activity rate in the aggregated uncertainty of the hazard functions. Consequently, the interval estimates with and without inclusion of the uncertainty of the mean activity rate converge. The presented algorithm is generic and can also be applied to capture the propagation of uncertainty of estimates that are parameters of a multiparameter function onto this function.
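
    For concreteness, the two hazard functions discussed can be written as P = 1 - exp(-lambda*T*(1 - F(m))) and tau = 1/(lambda*(1 - F(m))). The sketch below propagates simple normal parameter uncertainty through the exceedance probability for a truncated Gutenberg-Richter magnitude distribution; this naive sampling is only illustrative and is not the asymptotic-ML or bootstrap interval machinery of the paper. All parameter values are hypothetical.

```python
# Exceedance probability under Poisson occurrence with a truncated
# Gutenberg-Richter magnitude distribution, with crude parameter uncertainty.
import numpy as np

rng = np.random.default_rng(11)

def gr_sf(m, b, m_min=4.0, m_max=7.5):
    """Survival function 1 - F(m) of a truncated Gutenberg-Richter distribution."""
    beta = b * np.log(10.0)
    num = np.exp(-beta * (m - m_min)) - np.exp(-beta * (m_max - m_min))
    den = 1.0 - np.exp(-beta * (m_max - m_min))
    return num / den

lam_hat, lam_se = 2.0, 0.3   # events/yr above m_min (hypothetical estimate, std error)
b_hat, b_se = 1.0, 0.05      # G-R b-value estimate and std error
m, T = 6.0, 50.0             # target magnitude and exposure time (years)

lam = rng.normal(lam_hat, lam_se, 10_000).clip(min=1e-6)
b = rng.normal(b_hat, b_se, 10_000)

p_exceed = 1.0 - np.exp(-lam * T * gr_sf(m, b))
print(f"exceedance prob. in {T:.0f} yr: median {np.median(p_exceed):.3f}, "
      f"95% interval {np.percentile(p_exceed, [2.5, 97.5]).round(3)}")
```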

  12. Assessment of a three‐dimensional (3D) water scanning system for beam commissioning and measurements on a helical tomotherapy unit

    PubMed Central

    Ashenafi, Michael S.; McDonald, Daniel G.; Vanek, Kenneth N.

    2015-01-01

    Beam scanning data collected on the tomotherapy linear accelerator using the TomoScanner water scanning system is primarily used to verify the golden beam profiles included in all Helical TomoTherapy treatment planning systems (TOMO TPSs). The user is not allowed to modify the beam profiles/parameters for beam modeling within the TOMO TPSs. The authors report the first feasibility study using the Blue Phantom Helix (BPH) as an alternative to the TomoScanner (TS) system. This work establishes a benchmark dataset using BPH for target commissioning and quality assurance (QA), and quantifies systematic uncertainties between TS and BPH. Reproducibility of scanning with BPH was tested by three experienced physicists taking five sets of measurements over a six‐month period. BPH provides several enhancements over TS, including a 3D scanning arm, which is able to acquire necessary beam‐data with one tank setup, a universal chamber mount, and the OmniPro software, which allows online data collection and analysis. Discrepancies between BPH and TS were estimated by acquiring datasets with each tank. In addition, data measured with BPH and TS was compared to the golden TOMO TPS beam data. The total systematic uncertainty, defined as the combination of scanning system and beam modeling uncertainties, was determined through numerical analysis and tabulated. OmniPro was used for all analysis to eliminate uncertainty due to different data processing algorithms. The setup reproducibility of BPH remained within 0.5 mm/0.5%. Comparing BPH, TS, and Golden TPS for PDDs beyond maximum depth, the total systematic uncertainties were within 1.4 mm/2.1%. Between BPH and TPS golden data, maximum differences in the field width and penumbra of in‐plane profiles were within 0.8 and 1.1 mm, respectively. Furthermore, in cross‐plane profiles, the field width differences increased at depth greater than 10 cm up to 2.5 mm, and maximum penumbra uncertainties were 5.6 mm and 4.6 mm from TS scanning system and TPS modeling, respectively. Use of BPH reduced measurement time by 1–2 hrs per session. The BPH has been assessed as an efficient, reproducible, and accurate scanning system capable of providing a reliable benchmark beam data. With this data, a physicist can utilize the BPH in a clinical setting with an understanding of the scan discrepancy that may be encountered while validating the TPS or during routine machine QA. Without the flexibility of modifying the TPS and without a golden beam dataset from the vendor or a TPS model generated from data collected with the BPH, this represents the best solution for current clinical use of the BPH. PACS number: 87.56.Fc

  13. Algorithm for pose estimation based on objective function with uncertainty-weighted measuring error of feature point cling to the curved surface.

    PubMed

    Huo, Ju; Zhang, Guiyang; Yang, Ming

    2018-04-20

    This paper is concerned with the anisotropic and non-identical gray distribution of feature points clinging to a curved surface, for which a high-precision, uncertainty-resistant algorithm for pose estimation is proposed. The weighted contribution of uncertainty to the objective function of the feature-point measuring error is analyzed. Then a novel error objective function based on the spatial collinear error is constructed by transforming the uncertainty into a covariance-weighted matrix, which is suitable for practical applications. Further, the optimized generalized orthogonal iterative (GOI) algorithm is utilized for iterative solutions such that it avoids poor convergence and significantly resists the uncertainty. Hence, the optimized GOI algorithm extends the field-of-view applications and improves the accuracy and robustness of the measuring results through the redundant information. Finally, simulation and practical experiments show that the maximum error of the re-projected image coordinates of the target is less than 0.110 pixels. Within the space 3000 mm×3000 mm×4000 mm, the maximum estimation errors of static and dynamic measurement for rocket nozzle motion are better than 0.065° and 0.128°, respectively. The results verify the high accuracy and uncertainty-attenuation performance of the proposed approach, which should therefore have potential for engineering applications.

  14. Linking trading ratio with TMDL (total maximum daily load) allocation matrix and uncertainty analysis.

    PubMed

    Zhang, H X

    2008-01-01

    An innovative approach for total maximum daily load (TMDL) allocation and implementation is watershed-based pollutant trading. Given the inherent scientific uncertainty in the tradeoffs between point and nonpoint sources, setting of trading ratios can be a contentious issue and has already been listed as an obstacle by several pollutant trading programs. One of the fundamental reasons that a trading ratio is often set higher (e.g. greater than 2) is to allow for uncertainty in the level of control needed to attain water quality standards, and to provide a buffer in case traded reductions are less effective than expected. However, most of the available studies did not provide an approach to explicitly address the determination of the trading ratio, and uncertainty analysis has rarely been linked to trading ratio determination. This paper presents a practical methodology for estimating an "equivalent trading ratio (ETR)" and links uncertainty analysis with trading ratio determination in the TMDL allocation process. Determination of the ETR can provide a preliminary evaluation of "tradeoffs" between various combinations of point and nonpoint source control strategies on ambient water quality improvement. A greater portion of NPS load reduction in the overall TMDL load reduction generally correlates with greater uncertainty and thus requires a greater trading ratio. The rigorous quantification of the trading ratio will enhance the scientific basis, and thus public perception, for more informed decisions in the overall watershed-based pollutant trading program. (c) IWA Publishing 2008.

  15. Investigating the Impact of Uncertainty about Item Parameters on Ability Estimation

    ERIC Educational Resources Information Center

    Zhang, Jinming; Xie, Minge; Song, Xiaolan; Lu, Ting

    2011-01-01

    Asymptotic expansions of the maximum likelihood estimator (MLE) and weighted likelihood estimator (WLE) of an examinee's ability are derived while item parameter estimators are treated as covariates measured with error. The asymptotic formulae present the amount of bias of the ability estimators due to the uncertainty of item parameter estimators.…

  16. Wind extremes in the North Sea basin under climate change: an ensemble study of 12 CMIP5 GCMs

    NASA Astrophysics Data System (ADS)

    de Winter, R.; Ruessink, G.; Sterl, A.

    2012-12-01

    Coastal safety may be influenced by climate change, as changes in extreme surge levels and wave extremes may increase the vulnerability of dunes and other coastal defenses. In the North Sea, an area already prone to severe flooding, these high surge levels and waves are generated by severe wind speeds during storm events. As a result of the geometry of the North Sea, not only the maximum wind speed is relevant, but also the wind direction. Analyzing changes in a changing climate implies that several uncertainties need to be taken into account. First, there is the uncertainty in climate experiments, which represent possible developments of greenhouse gas emissions. Second, there is uncertainty between the climate models that are used to analyze the effect of different climate experiments. The third uncertainty is the natural variability of the climate. When this system variability is large, small trends will be difficult to detect. The natural variability results in statistical uncertainty, especially for events with high return values. We addressed the first two types of uncertainty for extreme wind conditions in the North Sea using 12 CMIP5 GCMs. To evaluate the differences between the climate experiments, two climate experiments (rcp4.5 and rcp8.5) for 2050-2100 are compared with historical runs covering 1950-2000. Rcp4.5 is considered to be a middle climate experiment and rcp8.5 represents high-end climate scenarios. The projections of the 12 GCMs for a given scenario illustrate model uncertainty. We focus on the North Sea basin, because changes in wind conditions could have a large impact on the safety of the densely populated North Sea coast, an area that already has a high exposure to flooding. Our results show that, consistent with ERA-Interim results, the annual maximum wind speed in the historical run demonstrates large interannual variability. For the North Sea, the annual maximum wind speed is not projected to change in either rcp4.5 or rcp8.5. In fact, the differences among the 12 GCMs are larger than the differences between the three experiments. Furthermore, our results show that the variation in the direction of the annual maximum wind speed is large, and this precludes a firm statement on climate-change induced changes in these directions. Nonetheless, most models indicate a decrease in annual maximum wind speed from south-eastern directions and an increase from south-western and western directions. This might be caused by a poleward shift of the storm track. The amount of wind from north-west and north-north-west, wind directions that are responsible for the development of extreme storm surges in the southern part of the North Sea, is not projected to change. However, North Sea coasts that have the longest fetch for western directions, e.g. the German Bight, may encounter high storm surge levels and extreme waves more often if the annual maximum wind does indeed come more often from western directions.

  17. Ensemble hydrological forecast efficiency evolution over various issue dates and lead-time: case study for the Cheboksary reservoir (Volga River)

    NASA Astrophysics Data System (ADS)

    Gelfan, Alexander; Moreido, Vsevolod

    2017-04-01

    Ensemble hydrological forecasting allows for describing the uncertainty caused by variability of meteorological conditions in the river basin over the forecast lead-time. At the same time, in snowmelt-dependent river basins another significant source of uncertainty relates to variability of the initial conditions of the basin (snow water equivalent, soil moisture content, etc.) prior to forecast issue. Accurate long-term hydrological forecasts are most crucial for large water management systems, such as the Cheboksary reservoir (catchment area 374 000 sq. km) located on the Middle Volga river in Russia. Accurate forecasts of water inflow volume, maximum discharge and other flow characteristics are of great value for this basin, especially before the beginning of the spring freshet season that lasts here from April to June. The semi-distributed hydrological model ECOMAG was used to develop long-term ensemble forecasts of daily water inflow into the Cheboksary reservoir. To describe the variability of the meteorological conditions and construct an ensemble of possible weather scenarios for the lead-time of the forecast, two approaches were applied. The first utilizes 50 weather scenarios observed in previous years (similar to the ensemble streamflow prediction (ESP) procedure); the second uses 1000 synthetic scenarios simulated by a stochastic weather generator. We investigated the evolution of forecast uncertainty reduction, expressed as forecast efficiency, over consecutive forecast issue dates and lead-times. We analyzed the Nash-Sutcliffe efficiency of inflow hindcasts for the period 1982 to 2016, issued from 1 March onwards at 15-day intervals for lead-times of 1 to 6 months. This resulted in a forecast efficiency matrix of issue dates versus lead-time that allows for identification of the basin's predictability. The matrix was constructed separately for the observed and synthetic weather ensembles.
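
    A minimal sketch of assembling such an efficiency matrix from ensemble hindcasts, using the Nash-Sutcliffe efficiency of the ensemble mean; the synthetic inflows, noise model and dimensions are placeholders for the actual ECOMAG hindcasts.

```python
# Forecast-efficiency matrix (issue date x lead time) from synthetic ensemble hindcasts,
# scored with the Nash-Sutcliffe efficiency of the ensemble-mean inflow.
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations."""
    obs = np.asarray(obs, dtype=float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(5)
n_years, n_issue_dates, n_leads, n_members = 35, 8, 6, 50

# obs[year, lead]: observed inflow volumes (synthetic)
obs = rng.gamma(shape=3.0, scale=100.0, size=(n_years, n_leads))

efficiency = np.zeros((n_issue_dates, n_leads))
for d in range(n_issue_dates):
    for l in range(n_leads):
        # ensemble hindcast for each year: obs + noise that grows with lead time
        # and shrinks for later issue dates (better-known initial conditions)
        spread = 40.0 * (1 + l) / (1 + 0.3 * d)
        ens = obs[:, l][:, None] + rng.normal(0.0, spread, size=(n_years, n_members))
        efficiency[d, l] = nse(ens.mean(axis=1), obs[:, l])

print(np.round(efficiency, 2))
```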

  18. pplacer: linear time maximum-likelihood and Bayesian phylogenetic placement of sequences onto a fixed reference tree

    PubMed Central

    2010-01-01

    Background Likelihood-based phylogenetic inference is generally considered to be the most reliable classification method for unknown sequences. However, traditional likelihood-based phylogenetic methods cannot be applied to large volumes of short reads from next-generation sequencing due to computational complexity issues and lack of phylogenetic signal. "Phylogenetic placement," where a reference tree is fixed and the unknown query sequences are placed onto the tree via a reference alignment, is a way to bring the inferential power offered by likelihood-based approaches to large data sets. Results This paper introduces pplacer, a software package for phylogenetic placement and subsequent visualization. The algorithm can place twenty thousand short reads on a reference tree of one thousand taxa per hour per processor, has essentially linear time and memory complexity in the number of reference taxa, and is easy to run in parallel. Pplacer features calculation of the posterior probability of a placement on an edge, which is a statistically rigorous way of quantifying uncertainty on an edge-by-edge basis. It also can inform the user of the positional uncertainty for query sequences by calculating expected distance between placement locations, which is crucial in the estimation of uncertainty with a well-sampled reference tree. The software provides visualizations using branch thickness and color to represent number of placements and their uncertainty. A simulation study using reads generated from 631 COG alignments shows a high level of accuracy for phylogenetic placement over a wide range of alignment diversity, and the power of edge uncertainty estimates to measure placement confidence. Conclusions Pplacer enables efficient phylogenetic placement and subsequent visualization, making likelihood-based phylogenetics methodology practical for large collections of reads; it is freely available as source code, binaries, and a web service. PMID:21034504

  19. Impact of uncertainty on cost-effectiveness analysis of medical strategies: the case of high-dose chemotherapy for breast cancer patients.

    PubMed

    Marino, Patricia; Siani, Carole; Roché, Henri; Moatti, Jean-Paul

    2005-01-01

    The objective of this study was to determine, taking into account uncertainty in cost and outcome parameters, the cost-effectiveness of high-dose chemotherapy (HDC) compared with conventional chemotherapy for advanced breast cancer patients. An analysis was conducted for 300 patients included in a randomized clinical trial designed to evaluate the benefits, in terms of disease-free survival and overall survival, of adding a single course of HDC to a four-cycle conventional-dose chemotherapy for breast cancer patients with axillary lymph node invasion. Costs were estimated from a detailed observation of the physical quantities consumed, and the Kaplan-Meier method was used to evaluate mean survival times. Incremental cost-effectiveness ratios were evaluated successively considering disease-free survival and overall survival outcomes. Handling of uncertainty consisted of constructing confidence intervals for these ratios, using the truncated Fieller method. The cost per disease-free life year gained was evaluated at 13,074 Euros, a value that seems to be acceptable to society. However, handling uncertainty shows that the upper bound of the confidence interval is around 38,000 Euros, which is nearly three times higher. Moreover, as no difference was demonstrated in overall survival between treatments, the cost-effectiveness analysis, that is, a cost minimization, indicated that the intensive treatment is a dominated strategy involving an extra cost of 7,400 Euros for no added benefit. Adding a single course of HDC led to a clinical benefit in terms of disease-free survival for an additional cost that seems to be acceptable, considering the point estimate of the ratio. However, handling uncertainty indicates a maximum ratio against which the conclusions have to be discussed.
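
    For readers unfamiliar with Fieller-type intervals for a cost-effectiveness ratio, the sketch below implements the standard (untruncated) Fieller construction for R = delta_cost / delta_effect under joint normality; the paper uses a truncated variant, and the input numbers here are hypothetical rather than the trial's data.

```python
# Fieller confidence interval for an incremental cost-effectiveness ratio
# R = delta_cost / delta_effect, assuming the two increments are jointly normal.
import numpy as np
from scipy.stats import norm

def fieller_ci(dc, de, var_c, var_e, cov_ce, alpha=0.05):
    z2 = norm.ppf(1 - alpha / 2) ** 2
    a = de**2 - z2 * var_e
    b = -2.0 * (dc * de - z2 * cov_ce)
    c = dc**2 - z2 * var_c
    disc = b**2 - 4 * a * c
    if a <= 0 or disc < 0:
        return None  # a bounded interval does not exist at this confidence level
    r1, r2 = (-b - np.sqrt(disc)) / (2 * a), (-b + np.sqrt(disc)) / (2 * a)
    return min(r1, r2), max(r1, r2)

# Hypothetical: extra cost 9,000 EUR (SE 1,500), extra 0.7 disease-free years (SE 0.2)
ci = fieller_ci(dc=9000.0, de=0.7, var_c=1500.0**2, var_e=0.2**2, cov_ce=50.0)
print(f"ICER point estimate: {9000.0 / 0.7:.0f} EUR per disease-free year")
print(f"95% Fieller CI: {ci[0]:.0f} to {ci[1]:.0f} EUR")
```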

  20. SU-E-T-493: Analysis of the Impact of Range and Setup Uncertainties On the Dose to Brain Stem and Whole Brain in the Passively Scattered Proton Therapy Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sahoo, N; Zhu, X; Zhang, X

    Purpose: To quantify the impact of range and setup uncertainties on various dosimetric indices that are used to assess normal tissue toxicities of patients receiving passive scattering proton beam therapy (PSPBT). Methods: Robust analysis of sample treatment plans of six brain cancer patients treated with PSPBT at our facility, for whom the maximum brain stem dose exceeded 5800 CcGE, was performed. The DVH of each plan was calculated in an Eclipse treatment planning system (TPS) version 11, applying ±3.5% range uncertainty and ±3 mm shifts of the isocenter in the x, y and z directions to account for setup uncertainties. Worst-case dose indices for the brain stem and whole brain were compared to their values in the nominal plan to determine the average change in their values. For the brain stem, the maximum dose to 1 cc of volume, dose to 10%, 50%, 90% of volume (D10, D50, D90) and volume receiving 6000, 5400, 5000, 4500, 4000 CcGE (V60, V54, V50, V45, V40) were evaluated. For the whole brain, the maximum dose to 1 cc of volume, and volume receiving 5400, 5000, 4500, 4000, 3000 CcGE (V54, V50, V45, V40 and V30) were assessed. Results: The average changes in the values of these indices in the worst-case scenarios from the nominal plan were as follows. Brain stem; maximum dose to 1 cc of volume: 1.1%, D10: 1.4%, D50: 8.0%, D90: 73.3%, V60: 116.9%, V54: 27.7%, V50: 21.2%, V45: 16.2%, V40: 13.6%. Whole brain; maximum dose to 1 cc of volume: 0.3%, V54: 11.4%, V50: 13.0%, V45: 13.6%, V40: 14.1%, V30: 13.5%. Conclusion: Large to modest changes in the dosimetric indices for the brain stem and whole brain compared to the nominal plan due to range and setup uncertainties were observed. Such potential changes should be taken into account when using any dosimetric parameters for outcome evaluation of patients receiving proton therapy.

  1. Modeling Freedom From Progression for Standard-Risk Medulloblastoma: A Mathematical Tumor Control Model With Multiple Modes of Failure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brodin, N. Patrik, E-mail: nils.patrik.brodin@rh.dk; Niels Bohr Institute, University of Copenhagen, Copenhagen; Vogelius, Ivan R.

    2013-10-01

    Purpose: As pediatric medulloblastoma (MB) is a relatively rare disease, it is important to extract the maximum information from trials and cohort studies. Here, a framework was developed for modeling tumor control with multiple modes of failure and time-to-progression for standard-risk MB, using published pattern of failure data. Methods and Materials: Outcome data for standard-risk MB published after 1990 with pattern of relapse information were used to fit a tumor control dose-response model addressing failures in both the high-dose boost volume and the elective craniospinal volume. Estimates of 5-year event-free survival from 2 large randomized MB trials were used to model the time-to-progression distribution. Uncertainty in freedom from progression (FFP) was estimated by Monte Carlo sampling over the statistical uncertainty in the input data. Results: The estimated 5-year FFP (95% confidence intervals [CI]) for craniospinal doses of 15, 18, 24, and 36 Gy while maintaining 54 Gy to the posterior fossa was 77% (95% CI, 70%-81%), 78% (95% CI, 73%-81%), 79% (95% CI, 76%-82%), and 80% (95% CI, 77%-84%), respectively. The uncertainty in FFP was considerably larger for craniospinal doses below 18 Gy, reflecting the lack of data in the lower dose range. Conclusions: Estimates of tumor control and time-to-progression for standard-risk MB provide a data-driven setting for hypothesis generation or power calculations for prospective trials, taking the uncertainties into account. The presented methods can also be applied to incorporate further risk-stratification, for example based on molecular biomarkers, when the necessary data become available.

  2. Advanced probabilistic methods for quantifying the effects of various uncertainties in structural response

    NASA Technical Reports Server (NTRS)

    Nagpal, Vinod K.

    1988-01-01

    The effects of actual variations, also called uncertainties, in geometry and material properties on the structural response of a space shuttle main engine turbopump blade are evaluated. A normal distribution was assumed to represent the uncertainties statistically. Uncertainties were assumed to be totally random, partially correlated, and fully correlated. The magnitudes of these uncertainties were represented in terms of mean and variance. Blade responses, recorded in terms of displacements, natural frequencies, and maximum stress, were evaluated and plotted in the form of probabilistic distributions under combined uncertainties. These distributions provide an estimate of the range of magnitudes of the response and the probability of occurrence of a given response. Most importantly, these distributions provide the information needed to estimate quantitatively the risk in a structural design.
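
    A sketch of the propagation exercise under the three correlation assumptions mentioned above, using a toy response function in place of the turbopump-blade structural model; the means, scatters and response form are illustrative assumptions.

```python
# Propagating normally distributed geometry/material uncertainties through a toy
# response function under different correlation assumptions.
import numpy as np

rng = np.random.default_rng(9)
N = 20_000

mean = np.array([2.0e11, 4.0e-3])   # [Young's modulus (Pa), thickness (m)]
std = np.array([1.0e10, 2.0e-4])    # assumed 5% scatter on each

def response(E, t):
    # toy "natural frequency"-like surrogate: stiffer and thicker -> higher response
    return np.sqrt(E) * t

for label, rho in [("uncorrelated", 0.0), ("partially correlated", 0.5),
                   ("fully correlated", 0.999)]:   # ~1, kept positive definite
    cov = np.array([[std[0]**2, rho * std[0] * std[1]],
                    [rho * std[0] * std[1], std[1]**2]])
    E, t = rng.multivariate_normal(mean, cov, size=N).T
    r = response(E, t)
    print(f"{label:>20}: mean {r.mean():.1f}, CoV {r.std() / r.mean():.3f}")
```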

  3. A Bayesian maximum entropy-based methodology for optimal spatiotemporal design of groundwater monitoring networks.

    PubMed

    Hosseini, Marjan; Kerachian, Reza

    2017-09-01

    This paper presents a new methodology for analyzing the spatiotemporal variability of water table levels and redesigning a groundwater level monitoring network (GLMN) using the Bayesian Maximum Entropy (BME) technique and a multi-criteria decision-making approach based on ordered weighted averaging (OWA). The spatial sampling is determined using a hexagonal gridding pattern and a new method, which is proposed to assign a removal priority number to each pre-existing station. To design the temporal sampling, a new approach is also applied to consider uncertainty caused by lack of information. In this approach, different time lag values are tested by considering another source of information, namely the simulation results of a numerical groundwater flow model. Furthermore, to incorporate the existing uncertainties in the available monitoring data, the flexibility of the BME interpolation technique is exploited by applying soft data and improving the accuracy of the calculations. To examine the methodology, it is applied to the Dehgolan plain in northwestern Iran. Based on the results, a configuration of 33 monitoring stations on a regular hexagonal grid of side length 3600 m is proposed, in which the time lag between samples is equal to 5 weeks. Since the variance estimation errors of the BME method are almost identical for the redesigned and existing networks, the redesigned monitoring network is more cost-effective and efficient than the existing monitoring network with 52 stations and monthly sampling frequency.

  4. Comparing photon and proton-based hypofractioned SBRT for prostate cancer accounting for robustness and realistic treatment deliverability.

    PubMed

    Goddard, Lee C; Brodin, N Patrik; Bodner, William R; Garg, Madhur K; Tomé, Wolfgang A

    2018-05-01

    To investigate whether photon or proton-based stereotactic body radiation therapy (SBRT) is the preferred modality for high-dose hypofractionated prostate cancer treatment, achievable dose distributions were compared when uncertainties in target positioning and range uncertainties were appropriately accounted for. 10 patients with prostate cancer previously treated at our institution (Montefiore Medical Center) with photon SBRT using volumetric modulated arc therapy (VMAT) were identified. MRI images fused to the treatment planning CT allowed for accurate target and organ at risk (OAR) delineation. The clinical target volume was defined as the prostate gland plus the proximal seminal vesicles. Critical OARs include the bladder wall, bowel, femoral heads, neurovascular bundle, penile bulb, rectal wall, urethra and urogenital diaphragm. Photon plan robustness was evaluated by simulating 2 mm isotropic setup variations. Comparative proton SBRT plans employing intensity modulated proton therapy (IMPT) were generated using robust optimization. Plan robustness was evaluated by simulating 2 mm setup variations and 3% or 1% Hounsfield unit (HU) calibration uncertainties. Comparable maximum OAR doses are achievable between photon and proton SBRT; however, robust optimization results in higher maximum doses for proton SBRT. Rectal maximum doses are significantly higher for robust proton SBRT with 1% HU uncertainty compared to photon SBRT (p = 0.03), whereas maximum doses were comparable for the bladder wall (p = 0.43), urethra (p = 0.82) and urogenital diaphragm (p = 0.50). Mean doses to the bladder and rectal wall are lower for proton SBRT, but higher for the neurovascular bundle, urethra and urogenital diaphragm due to increased lateral scatter. Similar target conformality is achieved, albeit with slightly larger treated volume ratios for proton SBRT, >1.4 compared to 1.2 for photon SBRT. Similar treatment plans can be generated with IMPT compared to VMAT in terms of target coverage, target conformality, and OAR sparing when range and HU uncertainties are neglected. However, when accounting for these uncertainties during robust optimization, VMAT outperforms IMPT in terms of achievable target conformity and OAR sparing. Advances in knowledge: Comparison between achievable dose distributions using modern, robust optimization of IMPT for high dose-per-fraction SBRT regimens for the prostate has not been previously investigated.

  5. Estimation of Uncertainties in Stage-Discharge Curve for an Experimental Himalayan Watershed

    NASA Astrophysics Data System (ADS)

    Kumar, V.; Sen, S.

    2016-12-01

    Various water resource projects developed on rivers originating from the Himalayan region, the "Water Tower of Asia", play an important role in downstream development. Flow measurements at the desired river site are very critical for river engineers and hydrologists for water resources planning and management, flood forecasting, reservoir operation and flood inundation studies. However, accurate discharge assessment of these mountainous rivers is costly, tedious and frequently dangerous to operators during flood events. Currently, in India, discharge estimation is linked to the stage-discharge relationship known as the rating curve. This relationship can be affected by a high degree of uncertainty. Estimating the uncertainty of the rating curve remains a relevant challenge because it is not easy to parameterize. The main sources of rating curve uncertainty are errors due to incorrect discharge measurement, variation in hydraulic conditions and depth measurement. In this study our objective is to obtain the best rating curve parameters that fit the limited record of observations and to estimate the uncertainties at different depths obtained from the rating curve. The parameters of the standard power-law rating curve are estimated for three different streams of the Aglar watershed, located in the Lesser Himalayas, by a maximum-likelihood estimator. Quantification of the uncertainties in the developed rating curves is obtained from the estimated variances and covariances of the rating curve parameters. Results showed that the uncertainties varied with catchment behavior, with errors varying between 0.006 and 1.831 m3/s. Discharge uncertainty in the Aglar watershed streams depends significantly on the extent of extrapolation outside the range of observed water levels. Extrapolation analysis confirmed that extrapolation by more than 15% for maximum discharges and 5% for minimum discharges is not recommended for these mountainous gauging sites.
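
    A sketch of the estimation step, assuming Gaussian errors so that a least-squares fit of the power law coincides with the maximum-likelihood fit, followed by first-order (delta-method) propagation of the parameter covariance to the discharge uncertainty at a given stage; the synthetic gaugings stand in for the Aglar observations.

```python
# Fit the standard power-law rating curve Q = a*(h - h0)**b to a limited set of
# gaugings and propagate the parameter covariance to a discharge uncertainty.
import numpy as np
from scipy.optimize import curve_fit

def rating(h, a, h0, b):
    return a * np.clip(h - h0, 1e-6, None) ** b

rng = np.random.default_rng(2)
h_obs = np.sort(rng.uniform(0.3, 1.2, 25))                        # stage (m)
q_obs = rating(h_obs, 6.0, 0.15, 1.8) * rng.normal(1, 0.08, 25)   # discharge (m3/s)

popt, pcov = curve_fit(rating, h_obs, q_obs, p0=[5.0, 0.1, 1.5])

# Delta method: var(Q) ~ J @ pcov @ J with J the gradient of Q w.r.t. (a, h0, b)
def discharge_with_uncertainty(h):
    a, h0, b = popt
    q = rating(h, *popt)
    J = np.array([q / a,                          # dQ/da
                  -a * b * (h - h0) ** (b - 1),   # dQ/dh0
                  q * np.log(h - h0)])            # dQ/db
    return q, float(np.sqrt(J @ pcov @ J))

for h in (0.8, 1.4):   # 1.4 m is an extrapolation beyond the gauged range
    q, sd = discharge_with_uncertainty(h)
    print(f"h = {h:.1f} m: Q = {q:.2f} +/- {sd:.2f} m3/s")
```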

  6. Optimal regeneration planning for old-growth forest: addressing scientific uncertainty in endangered species recovery through adaptive management

    USGS Publications Warehouse

    Moore, C.T.; Conroy, M.J.

    2006-01-01

    Stochastic and structural uncertainties about forest dynamics present challenges in the management of ephemeral habitat conditions for endangered forest species. Maintaining critical foraging and breeding habitat for the endangered red-cockaded woodpecker (Picoides borealis) requires an uninterrupted supply of old-growth forest. We constructed and optimized a dynamic forest growth model for the Piedmont National Wildlife Refuge (Georgia, USA) with the objective of perpetuating a maximum stream of old-growth forest habitat. Our model accommodates stochastic disturbances and hardwood succession rates, and uncertainty about model structure. We produced a regeneration policy that was indexed by current forest state and by current weight of evidence among alternative model forms. We used adaptive stochastic dynamic programming, which anticipates that model probabilities, as well as forest states, may change through time, with consequent evolution of the optimal decision for any given forest state. In light of considerable uncertainty about forest dynamics, we analyzed a set of competing models incorporating extreme, but plausible, parameter values. Under any of these models, forest silviculture practices currently recommended for the creation of woodpecker habitat are suboptimal. We endorse fully adaptive approaches to the management of endangered species habitats in which predictive modeling, monitoring, and assessment are tightly linked.

  7. SU-F-T-301: Planar Dose Pass Rate Inflation Due to the MapCHECK Measurement Uncertainty Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, D; Spaans, J; Kumaraswamy, L

    Purpose: To quantify the effect of the Measurement Uncertainty function on planar dosimetry pass rates, as analyzed with Sun Nuclear Corporation analytic software (“MapCHECK” or “SNC Patient”). This optional function is toggled on by default upon software installation, and automatically increases the user-defined dose percent difference (%Diff) tolerance for each planar dose comparison. Methods: Dose planes from 109 IMRT fields and 40 VMAT arcs were measured with the MapCHECK 2 diode array, and compared to calculated planes from a commercial treatment planning system. Pass rates were calculated within the SNC analytic software using varying calculation parameters, including Measurement Uncertainty on and off. By varying the %Diff criterion for each dose comparison performed with Measurement Uncertainty turned off, an effective %Diff criterion was defined for each field/arc corresponding to the pass rate achieved with MapCHECK Uncertainty turned on. Results: For 3%/3mm analysis, the Measurement Uncertainty function increases the user-defined %Diff by 0.8–1.1% average, depending on plan type and calculation technique, for an average pass rate increase of 1.0–3.5% (maximum +8.7%). For 2%, 2 mm analysis, the Measurement Uncertainty function increases the user-defined %Diff by 0.7–1.2% average, for an average pass rate increase of 3.5–8.1% (maximum +14.2%). The largest increases in pass rate are generally seen with poorly-matched planar dose comparisons; the MapCHECK Uncertainty effect is markedly smaller as pass rates approach 100%. Conclusion: The Measurement Uncertainty function may substantially inflate planar dose comparison pass rates for typical IMRT and VMAT planes. The types of uncertainties incorporated into the function (and their associated quantitative estimates) as described in the software user’s manual may not accurately estimate realistic measurement uncertainty for the user’s measurement conditions. Pass rates listed in published reports or otherwise compared to the results of other users or vendors should clearly indicate whether the Measurement Uncertainty function is used.

  8. Sensitivity and uncertainty in crop water footprint accounting: a case study for the Yellow River basin

    NASA Astrophysics Data System (ADS)

    Zhuo, L.; Mekonnen, M. M.; Hoekstra, A. Y.

    2014-06-01

    Water Footprint Assessment is a fast-growing field of research, but as yet little attention has been paid to the uncertainties involved. This study investigates the sensitivity of and uncertainty in crop water footprint (in m3 t-1) estimates related to uncertainties in important input variables. The study focuses on the green (from rainfall) and blue (from irrigation) water footprint of producing maize, soybean, rice, and wheat at the scale of the Yellow River basin in the period 1996-2005. A grid-based daily water balance model at a 5 by 5 arcmin resolution was applied to compute green and blue water footprints of the four crops in the Yellow River basin in the period considered. The one-at-a-time method was carried out to analyse the sensitivity of the crop water footprint to fractional changes of seven individual input variables and parameters: precipitation (PR), reference evapotranspiration (ET0), crop coefficient (Kc), crop calendar (planting date with constant growing degree days), soil water content at field capacity (Smax), yield response factor (Ky) and maximum yield (Ym). Uncertainties in crop water footprint estimates related to uncertainties in four key input variables: PR, ET0, Kc, and crop calendar were quantified through Monte Carlo simulations. The results show that the sensitivities and uncertainties differ across crop types. In general, the water footprint of crops is most sensitive to ET0 and Kc, followed by the crop calendar. Blue water footprints were more sensitive to input variability than green water footprints. The smaller the annual blue water footprint is, the higher its sensitivity to changes in PR, ET0, and Kc. The uncertainties in the total water footprint of a crop due to combined uncertainties in climatic inputs (PR and ET0) were about ±20% (at 95% confidence interval). The effect of uncertainties in ET0 was dominant compared to that of PR. The uncertainties in the total water footprint of a crop as a result of combined key input uncertainties were on average ±30% (at 95% confidence level).
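    A minimal sketch of the one-at-a-time (OAT) bookkeeping used above: each input is perturbed by a fixed fraction while the others are held at their baseline values, and the induced change in the output is recorded. The toy water-footprint function and all numbers below are hypothetical stand-ins for the study's grid-based daily water balance model.

```python
import numpy as np

def water_footprint(pr, et0, kc, ky, ym):
    """Toy stand-in for the crop water footprint (m^3/t): crop
    evapotranspiration over yield.  The study's grid-based daily water
    balance model is far more detailed; this only illustrates the
    one-at-a-time (OAT) sensitivity bookkeeping."""
    etc = kc * et0                      # crop evapotranspiration (mm)
    deficit = max(0.0, 1.0 - pr / etc)  # relative ET shortfall
    yield_actual = ym * max(0.0, 1.0 - ky * deficit)
    return 10.0 * etc / yield_actual    # factor 10 converts mm*ha to m^3

base = dict(pr=450.0, et0=700.0, kc=1.05, ky=1.25, ym=9.0)
base_wf = water_footprint(**base)

# Perturb each input by +/-10% while holding the others fixed.
for name in base:
    for frac in (-0.10, +0.10):
        perturbed = dict(base, **{name: base[name] * (1.0 + frac)})
        wf = water_footprint(**perturbed)
        print(f"{name} {frac:+.0%}: WF changes by {100*(wf/base_wf - 1):+.1f}%")
```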

  9. Modelling uncertainties and possible future trends of precipitation and temperature for 10 sub-basins in Columbia River Basin (CRB)

    NASA Astrophysics Data System (ADS)

    Ahmadalipour, A.; Rana, A.; Qin, Y.; Moradkhani, H.

    2014-12-01

    Trends and changes in future climatic parameters, such as precipitation and temperature, have been a central part of climate change studies. In the present work, we have analyzed the seasonal and yearly trends and prediction uncertainties in all 10 sub-basins of the Columbia River Basin (CRB) for the future time period 2010-2099. The work is carried out using two different sets of statistically downscaled Global Climate Model (GCM) projection datasets, i.e. Bias Correction and Statistical Downscaling (BCSD) generated at Portland State University and the Multivariate Adaptive Constructed Analogs (MACA) generated at the University of Idaho. The analysis is done with 10 downscaled GCM products from each source from the CMIP5 daily dataset, totaling 40 different downscaled products for robust analysis. Summer, winter and yearly trend analyses are performed for all 10 sub-basins using linear regression (significance tested by Student's t test) and the Mann-Kendall test (0.05 significance level), for precipitation (P), maximum temperature (Tmax) and minimum temperature (Tmin). Thereafter, all the parameters are modelled for uncertainty, across all models, in all 10 sub-basins and across the CRB for the future scenario periods. Results indicate varying degrees of trends for all the sub-basins, mostly pointing towards a significant increase in all three climatic parameters, for all seasonal and yearly considerations. The uncertainty analysis reveals very high variation in all the parameters across models and sub-basins under consideration. A basin-wide uncertainty analysis is performed to corroborate results from the smaller, sub-basin scale. Similar trends and uncertainties are reported on the larger scale as well. Interestingly, both trends and uncertainties are higher during the winter period than during summer, contributing to a large part of the yearly change.
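    For reference, a minimal Mann-Kendall trend test of the kind applied above (normal approximation, no tie correction) might look as follows; the annual series is hypothetical.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x, alpha=0.05):
    """Minimal Mann-Kendall trend test (normal approximation, no tie
    correction).  Returns the S statistic, the standardized Z score and
    whether the trend is significant at the given level."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    significant = abs(z) > norm.ppf(1 - alpha / 2)
    return s, z, significant

# Hypothetical annual Tmax anomalies for one sub-basin (deg C).
tmax_anomaly = np.array([0.1, 0.0, 0.3, 0.2, 0.5, 0.4, 0.7, 0.6, 0.9, 1.1])
print(mann_kendall(tmax_anomaly))
```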

  10. Quantifying uncertainties in the structural response of SSME blades

    NASA Technical Reports Server (NTRS)

    Nagpal, Vinod K.

    1987-01-01

    To quantify the uncertainties associated with the geometry and material properties of a Space Shuttle Main Engine (SSME) turbopump blade, a computer code known as STAEBL was used. A finite element model of the blade used 80 triangular shell elements with 55 nodes and five degrees of freedom per node. The whole study was simulated on the computer and no real experiments were conducted. The structural response has been evaluated in terms of three variables which are natural frequencies, root (maximum) stress, and blade tip displacements. The results of the study indicate that only the geometric uncertainties have significant effects on the response. Uncertainties in material properties have insignificant effects.

  11. Impact of geometric uncertainties on dose calculations for intensity modulated radiation therapy of prostate cancer

    NASA Astrophysics Data System (ADS)

    Jiang, Runqing

    Intensity-modulated radiation therapy (IMRT) uses non-uniform beam intensities within a radiation field to provide patient-specific dose shaping, resulting in a dose distribution that conforms tightly to the planning target volume (PTV). Unavoidable geometric uncertainty arising from patient repositioning and internal organ motion can lead to a lower conformality index (CI) during treatment delivery, a decrease in tumor control probability (TCP) and an increase in normal tissue complication probability (NTCP). The CI of the IMRT plan depends heavily on steep dose gradients between the PTV and organ at risk (OAR). Geometric uncertainties reduce the planned dose gradients and result in a less steep or "blurred" dose gradient. The blurred dose gradients can be maximized by constraining the dose objective function in the static IMRT plan or by reducing geometric uncertainty during treatment with corrective verification imaging. Internal organ motion and setup error were evaluated simultaneously for 118 individual patients with implanted fiducials and MV electronic portal imaging (EPI). A Gaussian probability density function (PDF) is reasonable for modeling geometric uncertainties as indicated by the group of 118 patients. The Gaussian PDF is patient specific and group standard deviation (SD) should not be used for accurate treatment planning for individual patients. In addition, individual SD should not be determined or predicted from small imaging samples because of the random nature of the fluctuations. Frequent verification imaging should be employed in situations where geometric uncertainties are expected. Cumulative PDF data can be used for re-planning to assess accuracy of delivered dose. Group data is useful for determining worst case discrepancy between planned and delivered dose. The margins for the PTV should ideally represent true geometric uncertainties. The measured geometric uncertainties were used in this thesis to assess PTV coverage, dose to OAR, equivalent uniform dose per fraction (EUDf) and NTCP. The dose distribution including geometric uncertainties was determined from integration of the convolution of the static dose gradient with the PDF. Integration of the convolution of the static dose and derivative of the PDF can also be used to determine the dose including geometric uncertainties although this method was not investigated in detail. Local maximum dose gradient (LMDG) was determined via optimization of dose objective function by manually adjusting DVH control points or selecting beam numbers and directions during IMRT treatment planning. Minimum SD (SDmin) is used when geometric uncertainty is corrected with verification imaging. Maximum SD (SDmax) is used when the geometric uncertainty is known to be large and difficult to manage. SDmax was 4.38 mm in anterior-posterior (AP) direction, 2.70 mm in left-right (LR) direction and 4.35 mm in superior-inferior (SI) direction; SDmin was 1.1 mm in all three directions if less than 2 mm threshold was used for uncorrected fractions in every direction. EUDf is a useful QA parameter for interpreting the biological impact of geometric uncertainties on the static dose distribution. The EUDf has been used as the basis for the time-course NTCP evaluation in the thesis. Relative NTCP values are useful for comparative QA checking by normalizing known complications (e.g. reported in the RTOG studies) to specific DVH control points.
For prostate cancer patients, rectal complications were evaluated from specific RTOG clinical trials and a detailed evaluation of the treatment techniques (e.g. dose prescription, DVH, number of beams, beam angles). Treatment plans that did not meet DVH constraints represented additional complication risk. Geometric uncertainties improved or worsened rectal NTCP depending on the individual internal organ motion within the patient.
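    A one-dimensional sketch of the convolution step described above: the static dose profile is blurred by convolving it with a Gaussian geometric-uncertainty PDF, which reduces the maximum dose gradient. The dose profile, grid spacing and the 4.4 mm SD (of the order of the AP value quoted above) are illustrative assumptions, not data from the thesis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Hypothetical 1-D static dose profile across the PTV edge (1 mm grid):
# 78 Gy inside the target falling to ~30% outside over a steep penumbra.
x = np.arange(-50, 51, 1.0)                       # position (mm)
static_dose = 78.0 * (0.3 + 0.7 / (1.0 + np.exp((x - 20.0) / 2.5)))

# Blur the static dose with a Gaussian geometric-uncertainty PDF; the
# convolution is what produces the "blurred" dose gradient.
sigma_mm = 4.4                                    # illustrative SD
blurred_dose = gaussian_filter1d(static_dose, sigma=sigma_mm)  # sigma in grid units (1 mm)

# Compare the maximum dose gradient before and after blurring.
grad_static = np.max(np.abs(np.gradient(static_dose, x)))
grad_blurred = np.max(np.abs(np.gradient(blurred_dose, x)))
print(f"max gradient: static {grad_static:.2f} Gy/mm, blurred {grad_blurred:.2f} Gy/mm")
```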

  12. A robust method to forecast volcanic ash clouds

    USGS Publications Warehouse

    Denlinger, Roger P.; Pavolonis, Mike; Sieglaff, Justin

    2012-01-01

    Ash clouds emanating from volcanic eruption columns often form trails of ash extending thousands of kilometers through the Earth's atmosphere, disrupting air traffic and posing a significant hazard to air travel. To mitigate such hazards, the community charged with reducing flight risk must accurately assess risk of ash ingestion for any flight path and provide robust forecasts of volcanic ash dispersal. In response to this need, a number of different transport models have been developed for this purpose and applied to recent eruptions, providing a means to assess uncertainty in forecasts. Here we provide a framework for optimal forecasts and their uncertainties given any model and any observational data. This involves random sampling of the probability distributions of input (source) parameters to a transport model and iteratively running the model with different inputs, each time assessing the predictions that the model makes about ash dispersal by direct comparison with satellite data. The results of these comparisons are embodied in a likelihood function whose maximum corresponds to the minimum misfit between model output and observations. Bayes theorem is then used to determine a normalized posterior probability distribution and from that a forecast of future uncertainty in ash dispersal. The nature of ash clouds in heterogeneous wind fields creates a strong maximum likelihood estimate in which most of the probability is localized to narrow ranges of model source parameters. This property is used here to accelerate probability assessment, producing a method to rapidly generate a prediction of future ash concentrations and their distribution based upon assimilation of satellite data as well as model and data uncertainties. Applying this method to the recent eruption of Eyjafjallajökull in Iceland, we show that the 3 and 6 h forecasts of ash cloud location probability encompassed the location of observed satellite-determined ash cloud loads, providing an efficient means to assess all of the hazards associated with these ash clouds.
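    A schematic sketch of the workflow described above, with a toy one-dimensional "transport model" standing in for a real ash dispersal model: source parameters are sampled from their priors, each sample is scored with a Gaussian likelihood against (synthetic) satellite column loads, and Bayes' theorem turns the scores into posterior weights for the forecast. All functions and numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_transport(source_height_km, mass_rate_kgs, x_km):
    """Toy stand-in for a transport model: ash column load (g/m^2) along a
    1-D downwind transect.  A real application would run a full dispersal
    model for each sampled source."""
    spread = 50.0 + 20.0 * source_height_km
    return mass_rate_kgs * 1e-4 * np.exp(-0.5 * (x_km / spread) ** 2)

x = np.linspace(0.0, 500.0, 51)                 # downwind distance (km)
sigma_obs = 0.5                                 # observation noise (g/m^2)
obs = toy_transport(8.0, 500.0, x) + rng.normal(0.0, sigma_obs, x.size)  # "satellite" loads

# 1) Randomly sample the source parameters from their prior distributions.
n = 2000
heights = rng.uniform(2.0, 15.0, n)             # plume height (km)
rates = rng.uniform(50.0, 2000.0, n)            # mass eruption rate (kg/s)

# 2) Run the model for each sample and evaluate a Gaussian likelihood
#    from the misfit to the satellite observations.
ensemble = np.array([toy_transport(h, m, x) for h, m in zip(heights, rates)])
log_like = -0.5 * np.sum(((ensemble - obs) / sigma_obs) ** 2, axis=1)

# 3) Bayes: normalize to posterior weights and form a weighted forecast.
w = np.exp(log_like - log_like.max())
w /= w.sum()
forecast = w @ ensemble
print("posterior-mean plume height (km):", float(w @ heights))
print("forecast peak column load (g/m^2):", float(forecast.max()))
```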

  13. Path planning in uncertain flow fields using ensemble method

    NASA Astrophysics Data System (ADS)

    Wang, Tong; Le Maître, Olivier P.; Hoteit, Ibrahim; Knio, Omar M.

    2016-10-01

    An ensemble-based approach is developed to conduct optimal path planning in unsteady ocean currents under uncertainty. We focus our attention on two-dimensional steady and unsteady uncertain flows, and adopt a sampling methodology that is well suited to operational forecasts, where an ensemble of deterministic predictions is used to model and quantify uncertainty. In an operational setting, much about dynamics, topography, and forcing of the ocean environment is uncertain. To address this uncertainty, the flow field is parametrized using a finite number of independent canonical random variables with known densities, and the ensemble is generated by sampling these variables. For each of the resulting realizations of the uncertain current field, we predict the path that minimizes the travel time by solving a boundary value problem (BVP), based on the Pontryagin maximum principle. A family of backward-in-time trajectories starting at the end position is used to generate suitable initial values for the BVP solver. This allows us to examine and analyze the performance of the sampling strategy and to develop insight into extensions dealing with general circulation ocean models. In particular, the ensemble method enables us to perform a statistical analysis of travel times and consequently develop a path planning approach that accounts for these statistics. The proposed methodology is tested for a number of scenarios. We first validate our algorithms by reproducing simple canonical solutions, and then demonstrate our approach in more complex flow fields, including idealized, steady and unsteady double-gyre flows.

  14. Calculation of airborne radioactivity in a Technegas lung ventilation unit.

    PubMed

    López Medina, A; Miñano, J A; Terrón, J A; Bullejos, J A; Guerrero, R; Arroyo, T; Ramírez, A; Llamas, J M

    1999-12-01

    Airborne contamination by 99Tcm has been monitored in the Nuclear Medicine Department in our hospital to assess the risk of internal contamination to occupational workers exposed to Technegas studies. An air sampler fitted with a membrane filter was used. The optimum time for air absorption for obtaining the maximum activity in the filter was calculated. Maximum activity in the membrane filter ensures minimum uncertainty, which is especially important when low-level activities are being measured. The optimum time depends on air absorption velocity, room volume and filter efficiency for isotope collection. It tends to 1/lambda (lambda = disintegration constant for 99Tcm) for large volume and low velocity. Room activity with the air pump switched on was related to filter activity, and its variation with time was studied. Free activity in air for each study was approximately 7 × 10^-4 times the activity used, and the effective half-life of the isotope in the room was 13.9 min (decay and diffusion). For a typical study (630 MBq), the effective dose to staff was 0.01 microSv when in the room for 10 min.
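    The optimum sampling time can be illustrated with a simple two-compartment model (an assumption for illustration, not necessarily the authors' formulation): room activity decays with an effective constant that includes diffusion, while the filter collects at a rate proportional to flow/volume and decays with the physical lambda of 99Tcm. In the large-volume, low-flow limit the optimum tends to 1/lambda, as stated above. All parameter values below are hypothetical.

```python
import numpy as np

# 99mTc physical decay constant (half-life ~6.01 h), in 1/min.
lam = np.log(2) / (6.01 * 60.0)

def filter_activity(t_min, lam_eff, eps=0.9, flow_lpm=30.0, volume_l=60000.0, a0=1.0):
    """Activity on the membrane filter after sampling for t_min minutes,
    for a simple two-compartment model: room activity decays with an
    effective constant lam_eff (decay + diffusion) while the filter
    collects at rate eps*flow/volume and decays physically."""
    k = eps * flow_lpm / volume_l
    if abs(lam - lam_eff) < 1e-12:
        return k * a0 * t_min * np.exp(-lam * t_min)
    return k * a0 * (np.exp(-lam_eff * t_min) - np.exp(-lam * t_min)) / (lam - lam_eff)

t = np.linspace(1.0, 2000.0, 20000)

# Effective half-life of 13.9 min, as reported above (decay plus diffusion).
lam_eff_fast = np.log(2) / 13.9
t_opt_fast = t[np.argmax(filter_activity(t, lam_eff_fast))]

# Large room / low removal: room activity decays almost only by physical
# decay, and the optimum sampling time approaches 1/lambda.
t_opt_slow = t[np.argmax(filter_activity(t, lam * 1.001))]
print(f"optimum sampling time: {t_opt_fast:.0f} min (13.9 min eff. half-life), "
      f"{t_opt_slow:.0f} min vs 1/lambda = {1/lam:.0f} min")
```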

  15. Bayesian Monte Carlo and Maximum Likelihood Approach for ...

    EPA Pesticide Factsheets

    Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood estimation (BMCML) to calibrate a lake oxygen recovery model. We first derive an analytical solution of the differential equation governing lake-averaged oxygen dynamics as a function of time-variable wind speed. Statistical inferences on model parameters and predictive uncertainty are then drawn by Bayesian conditioning of the analytical solution on observed daily wind speed and oxygen concentration data obtained from an earlier study during two recovery periods on a eutrophic lake in upstate New York. The model is calibrated using oxygen recovery data for one year and statistical inferences were validated using recovery data for another year. Compared with an essentially two-step regression and optimization approach, the BMCML results are more comprehensive and performed relatively better in predicting the observed temporal dissolved oxygen levels (DO) in the lake. BMCML also produced comparable calibration and validation results with those obtained using the popular Markov Chain Monte Carlo technique (MCMC) and is computationally simpler and easier to implement than the MCMC. Next, using the calibrated model, we derive an optimal relationship between liquid film-transfer coefficien

  16. A Maximum Likelihood Approach to Determine Sensor Radiometric Response Coefficients for NPP VIIRS Reflective Solar Bands

    NASA Technical Reports Server (NTRS)

    Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong

    2011-01-01

    Calibration of optical sensors aboard Earth-orbiting satellites, such as the next-generation Visible/Infrared Imager/Radiometer Suite (VIIRS), assumes that the sensor's radiometric response in the Reflective Solar Bands (RSB) is described by a quadratic polynomial relating the aperture spectral radiance to the sensor Digital Number (DN) readout. For VIIRS Flight Unit 1, the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on-orbit through observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a Maximum Likelihood approach is applied in carrying out the least-squares procedure. Crucial to the Maximum Likelihood least-squares procedure is the computation of the weight. The weight not only has a contribution from the noise of the sensor's digital count (with an important contribution from digitization error) but is also affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and the dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The Maximum Likelihood approach demonstrates the inadequacy of the attenuation method model with a quadratic polynomial for the retrieved spectral radiance. We show that using the inadequate model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.
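    A sketch of a maximum-likelihood ("effective variance") weighting scheme of the general kind described above: noise in the independent variable (DN) is mapped into the dependent variable through the local slope of the quadratic, so the weights are iterated together with the coefficients. The data, noise levels and iteration scheme are illustrative assumptions, not the actual VIIRS attenuation-method analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical truth: radiance L = c0 + c1*dn + c2*dn^2.
c_true = np.array([0.5, 0.02, 2.0e-6])
dn_true = np.linspace(100.0, 3500.0, 40)
L_true = c_true[0] + c_true[1] * dn_true + c_true[2] * dn_true**2

sigma_dn, sigma_L = 2.0, 0.05                      # noise in both variables
dn = dn_true + rng.normal(0, sigma_dn, dn_true.size)
L = L_true + rng.normal(0, sigma_L, dn_true.size)

# Maximum-likelihood fit with "effective variance" weights: noise in the
# independent variable (DN) is mapped into the dependent variable through
# the local slope dL/dDN, so the weights depend on the current coefficients.
coef = np.polyfit(dn, L, 2)[::-1]                  # unweighted start (c0, c1, c2)
X = np.vander(dn, 3, increasing=True)              # design matrix [1, dn, dn^2]
for _ in range(10):                                # iterate weights to convergence
    slope = coef[1] + 2.0 * coef[2] * dn
    w = 1.0 / (sigma_L**2 + (slope * sigma_dn) ** 2)
    W = np.diag(w)
    cov = np.linalg.inv(X.T @ W @ X)               # coefficient covariance
    coef = cov @ X.T @ W @ L

print("coefficients:", coef)
print("1-sigma uncertainties:", np.sqrt(np.diag(cov)))
```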

  17. Cosmological horizons, uncertainty principle, and maximum length quantum mechanics

    NASA Astrophysics Data System (ADS)

    Perivolaropoulos, L.

    2017-05-01

    The cosmological particle horizon is the maximum measurable length in the Universe. The existence of such a maximum observable length scale implies a modification of the quantum uncertainty principle. Thus, due to the nonlocality of quantum mechanics, the global properties of the Universe could produce a signature on the behavior of local quantum systems. A generalized uncertainty principle (GUP) that is consistent with the existence of such a maximum observable length scale l_max is Δx Δp ≥ (ℏ/2) 1/(1 − α Δx²), where α = l_max^-2 ≃ (H0/c)² (H0 is the Hubble parameter and c is the speed of light). In addition to the existence of a maximum measurable length l_max = 1/√α, this form of the GUP also implies the existence of a minimum measurable momentum p_min = (3√3/4) ℏ√α. Using an appropriate representation of the position and momentum quantum operators, we show that the spectrum of the one-dimensional harmonic oscillator becomes Ē_n = 2n + 1 + λ_n ᾱ, where Ē_n ≡ 2E_n/ℏω is the dimensionless properly normalized nth energy level, ᾱ is a dimensionless parameter with ᾱ ≡ αℏ/(mω), and λ_n ~ n² for n ≫ 1 (we show the full form of λ_n in the text). For a typical vibrating diatomic molecule and l_max = c/H0, we find ᾱ ~ 10^-77, and therefore for such a system this effect is beyond the reach of current experiments. However, this effect could be more important in the early Universe and could produce signatures in the primordial perturbation spectrum induced by quantum fluctuations of the inflaton field.
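    As a consistency check, the quoted minimum momentum follows directly from the stated GUP by elementary calculus:

\[
\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}\,\frac{1}{1-\alpha\,\Delta x^{2}},
\qquad \alpha \equiv l_{\max}^{-2}
\;\;\Longrightarrow\;\;
\Delta p \;\ge\; \frac{\hbar}{2\,\Delta x\,(1-\alpha\,\Delta x^{2})}.
\]
The right-hand side is smallest where $f(\Delta x)=\Delta x\,(1-\alpha\,\Delta x^{2})$ is largest on $0<\Delta x<l_{\max}$; setting $f'(\Delta x)=1-3\alpha\,\Delta x^{2}=0$ gives $\Delta x = 1/\sqrt{3\alpha}$ and $f_{\max}=2/(3\sqrt{3\alpha})$, hence
\[
p_{\min} \;=\; \frac{\hbar}{2 f_{\max}} \;=\; \frac{3\sqrt{3}}{4}\,\hbar\sqrt{\alpha}.
\]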

  18. Evaluating sources of uncertainties in finite-fault source models: lessons from the 2009 Mw6.1 L'Aquila earthquake, Italy

    NASA Astrophysics Data System (ADS)

    Ragon, T.; Sladen, A.; Bletery, Q.; Simons, M.; Magnoni, F.; Avallone, A.; Cavalié, O.; Vergnolle, M.

    2016-12-01

    Despite the diversity of available data for the Mw 6.1 2009 earthquake in L'Aquila, Italy, published finite fault slip models are surprisingly different. For instance, the amplitude of the maximum coseismic slip patch varies from 80 cm to 225 cm, and its depth oscillates between 5 and 15 km. Discrepancies between proposed source parameters are believed to result from three sources: observational uncertainties, epistemic uncertainties, and the inherent non-uniqueness of inverse problems. We explore the whole solution space of fault-slip models compatible with the data within the range of both observational and epistemic uncertainties by performing a fully Bayesian analysis. In this initial stage, we restrict our analysis to the static problem. In terms of observation uncertainty, we must take into account the difference in time span associated with the different data types: InSAR images provide excellent spatial coverage but usually correspond to a period of a few days to weeks after the mainshock and can thus be potentially biased by significant afterslip. Continuous GPS stations do not have the same shortcoming, but in contrast do not have the desired spatial coverage near the fault. In the case of the L'Aquila earthquake, InSAR images include a minimum of 6 days of afterslip. Here, we explicitly account for these different time windows in the inversion by jointly inverting for coseismic and post-seismic fault slip. Regarding epistemic or modeling uncertainties, we focus on the impact of uncertain fault geometry and elastic structure. Modeling errors, which result from inaccurate model predictions and are generally neglected, are estimated for both earth model and fault geometry as non-diagonal covariance matrices. The L'Aquila earthquake is particularly suited to investigation of these effects given the availability of a detailed aftershock catalog and 3D velocity models. This work aims at improving our knowledge of the L'Aquila earthquake as well as at providing a more general perspective on which uncertainties are the most critical in finite-fault source studies.

  19. A continuous-time adaptive particle filter for estimations under measurement time uncertainties with an application to a plasma-leucine mixed effects model

    PubMed Central

    2013-01-01

    Background When mathematical modelling is applied to many different application areas, a common task is the estimation of states and parameters based on measurements. With this kind of inference making, uncertainties in the time when the measurements have been taken are often neglected, but especially in applications taken from the life sciences, this kind of error can considerably influence the estimation results. As an example in the context of personalized medicine, the model-based assessment of the effectiveness of drugs is coming to play an important role. Systems biology may help here by providing good pharmacokinetic and pharmacodynamic (PK/PD) models. Inference on these systems based on data gained from clinical studies with several patient groups becomes a major challenge. Particle filters are a promising approach to tackle these difficulties but are by themselves not ready to handle uncertainties in measurement times. Results In this article, we describe a variant of the standard particle filter (PF) algorithm which allows state and parameter estimation with the inclusion of measurement time uncertainties (MTU). The modified particle filter, which we call MTU-PF, also allows the application of an adaptive stepsize choice in the time-continuous case to avoid degeneracy problems. The modification is based on the model assumption of uncertain measurement times. While the assumption of randomness in the measurements themselves is common, the corresponding measurement times are generally taken as deterministic and exactly known. Especially in cases where the data are gained from measurements on blood or tissue samples, a relatively high uncertainty in the true measurement time seems to be a natural assumption. Our method is appropriate in cases where relatively few data are used from a relatively large number of groups or individuals, which introduce mixed effects in the model. This is a typical setting of clinical studies. We demonstrate the method on a small artificial example and apply it to a mixed effects model of plasma-leucine kinetics with data from a clinical study which included 34 patients. Conclusions Comparisons of our MTU-PF with the standard PF and with an alternative Maximum Likelihood estimation method on the small artificial example clearly show that the MTU-PF obtains better estimates. Considering the application to the data from the clinical study, the MTU-PF shows a similar performance with respect to the quality of estimated parameters compared with the standard particle filter, but besides that, the MTU-PF algorithm proves to be less prone to degeneration than the standard particle filter. PMID:23331521
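    The core idea, stripped of the mixed-effects and adaptive-stepsize machinery of the MTU-PF itself, can be illustrated with a toy filter in which every particle also samples a plausible true measurement time around the nominal one, so that timing uncertainty is folded into the likelihood. The decay model, noise levels and sampling scheme below are hypothetical, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model: x(t) = x0 * exp(-k t), observed with noise at nominally known
# times that are themselves uncertain (e.g. blood-sample timing).
k_true, x0 = 0.30, 10.0
t_nominal = np.array([1.0, 2.0, 4.0, 6.0, 8.0])
t_actual = t_nominal + rng.normal(0.0, 0.3, t_nominal.size)   # true (unknown) times
y = x0 * np.exp(-k_true * t_actual) + rng.normal(0.0, 0.2, t_nominal.size)

n_particles = 5000
k_particles = rng.uniform(0.05, 0.8, n_particles)      # parameter particles
log_w = np.zeros(n_particles)
sigma_y, sigma_t = 0.2, 0.3

for t_nom, obs in zip(t_nominal, y):
    # Each particle also samples a plausible true measurement time around
    # the nominal one, so timing uncertainty widens the likelihood.
    t_jitter = t_nom + rng.normal(0.0, sigma_t, n_particles)
    pred = x0 * np.exp(-k_particles * np.clip(t_jitter, 0.0, None))
    log_w += -0.5 * ((obs - pred) / sigma_y) ** 2
    # Resample to limit degeneracy (systematic resampling would be better).
    w = np.exp(log_w - log_w.max()); w /= w.sum()
    idx = rng.choice(n_particles, size=n_particles, p=w)
    k_particles, log_w = k_particles[idx], np.zeros(n_particles)

print(f"posterior mean k = {k_particles.mean():.3f} (true {k_true})")
```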

  20. Generalized Maximum Entropy

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Stutz, John

    2005-01-01

    A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
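    A numerical illustration of the two approaches contrasted above, using a six-sided die with an uncertain mean constraint (this is not the paper's closed-form generalized MaxEnt density): classic MaxEnt treats the observed mean as exact, whereas sampling the constraint value from a Gaussian induces a spread over the MaxEnt probabilities.

```python
import numpy as np
from scipy.optimize import brentq

values = np.arange(1, 7)                      # outcomes of a six-sided die

def maxent_probs(mean_constraint):
    """Classic MaxEnt distribution on {1..6} subject to a fixed mean:
    p_i proportional to exp(lambda * i), with lambda chosen to match the mean."""
    def mean_gap(lam):
        w = np.exp(lam * values)
        return w @ values / w.sum() - mean_constraint
    lam = brentq(mean_gap, -50.0, 50.0)
    p = np.exp(lam * values)
    return p / p.sum()

# Classic MaxEnt: treat the empirically observed mean (say 4.5) as exact.
print("classic MaxEnt:", np.round(maxent_probs(4.5), 3))

# Generalized view, approximated numerically: the constraint value is
# uncertain (Gaussian, sd 0.2 from a finite sample), which induces a
# distribution over the MaxEnt probabilities rather than a single point.
rng = np.random.default_rng(3)
samples = np.array([maxent_probs(m) for m in rng.normal(4.5, 0.2, 2000)])
print("mean probs:", np.round(samples.mean(axis=0), 3))
print("sd of probs:", np.round(samples.std(axis=0), 3))
```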

  1. A channel dynamics model for real-time flood forecasting

    USGS Publications Warehouse

    Hoos, Anne B.; Koussis, Antonis D.; Beale, Guy O.

    1989-01-01

    A new channel dynamics scheme (alternative system predictor in real time (ASPIRE)), designed specifically for real-time river flow forecasting, is introduced to reduce uncertainty in the forecast. ASPIRE is a storage routing model that limits the influence of catchment model forecast errors to the downstream station closest to the catchment. Comparisons with the Muskingum routing scheme in field tests suggest that the ASPIRE scheme can provide more accurate forecasts, probably because discharge observations are used to a maximum advantage and routing reaches (and model errors in each reach) are uncoupled. Using ASPIRE in conjunction with the Kalman filter did not improve forecast accuracy relative to a deterministic updating procedure. Theoretical analysis suggests that this is due to a large process noise to measurement noise ratio.
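    For orientation, the sketch below implements a single-reach Muskingum routing step, the storage-routing scheme that ASPIRE is compared against (ASPIRE itself is not reproduced here); K, X, the time step and the inflow hydrograph are hypothetical.

```python
import numpy as np

def muskingum_route(inflow, K=12.0, X=0.2, dt=6.0):
    """Route an inflow hydrograph through one reach with the Muskingum
    storage scheme.  K and dt are in hours; X is the dimensionless
    weighting factor."""
    denom = 2 * K * (1 - X) + dt
    c0 = (dt - 2 * K * X) / denom
    c1 = (dt + 2 * K * X) / denom
    c2 = (2 * K * (1 - X) - dt) / denom
    outflow = [inflow[0]]                       # assume initial steady state
    for i_prev, i_now in zip(inflow[:-1], inflow[1:]):
        outflow.append(c0 * i_now + c1 * i_prev + c2 * outflow[-1])
    return np.array(outflow)

# Hypothetical flood-wave inflow (m^3/s) at 6 h intervals.
inflow = np.array([20, 20, 50, 130, 260, 300, 250, 180, 120, 80, 55, 40, 30, 25])
print(np.round(muskingum_route(inflow), 1))
```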

  2. Improvements to Earthquake Location with a Fuzzy Logic Approach

    NASA Astrophysics Data System (ADS)

    Gökalp, Hüseyin

    2018-01-01

    In this study, improvements to the earthquake location method were investigated using a fuzzy logic approach proposed by Lin and Sanford (Bull Seismol Soc Am 91:82-93, 2001). The method has certain advantages compared to the inverse methods in terms of eliminating the uncertainties of arrival times and reading errors. In this study, adopting this approach, epicentral locations were determined based on the results of a fuzzy logic space concerning the uncertainties in the velocity models. To map the uncertainties in arrival times into the fuzzy logic space, a trapezoidal membership function was constructed by directly using the travel time difference between the two stations for the P- and S-arrival times instead of the P- and S-wave models, to eliminate the need for obtaining information concerning the velocity structure of the study area. The results showed that this method worked most effectively when earthquakes occurred away from a network or when the arrival time data contained phase reading errors. In this study, to resolve the problems related to determining the epicentral locations of the events, a forward modeling method like the grid search technique was used by applying different logical operations (i.e., intersection, union, and their combination) with a fuzzy logic approach. The locations of the events depended on the fuzzy logic outputs in the fuzzy logic space, obtained by searching over a gridded region. Determining locations by defuzzifying only the grid points with a membership value of 1 (obtained by normalizing the maximum fuzzy output values) resulted in more reliable epicentral locations for the earthquakes than the other approaches. In addition, throughout the process, the center-of-gravity method was used as a defuzzification operation.
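    A toy two-dimensional version of the grid-search-with-intersection idea described above: for each station pair, the misfit between observed and predicted P travel-time differences is mapped through a trapezoidal membership function, the pairwise memberships are combined by the minimum (intersection), and the highest-membership grid nodes are defuzzified by their centre of gravity. Station coordinates, arrival times, the assumed velocity and the tolerances are all hypothetical.

```python
import numpy as np

# Hypothetical station coordinates (km) and observed P arrival times (s).
stations = np.array([[0.0, 0.0], [40.0, 5.0], [10.0, 35.0], [45.0, 40.0]])
arrivals = np.array([4.17, 3.73, 3.73, 5.89])
vp = 6.0                                   # assumed P velocity (km/s)

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises on [a,b], flat on [b,c], falls on [c,d]."""
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

xs, ys = np.meshgrid(np.linspace(-10, 60, 141), np.linspace(-10, 60, 141))
membership = np.ones_like(xs)

# For every station pair, compare the observed arrival-time difference with
# the predicted travel-time difference and map the misfit into [0, 1].
for i in range(len(stations)):
    for j in range(i + 1, len(stations)):
        ti = np.hypot(xs - stations[i, 0], ys - stations[i, 1]) / vp
        tj = np.hypot(xs - stations[j, 0], ys - stations[j, 1]) / vp
        misfit = (ti - tj) - (arrivals[i] - arrivals[j])
        # Tolerances (s) reflecting picking error; intersection = minimum.
        membership = np.minimum(membership, trapezoid(misfit, -1.0, -0.3, 0.3, 1.0))

# Centre-of-gravity defuzzification over the highest-membership nodes.
mask = membership >= 0.99 * membership.max()
print("epicentre estimate (km):", xs[mask].mean(), ys[mask].mean())
```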

  3. Towards robust quantification and reduction of uncertainty in hydrologic predictions: Integration of particle Markov chain Monte Carlo and factorial polynomial chaos expansion

    NASA Astrophysics Data System (ADS)

    Wang, S.; Huang, G. H.; Baetz, B. W.; Ancell, B. C.

    2017-05-01

    The particle filtering techniques have been receiving increasing attention from the hydrologic community due to its ability to properly estimate model parameters and states of nonlinear and non-Gaussian systems. To facilitate a robust quantification of uncertainty in hydrologic predictions, it is necessary to explicitly examine the forward propagation and evolution of parameter uncertainties and their interactions that affect the predictive performance. This paper presents a unified probabilistic framework that merges the strengths of particle Markov chain Monte Carlo (PMCMC) and factorial polynomial chaos expansion (FPCE) algorithms to robustly quantify and reduce uncertainties in hydrologic predictions. A Gaussian anamorphosis technique is used to establish a seamless bridge between the data assimilation using the PMCMC and the uncertainty propagation using the FPCE through a straightforward transformation of posterior distributions of model parameters. The unified probabilistic framework is applied to the Xiangxi River watershed of the Three Gorges Reservoir (TGR) region in China to demonstrate its validity and applicability. Results reveal that the degree of spatial variability of soil moisture capacity is the most identifiable model parameter with the fastest convergence through the streamflow assimilation process. The potential interaction between the spatial variability in soil moisture conditions and the maximum soil moisture capacity has the most significant effect on the performance of streamflow predictions. In addition, parameter sensitivities and interactions vary in magnitude and direction over time due to temporal and spatial dynamics of hydrologic processes.

  4. Addressing forecast uncertainty impact on CSP annual performance

    NASA Astrophysics Data System (ADS)

    Ferretti, Fabio; Hogendijk, Christopher; Aga, Vipluv; Ehrsam, Andreas

    2017-06-01

    This work analyzes the impact of weather forecast uncertainty on the annual performance of a Concentrated Solar Power (CSP) plant. The forecast time series was produced by a commercial forecast provider using the technique of hindcasting for the full year 2011 at hourly resolution for Ouarzazate, Morocco. The impact of forecast uncertainty was measured on three case studies, representing typical tariff schemes observed in recent CSP projects plus a spot market price scenario. The analysis has been carried out using an annual performance model and a standard dispatch optimization algorithm based on dynamic programming. The dispatch optimizer proved to be a key requisite for maximizing annual revenues, depending on the price scenario, harvesting the maximum potential out of the CSP plant. Forecasting uncertainty affects the revenue enhancement outcome of a dispatch optimizer depending on the error level and the price function. Results show that forecasting accuracy of direct normal irradiance (DNI) is important to make best use of an optimized dispatch, but also that a higher number of calculation updates can partially compensate for this uncertainty. Improvement in revenues can be significant depending on the price profile and the optimal operation strategy. Pathways to achieve better performance are presented: performing more updates, both by repeatedly generating new optimized dispatch trajectories and by updating weather forecasts more often. This study shows the importance of working on DNI weather forecasting for revenue enhancement as well as selecting weather services that can provide multiple updates a day and probabilistic forecast information.

  5. A multi-objective optimization model for hub network design under uncertainty: An inexact rough-interval fuzzy approach

    NASA Astrophysics Data System (ADS)

    Niakan, F.; Vahdani, B.; Mohammadi, M.

    2015-12-01

    This article proposes a multi-objective mixed-integer model to optimize the location of hubs within a hub network design problem under uncertainty. The considered objectives include minimizing the maximum accumulated travel time, minimizing the total costs including transportation, fuel consumption and greenhouse emissions costs, and finally maximizing the minimum service reliability. In the proposed model, it is assumed that two nodes can be connected by several types of arcs, which differ in capacity, transportation mode, travel time, and transportation and construction costs. Moreover, in this model, determining the capacity of the hubs is part of the decision-making procedure and balancing requirements are imposed on the network. To solve the model, a hybrid solution approach is utilized based on inexact programming, interval-valued fuzzy programming and rough interval programming. Furthermore, a hybrid multi-objective metaheuristic algorithm, namely multi-objective invasive weed optimization (MOIWO), is developed for the given problem. Finally, various computational experiments are carried out to assess the proposed model and solution approaches.

  6. PEBBED Uncertainty and Sensitivity Analysis of the CRP-5 PBMR DLOFC Transient Benchmark with the SUSA Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom

    2011-01-01

    The need for a defendable and systematic uncertainty and sensitivity approach that conforms to the Code Scaling, Applicability, and Uncertainty (CSAU) process, and that could be used for a wide variety of software codes, was defined in 2008. The GRS (Gesellschaft für Anlagen und Reaktorsicherheit) company of Germany has developed one type of CSAU approach that is particularly well suited for legacy coupled core analysis codes, and a trial version of their commercial software product SUSA (Software for Uncertainty and Sensitivity Analyses) was acquired on May 12, 2010. This report summarizes the results of the initial investigations performed with SUSA, utilizing a typical High Temperature Reactor benchmark (the IAEA CRP-5 PBMR 400MW Exercise 2) and the PEBBED-THERMIX suite of codes. The following steps were performed as part of the uncertainty and sensitivity analysis: 1. Eight PEBBED-THERMIX model input parameters were selected for inclusion in the uncertainty study: the total reactor power, inlet gas temperature, decay heat, and the specific heat capacity and thermal conductivity of the fuel, pebble bed and reflector graphite. 2. The input parameter variations and probability density functions were specified, and a total of 800 PEBBED-THERMIX model calculations were performed, divided into 4 sets of 100 and 2 sets of 200 Steady State and Depressurized Loss of Forced Cooling (DLOFC) transient calculations each. 3. The steady state and DLOFC maximum fuel temperature, as well as the daily pebble fuel load rate data, were supplied to SUSA as model output parameters of interest. The 6 data sets were statistically analyzed to determine the 5% and 95% percentile values for each of the 3 output parameters with a 95% confidence level, and typical statistical indicators were also generated (e.g. Kendall, Pearson and Spearman coefficients). 4. A SUSA sensitivity study was performed to obtain correlation data between the input and output parameters, and to identify the primary contributors to the output data uncertainties. It was found that the uncertainties in the decay heat, pebble bed and reflector thermal conductivities were responsible for the bulk of the propagated uncertainty in the DLOFC maximum fuel temperature. It was also determined that the two standard deviation (2σ) uncertainty on the maximum fuel temperature was between ±58 °C (3.6%) and ±76 °C (4.7%) on a mean value of 1604 °C. These values mostly depended on the selection of the distribution types, and not on the number of model calculations above the required Wilks criteria (a (95%,95%) statement would usually require 93 model runs).
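    The Wilks criterion mentioned in the last step can be reproduced with a few lines; the 93-run figure quoted above corresponds to the two-sided first-order (95%, 95%) case, and the analogous one-sided requirement is 59 runs.

```python
def wilks_runs(beta=0.95, gamma=0.95, two_sided=True):
    """Smallest number of code runs n such that the sample extreme(s) bound
    the beta quantile(s) with confidence gamma (first-order Wilks formula)."""
    n = 1
    while True:
        if two_sided:
            confidence = 1.0 - beta**n - n * (1.0 - beta) * beta**(n - 1)
        else:
            confidence = 1.0 - beta**n
        if confidence >= gamma:
            return n
        n += 1

print("one-sided 95%/95%:", wilks_runs(two_sided=False))   # 59 runs
print("two-sided 95%/95%:", wilks_runs(two_sided=True))    # 93 runs
```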

  7. Limitations and tradeoffs in synchronization of large-scale networks with uncertain links

    PubMed Central

    Diwadkar, Amit; Vaidya, Umesh

    2016-01-01

    The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies. PMID:27067994

  8. Application of the Extreme Value Distribution to Estimate the Uncertainty of Peak Sound Pressure Levels at the Workplace.

    PubMed

    Lenzuni, Paolo

    2015-07-01

    The purpose of this article is to develop a method for the statistical inference of the maximum peak sound pressure level and of the associated uncertainty. Both quantities are requested by the EU directive 2003/10/EC for a complete and solid assessment of the noise exposure at the workplace. Based on the characteristics of the sound pressure waveform, it is hypothesized that the distribution of the measured peak sound pressure levels follows the extreme value distribution. The maximum peak level is estimated as the largest member of a finite population following this probability distribution. The associated uncertainty is also discussed, taking into account not only the contribution due to the incomplete sampling but also the contribution due to the finite precision of the instrumentation. The largest of the set of measured peak levels underestimates the maximum peak sound pressure level. The underestimate can be as large as 4 dB if the number of measurements is limited to 3-4, which is common practice in occupational noise assessment. The expanded uncertainty is also quite large (~2.5 dB), with a weak dependence on the sampling details. Following the procedure outlined in this article, a reliable comparison between the peak sound pressure levels measured in a workplace and the EU directive action limits is possible. Non-compliance can occur even when the largest of the set of measured peak levels is several dB below such limits.
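    A simplified sketch of the estimation idea: fit a type-I extreme value (Gumbel) distribution to a few measured peak levels, then use the fact that the maximum of N i.i.d. Gumbel variables is again Gumbel with its location shifted by beta*ln(N). Fitting the distribution directly to dB values and the choice of N are simplifying assumptions for illustration, not the article's full procedure (which also accounts for instrument precision); the measured levels are hypothetical.

```python
import numpy as np
from scipy.stats import gumbel_r

# Hypothetical measured peak sound pressure levels (dB(C)) from a few events.
peaks_db = np.array([133.2, 134.8, 132.5, 135.9, 134.1, 133.7])

# Fit the type-I extreme value (Gumbel) distribution to the measured peaks.
mu, beta = gumbel_r.fit(peaks_db)

# If the workday comprises N similar events, the maximum of N i.i.d. Gumbel
# variables is again Gumbel, location-shifted by beta*ln(N).
N = 200
loc_max = mu + beta * np.log(N)
max_dist = gumbel_r(loc=loc_max, scale=beta)

print(f"largest measured peak:       {peaks_db.max():.1f} dB")
print(f"most probable day maximum:   {loc_max:.1f} dB")
print(f"95th-percentile day maximum: {max_dist.ppf(0.95):.1f} dB")
```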

  9. Demonstration of risk based, goal driven framework for hydrological field campaigns and inverse modeling with case studies

    NASA Astrophysics Data System (ADS)

    Harken, B.; Geiges, A.; Rubin, Y.

    2013-12-01

    There are several stages in any hydrological modeling campaign, including: formulation and analysis of a priori information, data acquisition through field campaigns, inverse modeling, and forward modeling and prediction of some environmental performance metric (EPM). The EPM being predicted could be, for example, contaminant concentration, plume travel time, or aquifer recharge rate. These predictions often have significant bearing on some decision that must be made. Examples include: how to allocate limited remediation resources between multiple contaminated groundwater sites, where to place a waste repository site, and what extraction rates can be considered sustainable in an aquifer. Providing an answer to these questions depends on predictions of EPMs using forward models as well as levels of uncertainty related to these predictions. Uncertainty in model parameters, such as hydraulic conductivity, leads to uncertainty in EPM predictions. Often, field campaigns and inverse modeling efforts are planned and undertaken with reduction of parametric uncertainty as the objective. The tool of hypothesis testing allows this to be taken one step further by considering uncertainty reduction in the ultimate prediction of the EPM as the objective and gives a rational basis for weighing costs and benefits at each stage. When using the tool of statistical hypothesis testing, the EPM is cast into a binary outcome. This is formulated as null and alternative hypotheses, which can be accepted or rejected with statistical formality. When accounting for all sources of uncertainty at each stage, the level of significance of this test provides a rational basis for planning, optimization, and evaluation of the entire campaign. Case-specific information, such as the consequences of prediction error and site-specific costs, can be used in establishing selection criteria based on what level of risk is deemed acceptable. This framework is demonstrated and discussed using various synthetic case studies. The case studies involve contaminated aquifers where a decision must be made based on prediction of when a contaminant will arrive at a given location. The EPM, in this case contaminant travel time, is cast into the hypothesis testing framework. The null hypothesis states that the contaminant plume will arrive at the specified location before a critical value of time passes, and the alternative hypothesis states that the plume will arrive after the critical time passes. Different field campaigns are analyzed based on their effectiveness in reducing the probability of selecting the wrong hypothesis, which in this case corresponds to reducing uncertainty in the prediction of plume arrival time. To examine the role of inverse modeling in this framework, case studies involving both Maximum Likelihood parameter estimation and Bayesian inversion are used.

  10. Reusable launch vehicle model uncertainties impact analysis

    NASA Astrophysics Data System (ADS)

    Chen, Jiaye; Mu, Rongjun; Zhang, Xin; Deng, Yanpeng

    2018-03-01

    A reusable launch vehicle (RLV) has the typical characteristics of a complex aerodynamic shape coupled with the propulsion system, and its flight environment is highly complicated and intensely changeable. Its model therefore has large uncertainty, which makes the nominal system quite different from the real system. Consequently, studying the influence of these uncertainties on the stability of the control system is of great significance for controller design. In order to improve the performance of the RLV, this paper proposes an approach for analyzing the influence of the model uncertainties. For a typical RLV, the coupled dynamic and kinematic models are built. Different factors that introduce uncertainties during model building are then analyzed and summarized. After that, the model uncertainties are expressed according to an additive uncertainty model. The maximum singular value of the uncertainty matrix is chosen as the boundary model, and the norm of the uncertainty matrix is used to show how much the uncertainty factors influence the stability of the control system. The simulation results illustrate that the inertial factors have the largest influence on the stability of the system, and that it is necessary to take the model uncertainties into consideration before designing the controller for this kind of aircraft (like the RLV).

  11. Quantum entropy and uncertainty for two-mode squeezed, coherent and intelligent spin states

    NASA Technical Reports Server (NTRS)

    Aragone, C.; Mundarain, D.

    1993-01-01

    We compute the quantum entropy for monomode and two-mode systems set in squeezed states. Thereafter, the quantum entropy is also calculated for angular momentum algebra when the system is either in a coherent or in an intelligent spin state. These values are compared with the corresponding values of the respective uncertainties. In general, quantum entropies and uncertainties have the same minimum and maximum points. However, for coherent and intelligent spin states, it is found that some minima for the quantum entropy turn out to be uncertainty maxima. We feel that the quantum entropy we use provides the right answer, since it is given in an essentially unique way.

  12. Fast and Efficient Stochastic Optimization for Analytic Continuation

    DOE PAGES

    Bao, Feng; Zhang, Guannan; Webster, Clayton G; ...

    2016-09-28

    The analytic continuation of imaginary-time quantum Monte Carlo data to extract real-frequency spectra remains a key problem in connecting theory with experiment. Here we present a fast and efficient stochastic optimization method (FESOM) as a more accessible variant of the stochastic optimization method introduced by Mishchenko et al. [Phys. Rev. B 62, 6317 (2000)], and we benchmark the resulting spectra with those obtained by the standard maximum entropy method for three representative test cases, including data taken from studies of the two-dimensional Hubbard model. Generally, we find that our FESOM approach yields spectra similar to the maximum entropy results. In particular, while the maximum entropy method yields superior results when the quality of the data is strong, we find that FESOM is able to resolve fine structure with more detail when the quality of the data is poor. In addition, because of its stochastic nature, the method provides detailed information on the frequency-dependent uncertainty of the resulting spectra, while the maximum entropy method does so only for the spectral weight integrated over a finite frequency region. Therefore, we believe that this variant of the stochastic optimization approach provides a viable alternative to the routinely used maximum entropy method, especially for data of poor quality.

  13. Number-phase minimum-uncertainty state with reduced number uncertainty in a Kerr nonlinear interferometer

    NASA Astrophysics Data System (ADS)

    Kitagawa, M.; Yamamoto, Y.

    1987-11-01

    An alternative scheme for generating amplitude-squeezed states of photons based on unitary evolution which can properly be described by quantum mechanics is presented. This scheme is a nonlinear Mach-Zehnder interferometer containing an optical Kerr medium. The quasi-probability density (QPD) and photon-number distribution of the output field are calculated, and it is demonstrated that the reduced photon-number uncertainty and enhanced phase uncertainty maintain the minimum-uncertainty product. A self-phase-modulation of the single-mode quantized field in the Kerr medium is described based on localized operators. The spatial evolution of the state is demonstrated by QPD in the Schroedinger picture. It is shown that photon-number variance can be reduced to a level far below the limit for an ordinary squeezed state, and that the state prepared using this scheme remains a number-phase minimum-uncertainty state until the maximum reduction of number fluctuations is surpassed.

  14. An analysis and demonstration of clock synchronization by VLBI

    NASA Technical Reports Server (NTRS)

    Hurd, W. J.

    1972-01-01

    A prototype of a semireal-time system for synchronizing the DSN station clocks by radio interferometry was successfully demonstrated. The system utilized an approximate maximum likelihood estimation procedure for processing the data, thereby achieving essentially optimum time synchronization estimates for a given amount of data, or equivalently, minimizing the amount of data required for reliable estimation. Synchronization accuracies as good as 100 nsec rms were achieved between DSS 11 and DSS 12, both at Goldstone, California. The accuracy can be improved by increasing the system bandwidth until the fundamental limitations due to position uncertainties of baseline and source and atmospheric effects are reached. These limitations are under ten nsec for transcontinental baselines.

  15. Uncertainty estimation of the self-thinning process by Maximum-Entropy Principle

    Treesearch

    Shoufan Fang; George Z. Gertner

    2000-01-01

    When available information is scarce, the Maximum-Entropy Principle can estimate the distributions of parameters. In our case study, we estimated the distributions of the parameters of the forest self-thinning process based on literature information, and we derived the conditional distribution functions and estimated the 95 percent confidence interval (CI) of the self-...

  16. A Bayesian-Based Novel Methodology to Generate Reliable Site Response Mapping Sensitive to Data Uncertainties

    NASA Astrophysics Data System (ADS)

    Chakraborty, A.; Goto, H.

    2017-12-01

    The 2011 off the Pacific coast of Tohoku earthquake caused severe damage in many areas further inside the mainland because of site-amplification. Furukawa district in Miyagi Prefecture, Japan recorded significant spatial differences in ground motion even at sub-kilometer scales. The site responses in the damage zone far exceeded the levels in the hazard maps. One reason for the mismatch is that the maps follow only the mean value at the measurement locations, with no regard to the data uncertainties, and are thus not always reliable. Our research objective is to develop a methodology to incorporate data uncertainties in mapping and propose a reliable map. The methodology is based on a hierarchical Bayesian modeling of normally-distributed site responses in space where the mean (μ), site-specific variance (σ2) and between-sites variance (s2) parameters are treated as unknowns with a prior distribution. The observation data are artificially created site responses with varying means and variances for 150 seismic events across 50 locations in one-dimensional space. Spatially auto-correlated random effects were added to the mean (μ) using a conditionally autoregressive (CAR) prior. Inferences on the unknown parameters are drawn from the posterior distribution using Markov chain Monte Carlo methods. The goal is to find reliable estimates of μ sensitive to uncertainties. During initial trials, we observed that the tau (=1/s2) parameter of the CAR prior controls the μ estimation. Using a constraint, s = 1/(k×σ), five spatial models with varying k-values were created. We define reliability to be measured by the model likelihood and propose the maximum likelihood model to be highly reliable. The model with maximum likelihood was selected using a 5-fold cross-validation technique. The results show that the maximum likelihood model (μ*) follows the site-specific mean at low uncertainties and converges to the model-mean at higher uncertainties (Fig. 1). This result is highly significant as it successfully incorporates the effect of data uncertainties in mapping. This novel approach can be applied to any research field using mapping techniques. The methodology is now being applied to real records from a very dense seismic network in Furukawa district, Miyagi Prefecture, Japan to generate a reliable map of the site responses.

  17. Eddy-covariance data with low signal-to-noise ratio: time-lag determination, uncertainties and limit of detection

    NASA Astrophysics Data System (ADS)

    Langford, B.; Acton, W.; Ammann, C.; Valach, A.; Nemitz, E.

    2015-10-01

    All eddy-covariance flux measurements are associated with random uncertainties which are a combination of sampling error due to natural variability in turbulence and sensor noise. The former is the principal error for systems where the signal-to-noise ratio of the analyser is high, as is usually the case when measuring fluxes of heat, CO2 or H2O. Where signal is limited, which is often the case for measurements of other trace gases and aerosols, instrument uncertainties dominate. Here, we are applying a consistent approach based on auto- and cross-covariance functions to quantify the total random flux error and the random error due to instrument noise separately. As with previous approaches, the random error quantification assumes that the time lag between wind and concentration measurement is known. However, if combined with commonly used automated methods that identify the individual time lag by looking for the maximum in the cross-covariance function of the two entities, analyser noise additionally leads to a systematic bias in the fluxes. Combining data sets from several analysers and using simulations, we show that the method of time-lag determination becomes increasingly important as the magnitude of the instrument error approaches that of the sampling error. The flux bias can be particularly significant for disjunct data, whereas using a prescribed time lag eliminates these effects (provided the time lag does not fluctuate unduly over time). We also demonstrate that when sampling at higher elevations, where low frequency turbulence dominates and covariance peaks are broader, both the probability and magnitude of bias are magnified. We show that the statistical significance of noisy flux data can be increased (limit of detection can be decreased) by appropriate averaging of individual fluxes, but only if systematic biases are avoided by using a prescribed time lag. Finally, we make recommendations for the analysis and reporting of data with low signal-to-noise and their associated errors.
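
    The lag-determination step described above can be sketched in a few lines of Python/NumPy. This is an illustrative toy, not the authors' code; the synthetic signals, sampling rate and lag are made up. It contrasts the flux taken at the maximum of the cross-covariance with the flux at a prescribed lag.

        import numpy as np

        def cross_covariance(w, c, max_lag):
            """Cross-covariance of vertical wind w and concentration c for lags
            -max_lag..+max_lag (in samples); both series are mean-removed first."""
            w = w - w.mean()
            c = c - c.mean()
            lags = np.arange(-max_lag, max_lag + 1)
            return lags, np.array([np.mean(w * np.roll(c, -k)) for k in lags])

        rng = np.random.default_rng(0)
        n, true_lag = 18000, 12                       # 0.5 h at 10 Hz, 1.2 s sensor delay
        w = rng.normal(size=n)
        c = 0.05 * np.roll(w, true_lag) + rng.normal(size=n)   # low signal-to-noise tracer

        lags, cov = cross_covariance(w, c, max_lag=50)
        flux_maxcov = cov[np.argmax(np.abs(cov))]     # flux at the lag of maximum |covariance| (biased away from zero when noise dominates)
        flux_prescribed = cov[lags == true_lag][0]    # flux at a prescribed, fixed lag
        print(flux_maxcov, flux_prescribed)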

  18. Eddy-covariance data with low signal-to-noise ratio: time-lag determination, uncertainties and limit of detection

    NASA Astrophysics Data System (ADS)

    Langford, B.; Acton, W.; Ammann, C.; Valach, A.; Nemitz, E.

    2015-03-01

    All eddy-covariance flux measurements are associated with random uncertainties which are a combination of sampling error due to natural variability in turbulence and sensor noise. The former is the principal error for systems where the signal-to-noise ratio of the analyser is high, as is usually the case when measuring fluxes of heat, CO2 or H2O. Where signal is limited, which is often the case for measurements of other trace gases and aerosols, instrument uncertainties dominate. We are here applying a consistent approach based on auto- and cross-covariance functions to quantifying the total random flux error and the random error due to instrument noise separately. As with previous approaches, the random error quantification assumes that the time-lag between wind and concentration measurement is known. However, if combined with commonly used automated methods that identify the individual time-lag by looking for the maximum in the cross-covariance function of the two entities, analyser noise additionally leads to a systematic bias in the fluxes. Combining datasets from several analysers and using simulations we show that the method of time-lag determination becomes increasingly important as the magnitude of the instrument error approaches that of the sampling error. The flux bias can be particularly significant for disjunct data, whereas using a prescribed time-lag eliminates these effects (provided the time-lag does not fluctuate unduly over time). We also demonstrate that when sampling at higher elevations, where low frequency turbulence dominates and covariance peaks are broader, both the probability and magnitude of bias are magnified. We show that the statistical significance of noisy flux data can be increased (limit of detection can be decreased) by appropriate averaging of individual fluxes, but only if systematic biases are avoided by using a prescribed time-lag. Finally, we make recommendations for the analysis and reporting of data with low signal-to-noise and their associated errors.

  19. Application of at-site peak-streamflow frequency analyses for very low annual exceedance probabilities

    USGS Publications Warehouse

    Asquith, William H.; Kiang, Julie E.; Cohn, Timothy A.

    2017-07-17

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Nuclear Regulatory Commission, has investigated statistical methods for probabilistic flood hazard assessment to provide guidance on very low annual exceedance probability (AEP) estimation of peak-streamflow frequency and the quantification of corresponding uncertainties using streamgage-specific data. The term “very low AEP” implies exceptionally rare events defined as those having AEPs less than about 0.001 (or 1 × 10⁻³ in scientific notation or, for brevity, 10⁻³). Such low AEPs are of great interest to those involved with peak-streamflow frequency analyses for critical infrastructure, such as nuclear power plants. Flood frequency analyses at streamgages are most commonly based on annual instantaneous peak streamflow data and a probability distribution fit to these data. The fitted distribution provides a means to extrapolate to very low AEPs. Within the United States, the Pearson type III probability distribution, when fit to the base-10 logarithms of streamflow, is widely used, but other distribution choices exist. The USGS-PeakFQ software, implementing the Pearson type III within the Federal agency guidelines of Bulletin 17B (method of moments) and updates to the expected moments algorithm (EMA), was specially adapted for an “Extended Output” user option to provide estimates at selected AEPs from 10⁻³ to 10⁻⁶. Parameter estimation methods, in addition to product moments and EMA, include L-moments, maximum likelihood, and maximum product of spacings (maximum spacing estimation). This study comprehensively investigates multiple distributions and parameter estimation methods for two USGS streamgages (01400500 Raritan River at Manville, New Jersey, and 01638500 Potomac River at Point of Rocks, Maryland). The results of this study specifically involve the four methods for parameter estimation and up to nine probability distributions, including the generalized extreme value, generalized log-normal, generalized Pareto, and Weibull. Uncertainties in streamflow estimates for corresponding AEP are depicted and quantified as two primary forms: quantile (aleatoric [random sampling] uncertainty) and distribution-choice (epistemic [model] uncertainty). Sampling uncertainties of a given distribution are relatively straightforward to compute from analytical or Monte Carlo-based approaches. Distribution-choice uncertainty stems from choices of potentially applicable probability distributions for which divergence among the choices increases as AEP decreases. Conventional goodness-of-fit statistics, such as Cramér-von Mises, and L-moment ratio diagrams are demonstrated in order to hone distribution choice. The results generally show that distribution choice uncertainty is larger than sampling uncertainty for very low AEP values.
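
    The extrapolation step can be sketched roughly as follows. This is a minimal sketch only, using simple product moments of the base-10 logarithms with scipy's Pearson type III; it omits the EMA, regional skew weighting, low-outlier handling and confidence intervals that PeakFQ provides, and the synthetic record is made up.

        import numpy as np
        from scipy import stats

        def lp3_quantiles(peaks, aeps):
            """Fit a log-Pearson type III distribution to annual peaks by the method of
            moments on base-10 logarithms and return flow estimates for the given
            annual exceedance probabilities (AEPs)."""
            logq = np.log10(np.asarray(peaks, dtype=float))
            mu, sigma = logq.mean(), logq.std(ddof=1)
            g = stats.skew(logq, bias=False)          # station skew only (no regional weighting)
            aeps = np.asarray(aeps, dtype=float)
            # Non-exceedance probability = 1 - AEP
            z = stats.pearson3.ppf(1.0 - aeps, skew=g, loc=mu, scale=sigma)
            return 10.0 ** z

        # Example: extrapolate a synthetic 80-year record to very low AEPs (10^-2 to 10^-6)
        rng = np.random.default_rng(1)
        fake_peaks = 10 ** rng.normal(3.5, 0.25, size=80)
        print(lp3_quantiles(fake_peaks, [1e-2, 1e-3, 1e-4, 1e-6]))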

  20. Uncertainty Due to Unsteady Fluid/Structure Interaction for the Ares I Vehicle Traversing the Transonic Regime

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2012-01-01

    Rapid reduced-order numerical models are being investigated as candidates to simulate the dynamics of a flexible launch vehicle during atmospheric ascent. These new approaches have also been extended to include gust response. These methods are used to perform aeroelastic and gust response analyses at isolated Mach numbers. Such models require a method to time march through a succession of ascent Mach numbers. An approach is presented for interpolating reduced-order models of the unsteady aerodynamics at successive Mach numbers. The transonic Mach number range is considered here since launch vehicles can suffer the highest dynamic loads through this range. Realistic simulations of the flexible vehicle behavior as it traverses this Mach number range are presented. The response of the vehicle due to gusts is computed. Uncertainties in root mean square and maximum bending moment and crew module accelerations are presented due to assumed probability distributions in design parameters, ascent flight conditions, and gusts. The primary focus is on the uncertainty introduced by modeling fidelity. It is found that an unsteady reduced order model produces larger excursions in the root mean square loading and accelerations than does a quasi-steady reduced order model.

  1. Detailed Uncertainty Analysis of the Ares I A106 Liftoff/Transition Database

    NASA Technical Reports Server (NTRS)

    Hanke, Jeremy L.

    2011-01-01

    The Ares I A106 Liftoff/Transition Force and Moment Aerodynamics Database describes the aerodynamics of the Ares I Crew Launch Vehicle (CLV) from the moment of liftoff through the transition from high to low total angles of attack at low subsonic Mach numbers. The database includes uncertainty estimates that were developed using a detailed uncertainty quantification procedure. The Ares I Aerodynamics Panel developed both the database and the uncertainties from wind tunnel test data acquired in the NASA Langley Research Center's 14- by 22-Foot Subsonic Wind Tunnel Test 591 using a 1.75 percent scale model of the Ares I and the tower assembly. The uncertainty modeling contains three primary uncertainty sources: experimental uncertainty, database modeling uncertainty, and database query interpolation uncertainty. The final database and uncertainty model represent a significant improvement in the quality of the aerodynamic predictions for this regime of flight over the estimates previously used by the Ares Project. The maximum possible aerodynamic force pushing the vehicle towards the launch tower assembly in a dispersed case using this database saw a 40 percent reduction from the worst-case scenario in previously released data for Ares I.

  2. The use of multiwavelets for uncertainty estimation in seismic surface wave dispersion.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poppeliers, Christian

    This report describes a new single-station analysis method to estimate the dispersion and uncertainty of seismic surface waves using the multiwavelet transform. Typically, when estimating the dispersion of a surface wave using only a single seismic station, the seismogram is decomposed into a series of narrow-band realizations using a bank of narrow-band filters. By then enveloping and normalizing the filtered seismograms and identifying the maximum power as a function of frequency, the group velocity can be estimated if the source-receiver distance is known. However, using the filter bank method, there is no robust way to estimate uncertainty. In this report, I introduce a new method of estimating the group velocity that includes an estimate of uncertainty. The method is similar to the conventional filter bank method, but uses a class of functions, called Slepian wavelets, to compute a series of wavelet transforms of the data. Each wavelet transform is mathematically similar to a filter bank, however, the time-frequency tradeoff is optimized. By taking multiple wavelet transforms, I form a population of dispersion estimates from which standard statistical methods can be used to estimate uncertainty. I demonstrate the utility of this new method by applying it to synthetic data as well as ambient-noise surface-wave cross-correlograms recorded by the University of Nevada Seismic Network.
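
    The conventional filter-bank baseline that the report improves upon can be sketched as follows. This is an illustrative approximation, not the report's multiwavelet method; the function name, filter order and relative bandwidth are assumptions.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def group_velocity_filterbank(trace, fs, distance_km, freqs, rel_bw=0.1):
            """Conventional single-station group-velocity estimate: for each centre
            frequency, band-pass the seismogram, envelope it with the Hilbert
            transform, and convert the envelope-peak time to a velocity."""
            t = np.arange(trace.size) / fs
            velocities = []
            for f0 in freqs:
                b, a = butter(4, [f0 * (1 - rel_bw), f0 * (1 + rel_bw)], btype="band", fs=fs)
                env = np.abs(hilbert(filtfilt(b, a, trace)))
                t_peak = t[np.argmax(env)]            # group arrival time at this frequency
                velocities.append(distance_km / t_peak)
            return np.array(velocities)               # one group velocity per frequency, km/s

        # Example call (hypothetical values):
        # group_velocity_filterbank(trace, fs=100.0, distance_km=50.0, freqs=np.linspace(0.5, 5.0, 10))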

  3. Advanced Variance Reduction Strategies for Optimizing Mesh Tallies in MAVRIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peplow, Douglas E.; Blakeman, Edward D; Wagner, John C

    2007-01-01

    More often than in the past, Monte Carlo methods are being used to compute fluxes or doses over large areas using mesh tallies (a set of region tallies defined on a mesh that overlays the geometry). For problems that demand that the uncertainty in each mesh cell be less than some set maximum, computation time is controlled by the cell with the largest uncertainty. This issue becomes quite troublesome in deep-penetration problems, and advanced variance reduction techniques are required to obtain reasonable uncertainties over large areas. The CADIS (Consistent Adjoint Driven Importance Sampling) methodology has been shown to very efficiently optimize the calculation of a response (flux or dose) for a single point or a small region using weight windows and a biased source based on the adjoint of that response. This has been incorporated into codes such as ADVANTG (based on MCNP) and the new sequence MAVRIC, which will be available in the next release of SCALE. In an effort to compute lower uncertainties everywhere in the problem, Larsen's group has also developed several methods to help distribute particles more evenly, based on forward estimates of flux. This paper focuses on the use of a forward estimate to weight the placement of the source in the adjoint calculation used by CADIS, which we refer to as a forward-weighted CADIS (FW-CADIS).

  4. Will hydrologists learn from the world around them?: Empiricism, models, uncertainty and stationarity (Invited)

    NASA Astrophysics Data System (ADS)

    Lall, U.

    2010-12-01

    To honor the passing this year of eminent hydrologists, Dooge, Klemes and Shiklomanov, I offer an irreverent look at the issues of uncertainty and stationarity as the hydrologic industry prepares climate change products. In an AGU keynote, Dooge said that the principle of mass balance was the only hydrologic law. It was not clear how one should apply it. Klemes observed that Rippl's 1872 mass curve analyses could essentially subsume many of the advances in stochastic modeling and reservoir optimization. Shiklomanov tackled data challenges to present a comprehensive view of the world's water supply and demand highlighting the imbalance and sustainability challenge we face. He did not characterize the associated uncertainties. It is remarkable how little data can provide insights, while at times much information from models and data highlights uncertainty. Hydrologists have focused on parameter uncertainties in hydrologic models. The indeterminacy of the typical situation offered Beven the opportunity to coin the term equifinality. However, this ignores the fact that the traditional continuum model fails us across scales if we don't re-derive the correct averaged equations accounting for subscale heterogeneity. Nevertheless, the operating paradigm here has been a stimulus response model y = f(x,P), where y are the observations of the state variables, x are observations of hydrologic drivers, P are model parameters, and f(.,.) is an appropriate differential or integral transform. The uncertainty analysis then focuses on P, such that the resulting field of y is approximately unbiased and has minimum variance or maximum likelihood. The parameters P are usually time invariant, and x and/or f(.,.) are expected to account for changes in the boundary conditions. Thus the dynamics is stationary, while the time series of either x or y may not be. Given the lack of clarity as to whether the dynamical system or the trajectory is stationary, it is amusing that the paper "Stationarity is Dead", which implicitly uses changes in time series properties and boundary conditions as its basis, gets much press. To avoid the stationarity dilemma, hydrologists are willing to take climate model outputs, rather than an analysis based on historical climate. Uncertainty analysis is viewed as the appropriate shrinkage of the spread across models and ensembles by clever averaging after bias corrections of the model output - a process I liken to transforming elephants into mice. Since it is someone else's model, we abandon the seemingly good sense of seeking the best parameters P that reproduce the data y. We now seek to fit a model y = T{f1(x,P1),f2(x,P2)…}, where we don't question the parameter or model but simply fudge the outputs to what was observed. Clearly, we can't become climate modelers and must work with what we are dealt. By the way, doesn't this uncertainty analysis and reduction process involve an assumption of stationarity? So, how should hydrologists navigate this muddle of uncertainty and stationarity? I offer some ideas tied to modeling purpose, and advocate a greater effort on diagnostic analyses that provide insights into how hydrologic dynamics co-evolve with climate at a variety of space and time scales. Are there natural bounds or structure to systemic uncertainty and predictability, and what are the key carriers of hydrologic information?

  5. Bayesian forecasting and uncertainty quantifying of stream flows using Metropolis-Hastings Markov Chain Monte Carlo algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Hongrui; Wang, Cheng; Wang, Ying; Gao, Xiong; Yu, Chen

    2017-06-01

    This paper presents a Bayesian approach using the Metropolis-Hastings Markov chain Monte Carlo algorithm and applies this method to daily river flow rate forecasting and uncertainty quantification for the Zhujiachuan River, using data collected from the Qiaotoubao Gage Station and 13 other gage stations in the Zhujiachuan watershed in China. The proposed method is also compared with conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of the associated uncertainties. While the Bayesian method performs similarly to MLE in estimating the mean value of daily flow rate, it outperforms the conventional MLE method in uncertainty quantification, providing a narrower credible interval than the MLE confidence interval and thus a more precise estimate by using the related information from regional gage stations. The Bayesian MCMC method might therefore be more favorable for uncertainty analysis and risk management.
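
    A minimal random-walk Metropolis-Hastings sketch for a toy model shows how posterior samples yield credible intervals alongside a point estimate. This is illustrative only; the likelihood, priors and synthetic flows below are made up and are not the paper's streamflow model.

        import numpy as np

        def log_posterior(theta, data):
            """Toy model: data ~ Normal(mu, sigma), flat prior on mu and a
            weakly informative normal prior on log(sigma)."""
            mu, log_sigma = theta
            sigma = np.exp(log_sigma)
            loglik = -0.5 * np.sum(((data - mu) / sigma) ** 2) - data.size * log_sigma
            logprior = -0.5 * (log_sigma / 2.0) ** 2
            return loglik + logprior

        def metropolis_hastings(data, n_iter=20000, step=0.05, seed=0):
            rng = np.random.default_rng(seed)
            theta = np.array([data.mean(), np.log(data.std())])   # start near the MLE
            logp = log_posterior(theta, data)
            samples = np.empty((n_iter, 2))
            for i in range(n_iter):
                prop = theta + rng.normal(scale=step, size=2)     # symmetric random-walk proposal
                logp_prop = log_posterior(prop, data)
                if np.log(rng.uniform()) < logp_prop - logp:      # accept with prob min(1, ratio)
                    theta, logp = prop, logp_prop
                samples[i] = theta
            return samples[n_iter // 2:]                          # drop burn-in

        flows = np.exp(np.random.default_rng(1).normal(2.0, 0.4, size=365))  # synthetic daily flows
        post = metropolis_hastings(np.log(flows))
        print(np.percentile(post[:, 0], [2.5, 50, 97.5]))         # 95% credible interval for mu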

  6. A robust indicator based on singular value decomposition for flaw feature detection from noisy ultrasonic signals

    NASA Astrophysics Data System (ADS)

    Cui, Ximing; Wang, Zhe; Kang, Yihua; Pu, Haiming; Deng, Zhiyang

    2018-05-01

    Singular value decomposition (SVD) has been proven to be an effective de-noising tool for flaw echo signal feature detection in ultrasonic non-destructive evaluation (NDE). However, the uncertainty in the arbitrary manner of the selection of an effective singular value weakens the robustness of this technique. Improper selection of effective singular values will lead to bad performance of SVD de-noising. What is more, the computational complexity of SVD is too large for it to be applied in real-time applications. In this paper, to eliminate the uncertainty in SVD de-noising, a novel flaw indicator, named the maximum singular value indicator (MSI), based on short-time SVD (STSVD), is proposed for flaw feature detection from a measured signal in ultrasonic NDE. In this technique, the measured signal is first truncated into overlapping short-time data segments to put feature information of a transient flaw echo signal in local field, and then the MSI can be obtained from the SVD of each short-time data segment. Research shows that this indicator can clearly indicate the location of ultrasonic flaw signals, and the computational complexity of this STSVD-based indicator is significantly reduced with the algorithm proposed in this paper. Both simulation and experiments show that this technique is very efficient for real-time application in flaw detection from noisy data.
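
    The abstract does not spell out how each short-time segment is passed to the SVD; one common choice is to embed the segment in a Hankel (trajectory) matrix, which the following illustrative sketch assumes. The function names, window sizes and synthetic A-scan are made up and are not the authors' implementation.

        import numpy as np

        def max_singular_value_indicator(signal, win_len=64, hop=8, embed=16):
            """Short-time SVD flaw indicator: slide an overlapping window along the
            signal, embed each segment in a Hankel (trajectory) matrix, and keep the
            largest singular value. Coherent flaw echoes concentrate energy in the
            first singular value, so the indicator peaks near flaw locations."""
            msi = []
            for start in range(0, signal.size - win_len + 1, hop):
                seg = signal[start:start + win_len]
                # Hankel embedding: rows are lagged copies of the segment
                traj = np.array([seg[i:i + win_len - embed + 1] for i in range(embed)])
                msi.append(np.linalg.svd(traj, compute_uv=False)[0])
            return np.array(msi)

        # Toy A-scan: two flaw echoes buried in noise
        rng = np.random.default_rng(0)
        t = np.arange(2000)
        echo = lambda t0: np.exp(-((t - t0) / 20.0) ** 2) * np.sin(0.6 * (t - t0))
        x = 0.8 * echo(600) + 0.5 * echo(1400) + rng.normal(scale=0.4, size=t.size)
        print(max_singular_value_indicator(x).argmax())   # window index near the strongest echo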

  7. Assessing the Impact of Laurentide Ice-sheet Topography on Glacial Climate

    NASA Technical Reports Server (NTRS)

    Ullman, D. J.; LeGrande, A. N.; Carlson, A. E.; Anslow, F. S.; Licciardi, J. M.

    2014-01-01

    Simulations of past climates require altered boundary conditions to account for known shifts in the Earth system. For the Last Glacial Maximum (LGM) and subsequent deglaciation, the existence of large Northern Hemisphere ice sheets caused profound changes in surface topography and albedo. While ice-sheet extent is fairly well known, numerous conflicting reconstructions of ice-sheet topography suggest that precision in this boundary condition is lacking. Here we use a high-resolution and oxygen-isotope-enabled fully coupled global circulation model (GCM) (GISS ModelE2-R), along with two different reconstructions of the Laurentide Ice Sheet (LIS) that provide maximum and minimum estimates of LIS elevation, to assess the range of climate variability in response to uncertainty in this boundary condition. We present this comparison at two equilibrium time slices: the LGM, when differences in ice-sheet topography are maximized, and 14 ka, when differences in maximum ice-sheet height are smaller but still exist. Overall, we find significant differences in the climate response to LIS topography, with the larger LIS resulting in enhanced Atlantic Meridional Overturning Circulation and warmer surface air temperatures, particularly over northeastern Asia and the North Pacific. These up- and downstream effects are associated with differences in the development of planetary waves in the upper atmosphere, with the larger LIS resulting in a weaker trough over northeastern Asia that leads to the warmer temperatures and decreased albedo from snow and sea-ice cover. Differences between the 14 ka simulations are similar in spatial extent but smaller in magnitude, suggesting that climate is responding primarily to the larger difference in maximum LIS elevation in the LGM simulations. These results suggest that such uncertainty in ice-sheet boundary conditions alone may significantly impact the results of paleoclimate simulations and their ability to successfully simulate past climates, with implications for estimating climate sensitivity to greenhouse gas forcing utilizing past climate states.

  8. Maximum Acceleration Recording Circuit

    NASA Technical Reports Server (NTRS)

    Bozeman, Richard J., Jr.

    1995-01-01

    Coarsely digitized maximum levels recorded in blown fuses. Circuit feeds power to accelerometer and makes nonvolatile record of maximum level to which output of accelerometer rises during measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for same purpose, circuit simpler, less bulky, consumes less power, costs less, and eliminates need for retrieval and analysis of data recorded in magnetic or electronic memory devices. Circuit used, for example, to record accelerations to which commodities subjected during transportation on trucks.

  9. On the Impact of a Quadratic Acceleration Term in the Analysis of Position Time Series

    NASA Astrophysics Data System (ADS)

    Bogusz, Janusz; Klos, Anna; Bos, Machiel Simon; Hunegnaw, Addisu; Teferle, Felix Norman

    2016-04-01

    The analysis of Global Navigation Satellite System (GNSS) position time series generally assumes that each of the coordinate component series is described by the sum of a linear rate (velocity) and various periodic terms. The residuals, the deviations between the fitted model and the observations, are then a measure of the epoch-to-epoch scatter and have been used for the analysis of the stochastic character (noise) of the time series. Often the parameters of interest in GNSS position time series are the velocities and their associated uncertainties, which have to be determined with the highest reliability. It is clear that not all GNSS position time series follow this simple linear behaviour. Therefore, we have added an acceleration term in the form of a quadratic polynomial function to the model in order to better describe the non-linear motion in the position time series. This non-linear motion could be a response to purely geophysical processes, for example, elastic rebound of the Earth's crust due to ice mass loss in Greenland, artefacts due to deficiencies in bias mitigation models, for example, of the GNSS satellite and receiver antenna phase centres, or any combination thereof. In this study we have simulated 20 time series with different stochastic characteristics (white, flicker or random-walk noise), each 23 years long. The noise amplitude was assumed to be 1 mm/y^(−κ/4), where κ is the spectral index. Then, we added the deterministic part consisting of a linear trend of 20 mm/y (representing the averaged horizontal velocity) and accelerations ranging from −0.6 to +0.6 mm/y². For all these data we estimated the noise parameters with Maximum Likelihood Estimation (MLE) using the Hector software package without taking the non-linear term into account. In this way we set the benchmark to then investigate how the noise properties and velocity uncertainty may be affected by any un-modelled, non-linear term. The velocities and their uncertainties versus the accelerations for different types of noise are determined. Furthermore, we have selected 40 globally distributed stations that have a clear non-linear behaviour from two different International GNSS Service (IGS) analysis centers: JPL (Jet Propulsion Laboratory) and BLT (British Isles continuous GNSS Facility and University of Luxembourg Tide Gauge Benchmark Monitoring (TIGA) Analysis Center). We obtained maximum accelerations of −1.8±1.2 mm/y² and −4.5±3.3 mm/y² for the horizontal and vertical components, respectively. The noise analysis tests have shown that the addition of the non-linear term has significantly whitened the power spectra of the position time series, i.e. shifted the spectral index from flicker towards white noise.
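
    The effect of an un-modelled acceleration on the fitted velocity can be illustrated with an ordinary least-squares toy example. This is a sketch under simplified assumptions (white noise only, unlike the power-law noise and MLE analysis used in the study); all numbers are synthetic.

        import numpy as np

        rng = np.random.default_rng(0)
        t = np.arange(0, 23, 1 / 365.25)                 # 23 years of daily positions, in years
        velocity, accel = 20.0, 0.4                      # mm/yr and mm/yr^2
        pos = velocity * t + 0.5 * accel * t**2 + rng.normal(scale=2.0, size=t.size)  # white noise only

        # Linear-only model (no acceleration term)
        v_lin = np.polyfit(t, pos, 1)[0]
        # Quadratic model: pos = (a/2)*t^2 + v*t + c
        a_quad, v_quad, _ = np.polyfit(t, pos, 2)
        print(f"fitted velocity without acceleration term: {v_lin:.2f} mm/yr")
        print(f"fitted velocity with acceleration term:    {v_quad:.2f} mm/yr, accel: {2 * a_quad:.2f} mm/yr^2")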

  10. Comparison of depth-dose distributions of proton therapeutic beams calculated by means of logical detectors and ionization chamber modeled in Monte Carlo codes

    NASA Astrophysics Data System (ADS)

    Pietrzak, Robert; Konefał, Adam; Sokół, Maria; Orlef, Andrzej

    2016-08-01

    The success of proton therapy depends strongly on the precision of treatment planning. Dose distribution in biological tissue may be obtained from Monte Carlo simulations using various scientific codes making it possible to perform very accurate calculations. However, there are many factors affecting the accuracy of modeling. One of them is the structure of the dose-registering objects called bins. In this work the influence of bin structure on the dose distributions was examined. The MCNPX code calculations of the Bragg curve for the 60 MeV proton beam were done in two ways: using simple logical detectors, i.e. volumes defined in water, and using a precise model of an ionization chamber used in clinical dosimetry. The results of the simulations were verified experimentally in the water phantom with a Marcus ionization chamber. The average local dose difference between the measured relative doses in the water phantom and those calculated by means of the logical detectors was 1.4% over the first 25 mm, whereas over the full depth range this difference was 1.6%, with a maximum calculation uncertainty of less than 2.4% and a maximum measuring error of 1%. In the case of the relative doses calculated with the use of the ionization chamber model this average difference was somewhat greater, being 2.3% at depths up to 25 mm and 2.4% over the full range of depths, with a maximum calculation uncertainty of 3%. In the dose calculations the ionization chamber model does not offer any additional advantages over the logical detectors. The results provided by both models are similar and in good agreement with the measurements; however, the logical detector approach is a more time-effective method.

  11. The wavefront of the radio signal emitted by cosmic ray air showers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Apel, W.D.; Bekk, K.; Blümer, J.

    2014-09-01

    Analyzing measurements of the LOPES antenna array together with corresponding CoREAS simulations for more than 300 measured events with energy above 10^17 eV and zenith angles smaller than 45°, we find that the radio wavefront of cosmic-ray air showers is of approximately hyperbolic shape. The simulations predict a slightly steeper wavefront towards East than towards West, but this asymmetry is negligible against the measurement uncertainties of LOPES. At axis distances ≳ 50 m, the wavefront can be approximated by a simple cone. According to the simulations, the cone angle is clearly correlated with the shower maximum. Thus, we confirm earlier predictions that arrival time measurements can be used to study the longitudinal shower development, but now using a realistic wavefront. Moreover, we show that the hyperbolic wavefront is compatible with our measurement, and we present several experimental indications that the cone angle is indeed sensitive to the shower development. Consequently, the wavefront can be used to statistically study the primary composition of ultra-high energy cosmic rays. At LOPES, the experimentally achieved precision for the shower maximum is limited by measurement uncertainties to approximately 140 g/cm². But the simulations indicate that under better conditions this method might yield an accuracy for the atmospheric depth of the shower maximum, X_max, better than 30 g/cm². This would be competitive with the established air-fluorescence and air-Cherenkov techniques, where the radio technique offers the advantage of a significantly higher duty-cycle. Finally, the hyperbolic wavefront can be used to reconstruct the shower geometry more accurately, which potentially allows a better reconstruction of all other shower parameters, too.

  12. The wavefront of the radio signal emitted by cosmic ray air showers

    NASA Astrophysics Data System (ADS)

    Apel, W. D.; Arteaga-Velázquez, J. C.; Bähren, L.; Bekk, K.; Bertaina, M.; Biermann, P. L.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Cantoni, E.; Chiavassa, A.; Daumiller, K.; de Souza, V.; Di Pierro, F.; Doll, P.; Engel, R.; Falcke, H.; Fuchs, B.; Gemmeke, H.; Grupen, C.; Haungs, A.; Heck, D.; Hörandel, J. R.; Horneffer, A.; Huber, D.; Huege, T.; Isar, P. G.; Kampert, K.-H.; Kang, D.; Krömer, O.; Kuijpers, J.; Link, K.; Łuczak, P.; Ludwig, M.; Mathes, H. J.; Melissas, M.; Morello, C.; Oehlschläger, J.; Palmieri, N.; Pierog, T.; Rautenberg, J.; Rebel, H.; Roth, M.; Rühle, C.; Saftoiu, A.; Schieler, H.; Schmidt, A.; Schoo, S.; Schröder, F. G.; Sima, O.; Toma, G.; Trinchero, G. C.; Weindl, A.; Wochele, J.; Zabierowski, J.; Zensus, J. A.

    2014-09-01

    Analyzing measurements of the LOPES antenna array together with corresponding CoREAS simulations for more than 300 measured events with energy above 10^17 eV and zenith angles smaller than 45°, we find that the radio wavefront of cosmic-ray air showers is of approximately hyperbolic shape. The simulations predict a slightly steeper wavefront towards East than towards West, but this asymmetry is negligible against the measurement uncertainties of LOPES. At axis distances ≳ 50 m, the wavefront can be approximated by a simple cone. According to the simulations, the cone angle is clearly correlated with the shower maximum. Thus, we confirm earlier predictions that arrival time measurements can be used to study the longitudinal shower development, but now using a realistic wavefront. Moreover, we show that the hyperbolic wavefront is compatible with our measurement, and we present several experimental indications that the cone angle is indeed sensitive to the shower development. Consequently, the wavefront can be used to statistically study the primary composition of ultra-high energy cosmic rays. At LOPES, the experimentally achieved precision for the shower maximum is limited by measurement uncertainties to approximately 140 g/cm². But the simulations indicate that under better conditions this method might yield an accuracy for the atmospheric depth of the shower maximum, X_max, better than 30 g/cm². This would be competitive with the established air-fluorescence and air-Cherenkov techniques, where the radio technique offers the advantage of a significantly higher duty-cycle. Finally, the hyperbolic wavefront can be used to reconstruct the shower geometry more accurately, which potentially allows a better reconstruction of all other shower parameters, too.

  13. Benchmarking hydrological model predictive capability for UK River flows and flood peaks.

    NASA Astrophysics Data System (ADS)

    Lane, Rosanna; Coxon, Gemma; Freer, Jim; Wagener, Thorsten

    2017-04-01

    Data and hydrological models are now available for national hydrological analyses. However, hydrological model performance varies between catchments, and lumped, conceptual models are not able to produce adequate simulations everywhere. This study aims to benchmark hydrological model performance for catchments across the United Kingdom within an uncertainty analysis framework. We have applied four hydrological models from the FUSE framework to 1128 catchments across the UK. These models are all lumped models and run at a daily timestep, but differ in the model structural architecture and process parameterisations, therefore producing different but equally plausible simulations. We apply FUSE over the 20-year period 1988-2008, within a GLUE Monte Carlo uncertainty analysis framework. Model performance was evaluated for each catchment, model structure and parameter set using standard performance metrics. These were calculated both for the whole time series and to assess seasonal differences in model performance. The GLUE uncertainty analysis framework was then applied to produce simulated 5th and 95th percentile uncertainty bounds for the daily flow time-series and additionally the annual maximum prediction bounds for each catchment. The results show that the model performance varies significantly in space and time depending on catchment characteristics including climate, geology and human impact. We identify regions where models are systematically failing to produce good results, and present reasons why this could be the case. We also identify regions or catchment characteristics where one model performs better than others, and have explored what structural component or parameterisation enables certain models to produce better simulations in these catchments. Model predictive capability was assessed for each catchment by examining the ability of the models to produce discharge prediction bounds which successfully bound the observed discharge. These results improve our understanding of the predictive capability of simple conceptual hydrological models across the UK and help us to identify where further effort is needed to develop modelling approaches to better represent different catchment and climate typologies.
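
    A stripped-down GLUE sketch is given below. It is illustrative only: a hypothetical one-parameter store stands in for a FUSE structure, the behavioural threshold is arbitrary, and the bounds here are unweighted percentiles rather than the likelihood-weighted quantiles used in a full GLUE analysis.

        import numpy as np

        def nse(sim, obs):
            """Nash-Sutcliffe efficiency, used here as the informal GLUE likelihood."""
            return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def toy_model(rain, k):
            """Hypothetical one-parameter linear store standing in for a FUSE structure."""
            q, store = np.zeros_like(rain), 0.0
            for i, r in enumerate(rain):
                store += r
                q[i] = k * store
                store -= q[i]
            return q

        rng = np.random.default_rng(0)
        rain = rng.gamma(0.3, 8.0, size=730)                          # 2 years of synthetic daily rain
        obs = toy_model(rain, 0.25) * (1 + rng.normal(0, 0.1, 730))   # "observed" flow with noise

        # GLUE: Monte Carlo sampling of the parameter, retain behavioural sets only
        ks = rng.uniform(0.01, 0.9, size=2000)
        sims = np.array([toy_model(rain, k) for k in ks])
        scores = np.array([nse(s, obs) for s in sims])
        behavioural = sims[scores > 0.5]                              # NSE threshold (a modelling choice)

        # 5th/95th percentile prediction bounds at each time step
        lower = np.percentile(behavioural, 5, axis=0)
        upper = np.percentile(behavioural, 95, axis=0)
        print("fraction of observations inside the bounds:", np.mean((obs >= lower) & (obs <= upper)))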

  14. Quantifying hydrological responses of small Mediterranean catchments under climate change projections.

    PubMed

    Sellami, Haykel; Benabdallah, Sihem; La Jeunesse, Isabelle; Vanclooster, Marnik

    2016-02-01

    Catchment flow regimes alteration is likely to be a prominent consequence of climate change projections in the Mediterranean. Here we explore the potential effects of climatic change on the flow regime of the Thau and the Chiba catchments which are located in Southern France and Northeastern Tunisia, respectively. The Soil and Water Assessment Tool (SWAT) hydrological model is forced with projections from an ensemble of 4 climate model (CM) to assess changes and uncertainty in relevant hydrological indicators related to water balance, magnitude, frequency and timing of the flow between a reference (1971-2000) and future (2041-2071) periods. Results indicate that both catchments are likely to experience a decrease in precipitation and increase in temperature in the future. Consequently, runoff and soil water content are projected to decrease whereas potential evapotranspiration is likely to increase in both catchments. Yet uncertain, the projected magnitudes of these changes are higher in the wet period than in the dry period. Analyses of extreme flow show similar trend in both catchments, projecting a decrease in both high flow and low flow magnitudes for various time durations. Further, significant increase in low flow frequency as a proxy for hydrological droughts is projected for both catchments but with higher uncertainty in the wet period than in the dry period. Although no changes in the average timing of maximum and minimum flow events for different flow durations are projected, substantial uncertainty remains in the hydrological projections. While the results in both catchments show consistent trend of change for most of the hydrologic indicators, the overall degree of alteration on the flow regime of the Chiba catchment is projected to be higher than that of the Thau catchment. The projected magnitudes of alteration as well as their associated uncertainty vary depending on the catchment characteristics and flow seasonality. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Geodetic imaging of tectonic deformation with InSAR

    NASA Astrophysics Data System (ADS)

    Fattahi, Heresh

    Precise measurements of ground deformation across the plate boundaries are crucial observations to evaluate the location of strain localization and to understand the pattern of strain accumulation at depth. Such information can be used to evaluate the possible location and magnitude of future earthquakes. Interferometric Synthetic Aperture Radar (InSAR) can potentially deliver small-scale (a few mm/yr) ground displacement over long distances (hundreds of kilometers) across the plate boundaries and over continents. However, given that the ground displacement is our signal of interest, the InSAR observations of ground deformation are usually affected by several sources of systematic and random noise. In this dissertation I identify several sources of systematic and random noise, develop new methods to model and mitigate the systematic noise, and evaluate the uncertainty of the ground displacement measured with InSAR. I use the developed approach to characterize the tectonic deformation and evaluate the rate of strain accumulation along the Chaman fault system, the western boundary of the India plate with Eurasia. I evaluate the bias due to the topographic residuals in the InSAR range-change time-series and develop a new method to estimate the topographic residuals and mitigate their effect on the InSAR range-change time-series (Chapter 2). I develop a new method to evaluate the uncertainty of the InSAR velocity field due to the uncertainty of the satellite orbits (Chapter 3) and a new algorithm to automatically detect and correct the phase unwrapping errors in a dense network of interferograms (Chapter 4). I develop a new approach to evaluate the impact of systematic and stochastic components of the tropospheric delay on the InSAR displacement time-series and its uncertainty (Chapter 5). Using the new InSAR time-series approach developed in the previous chapters, I study the tectonic deformation across the western boundary of the India plate with Eurasia and evaluate the rate of strain accumulation along the Chaman fault system (Chapter 5). I also evaluate the co-seismic and post-seismic displacement of a moderate M5.5 earthquake on the Ghazaband fault (Chapter 6). The developed methods to mitigate the systematic noise from InSAR time-series significantly improve the accuracy of the InSAR displacement time-series and velocity. The approaches to evaluate the effect of the stochastic components of noise in InSAR displacement time-series enable us to obtain the variance-covariance matrix of the InSAR displacement time-series and to express their uncertainties. The effect of the topographic residuals in the InSAR range-change time-series is proportional to the perpendicular baseline history of the set of SAR acquisitions. The proposed method for topographic residual correction efficiently corrects the displacement time-series. Evaluation of the velocity uncertainty due to orbital errors shows that an uncertainty of 0.2 mm/yr per 100 km is achievable for modern SAR satellites with precise orbits, such as TerraSAR-X and Sentinel-1, and uncertainties of 1.5 and 0.5 mm/yr per 100 km are achievable for older satellites with less accurate orbits, such as ERS and Envisat, respectively. However, the uncertainty due to the orbital errors depends on the orbital uncertainties and on the number and time span of the SAR acquisitions. The contribution of the tropospheric delay to the InSAR range-change time-series can be subdivided into systematic (seasonal delay) and stochastic components. 
    The systematic component biases the displacement time-series and velocity field as a function of the acquisition time, and the non-seasonal component contributes significantly to the InSAR uncertainty. Both components are spatially correlated and therefore the covariance of noise between pixels should be considered for evaluating the uncertainty due to the random tropospheric delay. The relative velocity uncertainty due to the random tropospheric delay depends on the scatter of the random tropospheric delay and is inversely proportional to the number of acquisitions and to the total time span covered by the SAR acquisitions. InSAR observations across the Chaman fault system show that relative motion between India and Eurasia along the western boundary is distributed among different faults. The InSAR velocity field indicates strain localization on the Chaman fault and Ghazaband fault with slip rates of ~8 and ~16 mm/yr, respectively. The high rate of strain accumulation on the Ghazaband fault and the lack of evidence that the fault ruptured during the 1935 Quetta earthquake indicate that enough strain has accumulated for a large (M>7) earthquake, which threatens Balochistan and the city of Quetta. The Chaman fault from latitudes ~29.5 N to ~32.5 N is creeping with a maximum surface creep rate of 8 mm/yr, which indicates that the Chaman fault is only partially locked and that moderate earthquakes (M<7), similar to those recorded in the last 100 years, are expected.

  16. Evaluating the uncertainty of input quantities in measurement models

    NASA Astrophysics Data System (ADS)

    Possolo, Antonio; Elster, Clemens

    2014-06-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) gives guidance about how values and uncertainties should be assigned to the input quantities that appear in measurement models. This contribution offers a concrete proposal for how that guidance may be updated in light of the advances in the evaluation and expression of measurement uncertainty that were made in the course of the twenty years that have elapsed since the publication of the GUM, and also considering situations that the GUM does not yet contemplate. Our motivation is the ongoing conversation about a new edition of the GUM. While generally we favour a Bayesian approach to uncertainty evaluation, we also recognize the value that other approaches may bring to the problems considered here, and focus on methods for uncertainty evaluation and propagation that are widely applicable, including to cases that the GUM has not yet addressed. In addition to Bayesian methods, we discuss maximum-likelihood estimation, robust statistical methods, and measurement models where values of nominal properties play the same role that input quantities play in traditional models. We illustrate these general-purpose techniques in concrete examples, employing data sets that are realistic but that also are of conveniently small sizes. The supplementary material available online lists the R computer code that we have used to produce these examples (stacks.iop.org/Met/51/3/339/mmedia). Although we strive to stay close to clause 4 of the GUM, which addresses the evaluation of uncertainty for input quantities, we depart from it as we review the classes of measurement models that we believe are generally useful in contemporary measurement science. We also considerably expand and update the treatment that the GUM gives to Type B evaluations of uncertainty: reviewing the state-of-the-art, disciplined approach to the elicitation of expert knowledge, and its encapsulation in probability distributions that are usable in uncertainty propagation exercises. In this we deviate markedly and emphatically from the GUM Supplement 1, which gives pride of place to the Principle of Maximum Entropy as a means to assign probability distributions to input quantities.
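
    A small sketch of the kind of input-quantity treatment discussed above, illustrative only (the measurement model, readings and limits are made up): a Type A evaluation summarised by a scaled-and-shifted t distribution is combined with a rectangular Type B distribution and propagated through a simple model by Monte Carlo.

        import numpy as np

        rng = np.random.default_rng(0)

        # Type A input: repeated indications of X1, summarised by a scaled-and-shifted t distribution
        readings = np.array([10.03, 10.05, 9.98, 10.01, 10.04, 10.02])
        n = readings.size
        x1 = readings.mean() + (readings.std(ddof=1) / np.sqrt(n)) * rng.standard_t(n - 1, size=200_000)

        # Type B input: a correction X2 known only to lie within +/-0.05, modelled as rectangular
        x2 = rng.uniform(-0.05, 0.05, size=200_000)

        # Measurement model (illustrative): Y = X1 + X2
        y = x1 + x2
        print("y =", y.mean(), " u(y) =", y.std(ddof=1))
        print("95 % coverage interval:", np.percentile(y, [2.5, 97.5]))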

  17. A Bayesian Framework of Uncertainties Integration in 3D Geological Model

    NASA Astrophysics Data System (ADS)

    Liang, D.; Liu, X.

    2017-12-01

    A 3D geological model can describe complicated geological phenomena in an intuitive way, but its application may be limited by uncertain factors. Although great progress has been made over the years, many studies decompose the uncertainties of a geological model and analyze them item by item from each source, ignoring the comprehensive impact of multi-source uncertainties. To evaluate this synthetical uncertainty, we use probability distributions to quantify uncertainty and propose a Bayesian framework for uncertainty integration. With this framework, we integrate data errors, spatial randomness, and cognitive information into a posterior distribution to evaluate the synthetical uncertainty of a geological model. Uncertainties propagate and accumulate in the modeling process, and the gradual integration of multi-source uncertainty is a kind of simulation of this uncertainty propagation; Bayesian inference accomplishes the uncertainty updating during modeling. The maximum entropy principle is effective for estimating the prior probability distribution, ensuring that the prior is subject to the constraints supplied by the given information with minimum prejudice. In the end, we obtain a posterior distribution that evaluates the synthetical uncertainty of the geological model. This posterior distribution represents the synthetical impact of all the uncertain factors on the spatial structure of the geological model. The framework provides a solution for evaluating the synthetical impact of multi-source uncertainties on a geological model and an approach for studying the uncertainty propagation mechanism in geological modeling.

  18. LensEnt2: Maximum-entropy weak lens reconstruction

    NASA Astrophysics Data System (ADS)

    Marshall, P. J.; Hobson, M. P.; Gull, S. F.; Bridle, S. L.

    2013-08-01

    LensEnt2 is a maximum entropy reconstructor of weak lensing mass maps. The method takes each galaxy shape as an independent estimator of the reduced shear field and incorporates an intrinsic smoothness, determined by Bayesian methods, into the reconstruction. The uncertainties from both the intrinsic distribution of galaxy shapes and galaxy shape estimation are carried through to the final mass reconstruction, and the mass within arbitrarily shaped apertures is calculated with corresponding uncertainties. The input is a galaxy ellipticity catalog with each measured galaxy shape treated as a noisy tracer of the reduced shear field, which is inferred on a fine pixel grid assuming positivity and smoothness on scales of w arcsec, where w is an input parameter. The ICF width w can be chosen by computing the evidence for it.

  19. Apparatus for accurate density measurements of fluids based on a magnetic suspension balance

    NASA Astrophysics Data System (ADS)

    Gong, Maoqiong; Li, Huiya; Guo, Hao; Dong, Xueqiang; Wu, J. F.

    2012-06-01

    A new apparatus for accurate pressure, density and temperature (p, ρ, T) measurements over wide ranges of (p, ρ, T) (90 K to 290 K; 0 MPa to 3 MPa; 0 kg/m3 to 2000 kg/m3) is described. This apparatus is based on a magnetic suspension balance which applies Archimedes' buoyancy principle. In order to verify the new apparatus, comprehensive (p, ρ, T) measurements on pure nitrogen were carried out. The maximum relative standard uncertainty in density is 0.09%. The maximum standard uncertainty in temperature is 5 mK, and that in pressure is 250 Pa for the 1.5 MPa full-scale range and 390 Pa for the 3 MPa full-scale range, respectively. The experimental data were compared with selected literature data and good agreement was found.

  20. How Well Does Fracture Set Characterization Reduce Uncertainty in Capture Zone Size for Wells Situated in Sedimentary Bedrock Aquifers?

    NASA Astrophysics Data System (ADS)

    West, A. C.; Novakowski, K. S.

    2005-12-01

    Regional groundwater flow models are rife with uncertainty. The three-dimensional flux vector fields must generally be inferred using inverse modelling from sparse measurements of hydraulic head, from measurements of hydraulic parameters at a scale that is minuscule in comparison to that of the domain, and from none to a very few measurements of recharge or discharge rate. Despite the inherent uncertainty in these models, they are routinely used to delineate steady-state or time-of-travel capture zones for the purpose of wellhead protection. The latter are defined as the volume of the aquifer within which released particles will arrive at the well within the specified time and their delineation requires the additional step of dividing the magnitudes of the flux vectors by the assumed porosity to arrive at the "average linear groundwater velocity" vector field. Since the porosity is usually assumed constant over the domain one could be forgiven for thinking that the uncertainty introduced at this step is minor in comparison to the flow model calibration step. We consider this question when the porosity in question is fracture porosity in flat-lying sedimentary bedrock. We also consider whether or not the diffusive uptake of solute into the rock matrix which lies between the source and the production well reduces or enhances the uncertainty. To evaluate the uncertainty an aquifer cross section is conceptualized as an array of horizontal, randomly-spaced, parallel-plate fractures of random aperture, with adjacent horizontal fractures connected by vertical fractures again of random spacing and aperture. The source is assumed to be a continuous concentration (i.e. a Dirichlet boundary condition) representing a leaking tank or a DNAPL pool, and the receptor is a fully penetrating well located in the down-gradient direction. In this context the time-of-travel capture zone is defined as the separation distance required such that the source does not contaminate the well beyond a threshold concentration within the specified time. Aquifers are simulated by drawing the random spacings and apertures from specified distributions. Predictions are made of capture zone size assuming various degrees of knowledge of these distributions, with the parameters of the horizontal fractures being estimated using simulated hydraulic tests and a maximum likelihood estimator. The uncertainty is evaluated by calculating the variance in the capture zone size estimated in multiple realizations. The results show that despite good strategies to estimate the parameters of the horizontal fractures the uncertainty in capture zone size is enormous, mostly due to the lack of available information on vertical fractures. Also, at realistic distances (less than ten kilometers) and using realistic transmissivity distributions for the horizontal fractures the uptake of solute from fractures into matrix cannot be relied upon to protect the production well from contamination.

  1. Projected changes, climate change signal, and uncertainties in the CMIP5-based projections of ocean surface wave heights

    NASA Astrophysics Data System (ADS)

    Wang, Xiaolan; Feng, Yang; Swail, Val R.

    2016-04-01

    Ocean surface waves can be major hazards in coastal and offshore activities. However, wave observations are available only at limited locations and cover only the recent few decades. Also, there exists very limited information on ocean wave behavior in response to climate change, because such information is not simulated in current global climate models. In a recent study, we used a multivariate regression model with lagged dependent variable to make statistical global projections of changes in significant wave heights (Hs) using mean sea level pressure (SLP) information from 20 CMIP5 climate models for the twenty-first century. The statistical model was calibrated and validated using the ERA-Interim reanalysis of Hs and SLP for the period 1981-2010. The results show Hs increases in the tropics (especially in the eastern tropical Pacific) and in southern hemisphere high-latitudes. Under the projected 2070-2099 climate condition of the RCP8.5 scenario, the occurrence frequency of the present-day one-in-10-year extreme wave heights is likely to double or triple in several coastal regions around the world (e.g., the Chilean coast, Gulf of Oman, Gulf of Bengal, Gulf of Mexico). More recently, we used the analysis of variance approaches to quantify the climate change signal and uncertainty in multi-model ensembles of statistical Hs simulations globally, which are based on the CMIP5 historical, RCP4.5 and RCP8.5 forcing scenario simulations of SLP. In a 4-model 3-run ensemble, the 4-model common signal of climate change is found to strengthen over time, as would be expected. For the historical followed by RCP8.5 scenario, the common signal in annual mean Hs is found to be significant over 16.6%, 55.0% and 82.2% of the area by year 2005, 2050 and 2099, respectively. For the annual maximum, the signal is much weaker. The signal is strongest in the eastern tropical Pacific, featuring significant increases in both the annual mean and maximum of Hs in this region. The climate model uncertainty (i.e., inter-model variability) is significant over 99.9% of the area; its magnitude is comparable to or greater than the climate change signal by 2099 over most areas, except in the eastern tropical Pacific where the signal is much larger. In a 20-model 2-scenario single-run ensemble of statistical Hs simulations for the period 2006-2099, the model uncertainty is found to be significant globally; it is about 10 times as large as the scenario uncertainty between RCP4.5 and RCP8.5 scenarios.

  2. Acquisition of decision making criteria: reward rate ultimately beats accuracy.

    PubMed

    Balci, Fuat; Simen, Patrick; Niyogi, Ritwik; Saxe, Andrew; Hughes, Jessica A; Holmes, Philip; Cohen, Jonathan D

    2011-02-01

    Speed-accuracy trade-offs strongly influence the rate of reward that can be earned in many decision-making tasks. Previous reports suggest that human participants often adopt suboptimal speed-accuracy trade-offs in single session, two-alternative forced-choice tasks. We investigated whether humans acquired optimal speed-accuracy trade-offs when extensively trained with multiple signal qualities. When performance was characterized in terms of decision time and accuracy, our participants eventually performed nearly optimally in the case of higher signal qualities. Rather than adopting decision criteria that were individually optimal for each signal quality, participants adopted a single threshold that was nearly optimal for most signal qualities. However, setting a single threshold for different coherence conditions resulted in only negligible decrements in the maximum possible reward rate. Finally, we tested two hypotheses regarding the possible sources of suboptimal performance: (1) favoring accuracy over reward rate and (2) misestimating the reward rate due to timing uncertainty. Our findings provide support for both hypotheses, but also for the hypothesis that participants can learn to approach optimality. We find specifically that an accuracy bias dominates early performance, but diminishes greatly with practice. The residual discrepancy between optimal and observed performance can be explained by an adaptive response to uncertainty in time estimation.

  3. Uncertainty, robustness, and the value of information in managing an expanding Arctic goose population

    USGS Publications Warehouse

    Johnson, Fred A.; Jensen, Gitte H.; Madsen, Jesper; Williams, Byron K.

    2014-01-01

    We explored the application of dynamic-optimization methods to the problem of pink-footed goose (Anser brachyrhynchus) management in western Europe. We were especially concerned with the extent to which uncertainty in population dynamics influenced an optimal management strategy, the gain in management performance that could be expected if uncertainty could be eliminated or reduced, and whether an adaptive or robust management strategy might be most appropriate in the face of uncertainty. We combined three alternative survival models with three alternative reproductive models to form a set of nine annual-cycle models for pink-footed geese. These models represent a wide range of possibilities concerning the extent to which demographic rates are density dependent or independent, and the extent to which they are influenced by spring temperatures. We calculated state-dependent harvest strategies for these models using stochastic dynamic programming and an objective function that maximized sustainable harvest, subject to a constraint on desired population size. As expected, attaining the largest mean objective value (i.e., the relative measure of management performance) depended on the ability to match a model-dependent optimal strategy with its generating model of population dynamics. The nine models suggested widely varying objective values regardless of the harvest strategy, with the density-independent models generally producing higher objective values than models with density-dependent survival. In the face of uncertainty as to which of the nine models is most appropriate, the optimal strategy assuming that both survival and reproduction were a function of goose abundance and spring temperatures maximized the expected minimum objective value (i.e., maxi–min). In contrast, the optimal strategy assuming equal model weights minimized the expected maximum loss in objective value. The expected value of eliminating model uncertainty was an increase in objective value of only 3.0%. This value represents the difference between the best that could be expected if the most appropriate model were known and the best that could be expected in the face of model uncertainty. The value of eliminating uncertainty about the survival process was substantially higher than that associated with the reproductive process, which is consistent with evidence that variation in survival is more important than variation in reproduction in relatively long-lived avian species. Comparing the expected objective value if the most appropriate model were known with that of the maxi–min robust strategy, we found the value of eliminating uncertainty to be an expected increase of 6.2% in objective value. This result underscores the conservatism of the maxi–min rule and suggests that risk-neutral managers would prefer the optimal strategy that maximizes expected value, which is also the strategy that is expected to minimize the maximum loss (i.e., a strategy based on equal model weights). The low value of information calculated for pink-footed geese suggests that a robust strategy (i.e., one in which no learning is anticipated) could be nearly as effective as an adaptive one (i.e., a strategy in which the relative credibility of models is assessed through time). Of course, an alternative explanation for the low value of information is that the set of population models we considered was too narrow to represent key uncertainties in population dynamics. Yet we know that questions about the presence of density dependence must be central to the development of a sustainable harvest strategy. And while there are potentially many environmental covariates that could help explain variation in survival or reproduction, our admission of models in which vital rates are drawn randomly from reasonable distributions represents a worst-case scenario for management. We suspect that much of the value of the various harvest strategies we calculated is derived from the fact that they are state dependent, such that appropriate harvest rates depend on population abundance and weather conditions, as well as our focus on an infinite time horizon for sustainability.
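
To make the decision-analytic quantities above concrete, the sketch below uses illustrative numbers only (not the study's models or objective values) to compute an equal-weights expected-value strategy, a maxi–min strategy, and the expected value of eliminating model uncertainty from a small objective-value matrix.

```python
# Minimal sketch: rows are alternative population models, columns are candidate
# harvest strategies, entries are the objective value achieved by each pairing.
import numpy as np

V = np.array([[10.0,  9.0,  7.5],      # hypothetical objective values
              [ 8.0,  9.5,  8.5],
              [ 6.0,  7.0,  9.0]])
weights = np.full(V.shape[0], 1.0 / V.shape[0])   # equal model weights

expected = weights @ V                  # expected value of each strategy
best_ev_strategy = int(np.argmax(expected))

worst_case = V.min(axis=0)              # worst outcome of each strategy
maximin_strategy = int(np.argmax(worst_case))

# expected value of perfect information: best per model vs. best under uncertainty
evpi = (weights @ V.max(axis=1)) - expected[best_ev_strategy]

print("expected-value strategy:", best_ev_strategy, "maxi-min strategy:", maximin_strategy)
print("expected value of eliminating model uncertainty:", round(evpi, 3))
```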

  4. The threatened self: Considerations of time, place, and uncertainty in advanced illness.

    PubMed

    Nanton, Veronica; Munday, Dan; Dale, Jeremy; Mason, Bruce; Kendall, Marilyn; Murray, Scott

    2016-05-01

    Loss of self and the transition to patient-hood have been widely discussed in relation to the experience of advanced illness. Individuals, however, often maintain identities or selves beyond those demanded by the circumstances of being a patient. This study explores the presentation of this personal identity and interactions between intrinsic and extrinsic elements that support or threaten its maintenance. In particular, this study examined the impact of uncertainty on the representations of self and the part played by the patient's health care professionals and the systems in which they are embedded, in limiting or reinforcing its effects. Complementary methods of ethnographic observation and serial narrative interviews were adopted to explore both the local social and health care context and the changing presentation of self by patients with advanced multimorbidity, chronic illness, and cancer. In total, 36 interviews were undertaken with 16 patients. Analysis was guided by concepts of time and place, combining contextual data with the unfolding patient narrative. Good pain and symptom control was a necessary, but not sufficient, condition for the maintenance of a personal identity. Essential agentic elements included knowledge of appropriate and immediate sources of help. Also important were a sense of control achieved through a shared understanding with health care professionals of the condition and active management of uncertainty. In addition, the maintenance of self depended on keeping a connection with aspects of life associated with a pre-illness identity. Critically, this self was contingent on external recognition, acknowledgement, and validation. Professional relationships that focus solely on the 'person as patient' may be insufficient for patients' needs. Health care professionals should seek to recognize and acknowledge the personal identity that may be critical to patients' sense of self-worth. Through an ongoing relationship guiding the patient through the uncertainties they face, health care professionals may play an essential role in sustaining the 'patient as person'. What is already known on this subject? Loss of self or personal identity occurs in a range of serious conditions. The sick self is incorporated in a process of identity reconstruction. Uncertainty is an inherent aspect of serious and advanced illness. Unmanaged uncertainty results in a range of negative psychological consequences that contribute to the loss of personal identity. Information and communication with health care professionals help patients manage uncertainty. What does this study add? Sufferers may retain a personal identity continuous with a pre-illness self using internal and external resources. The pre-illness self may be subsumed by the patient self especially at times of transition and maximum uncertainty. Acknowledgement and facilitation by health care professionals can enable the preservation of the pre-illness self. © 2015 The British Psychological Society.

  5. The application of Global Sensitivity Analysis to quantify the dominant input factors for hydraulic model simulations

    NASA Astrophysics Data System (ADS)

    Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten

    2015-04-01

    Predicting flood inundation extents using hydraulic models is subject to a number of critical uncertainties. For a specific event, these uncertainties are known to have a large influence on model outputs and any subsequent analyses made by risk managers. Hydraulic modellers often approach such problems by applying uncertainty analysis techniques such as the Generalised Likelihood Uncertainty Estimation (GLUE) methodology. However, these methods do not allow one to attribute which source of uncertainty has the most influence on the various model outputs that inform flood risk decision making. Another issue facing modellers is the amount of computational resource that is available to spend on modelling flood inundations that are 'fit for purpose' to the modelling objectives. Therefore a balance needs to be struck between computation time, realism and spatial resolution, and effectively characterising the uncertainty spread of predictions (for example from boundary conditions and model parameterisations). However, it is not fully understood how much of an impact each factor has on model performance, for example how much influence changing the spatial resolution of a model has on inundation predictions in comparison to other uncertainties inherent in the modelling process. Furthermore, when resampling fine scale topographic data in the form of a Digital Elevation Model (DEM) to coarser resolutions, there are a number of possible coarser DEMs that can be produced. Deciding which DEM is then chosen to represent the surface elevations in the model could also influence model performance. In this study we model a flood event using the hydraulic model LISFLOOD-FP and apply Sobol' Sensitivity Analysis to estimate which input factor, among the uncertainty in model boundary conditions, uncertain model parameters, the spatial resolution of the DEM and the choice of resampled DEM, has the most influence on a range of model outputs. These outputs include whole domain maximum inundation indicators and flood wave travel time in addition to temporally and spatially variable indicators. This enables us to assess whether the sensitivity of the model to various input factors is stationary in both time and space. Furthermore, competing models are assessed against observations of water depths from a historical flood event. Consequently we are able to determine which of the input factors has the most influence on model performance. Initial findings suggest the sensitivity of the model to different input factors varies depending on the type of model output assessed and at what stage during the flood hydrograph the model output is assessed. We have also found that initial decisions regarding the characterisation of the input factors, for example defining the upper and lower bounds of the parameter sample space, can be significant in influencing the implied sensitivities.
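
As a sketch of the variance-based approach named above, the following applies a generic pick-freeze Monte Carlo estimator of first-order and total-order Sobol' indices to a cheap placeholder function; it stands in for LISFLOOD-FP and the study's factor set, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    # placeholder for an expensive hydraulic simulation; columns might stand for
    # inflow scaling, channel roughness, DEM resolution, and resampled-DEM choice
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 1] * x[:, 2] + 0.01 * x[:, 3]

d, n = 4, 20000
A = rng.uniform(0.0, 1.0, (n, d))          # two independent input samples
B = rng.uniform(0.0, 1.0, (n, d))
yA, yB = model(A), model(B)
var_y = np.concatenate([yA, yB]).var()

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                    # replace factor i only ("pick-freeze")
    yABi = model(ABi)
    Si = np.mean(yB * (yABi - yA)) / var_y          # first-order index (Saltelli 2010)
    STi = 0.5 * np.mean((yA - yABi) ** 2) / var_y   # total-order index (Jansen)
    print(f"factor {i + 1}: S = {Si:.2f}, ST = {STi:.2f}")
```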

  6. Two-agent cooperative search using game models with endurance-time constraints

    NASA Astrophysics Data System (ADS)

    Sujit, P. B.; Ghose, Debasish

    2010-07-01

    In this article, the problem of two Unmanned Aerial Vehicles (UAVs) cooperatively searching an unknown region is addressed. The search region is discretized into hexagonal cells and each cell is assumed to possess an uncertainty value. The UAVs have to cooperatively search these cells taking limited endurance, sensor and communication range constraints into account. Due to limited endurance, the UAVs need to return to the base station for refuelling and also need to select a base station when multiple base stations are present. This article proposes a route planning algorithm that takes endurance time constraints into account and uses game theoretical strategies to reduce the uncertainty. The route planning algorithm selects only those cells that ensure the agent will return to any one of the available bases. A set of paths is formed from these cells, which the game theoretical strategies use to select a path that yields maximum uncertainty reduction. We explore non-cooperative Nash, cooperative and security strategies from game theory to enhance the search effectiveness. Monte Carlo simulations are carried out which show the superiority of the game theoretical strategies over a greedy strategy for different look-ahead step lengths. Within the game theoretical strategies, the non-cooperative Nash and cooperative strategies perform similarly in an ideal case, but the Nash strategy performs better than the cooperative strategy when the perceived information is different. We also propose a heuristic based on partitioning of the search space into sectors to reduce computational overhead without performance degradation.

  7. Robust fractional order sliding mode control of doubly-fed induction generator (DFIG)-based wind turbines.

    PubMed

    Ebrahimkhani, Sadegh

    2016-07-01

    Wind power plants have nonlinear dynamics and contain many uncertainties such as unknown nonlinear disturbances and parameter uncertainties. Thus, it is a difficult task to design a robust reliable controller for this system. This paper proposes a novel robust fractional-order sliding mode (FOSM) controller for maximum power point tracking (MPPT) control of a doubly fed induction generator (DFIG)-based wind energy conversion system. In order to enhance the robustness of the control system, uncertainties and disturbances are estimated using a fractional order uncertainty estimator. In the proposed method, a continuous control strategy is developed to achieve chattering-free fractional-order sliding-mode control, and no knowledge of the uncertainties and disturbances or their bounds is assumed. The boundedness and convergence properties of the closed-loop signals are proven using Lyapunov's stability theory. Simulations in the presence of various uncertainties were carried out to evaluate the effectiveness and robustness of the proposed control scheme. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  8. Uncertainty in flood damage estimates and its potential effect on investment decisions

    NASA Astrophysics Data System (ADS)

    Wagenaar, D. J.; de Bruijn, K. M.; Bouwer, L. M.; de Moel, H.

    2016-01-01

    This paper addresses the large differences that are found between damage estimates of different flood damage models. It explains how implicit assumptions in flood damage functions and maximum damages can have large effects on flood damage estimates. This explanation is then used to quantify the uncertainty in the damage estimates with a Monte Carlo analysis. The Monte Carlo analysis uses a damage function library with 272 functions from seven different flood damage models. The paper shows that the resulting uncertainties in estimated damages span roughly a factor of 2 to 5. The uncertainty is typically larger for flood events with small water depths and for smaller flood events. The implications of the uncertainty in damage estimates for flood risk management are illustrated by a case study in which the economically optimal investment strategy for a dike segment in the Netherlands is determined. The case study shows that the uncertainty in flood damage estimates can lead to significant over- or under-investments.
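
A minimal sketch of the Monte Carlo idea described above: the depth-damage curves, maximum-damage bounds, and water depth below are hypothetical placeholders, not the paper's 272-function library.

```python
import numpy as np

rng = np.random.default_rng(2)

# crude stand-in depth-damage curves: damage fraction at reference depths (m)
depths_ref = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
function_library = [
    np.array([0.0, 0.2, 0.4, 0.7, 1.0]),
    np.array([0.0, 0.1, 0.3, 0.6, 0.9]),
    np.array([0.0, 0.3, 0.5, 0.8, 1.0]),
]
max_damage_range = (150.0, 400.0)       # EUR/m2, hypothetical bounds

water_depth = 0.8                       # m, for one building / cell
n_mc = 10000
damages = np.empty(n_mc)
for k in range(n_mc):
    frac_curve = function_library[rng.integers(len(function_library))]
    frac = np.interp(water_depth, depths_ref, frac_curve)   # damage fraction at this depth
    max_damage = rng.uniform(*max_damage_range)             # sampled maximum damage
    damages[k] = frac * max_damage

p5, p95 = np.percentile(damages, [5, 95])
print(f"median {np.median(damages):.0f} EUR/m2, 90% range {p5:.0f}-{p95:.0f}, "
      f"spread factor ~ {p95 / p5:.1f}")
```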

  9. Bayesian forecasting and uncertainty quantifying of stream flows using Metropolis–Hastings Markov Chain Monte Carlo algorithm

    DOE PAGES

    Wang, Hongrui; Wang, Cheng; Wang, Ying; ...

    2017-04-05

    This paper presents a Bayesian approach using the Metropolis-Hastings Markov Chain Monte Carlo algorithm and applies this method to daily river flow rate forecasting and uncertainty quantification for the Zhujiachuan River, using data collected from Qiaotoubao Gage Station and 13 other gage stations in the Zhujiachuan watershed in China. The proposed method is also compared with conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of associated uncertainties. While the Bayesian method performs similarly in estimating the mean value of daily flow rate, it outperforms the conventional MLE method in uncertainty quantification, providing a narrower credible interval than the MLE confidence interval and thus a more precise estimate by using the related information from regional gage stations. As a result, the Bayesian MCMC method might be more favorable in uncertainty analysis and risk management.
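
A minimal sketch of the sampling machinery named above: a random-walk Metropolis-Hastings sampler applied to a toy two-parameter flow model with Gaussian errors. The model, priors, step sizes, and data are placeholders, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic "observed" daily flows from a toy two-parameter recession model
t = np.arange(100.0)
true_theta = np.array([50.0, 0.05])                    # (initial flow, recession rate)
obs = true_theta[0] * np.exp(-true_theta[1] * t) + rng.normal(0, 1.0, t.size)

def log_posterior(theta):
    q0, k = theta
    if q0 <= 0 or k <= 0:                              # flat prior on positive values
        return -np.inf
    resid = obs - q0 * np.exp(-k * t)
    return -0.5 * np.sum(resid**2)                     # Gaussian likelihood, sigma = 1

n_iter, step = 20000, np.array([1.0, 0.005])
chain = np.empty((n_iter, 2))
theta = np.array([30.0, 0.1])
logp = log_posterior(theta)
for i in range(n_iter):
    prop = theta + step * rng.normal(size=2)           # random-walk proposal
    logp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < logp_prop - logp:       # Metropolis accept/reject
        theta, logp = prop, logp_prop
    chain[i] = theta

burned = chain[n_iter // 2:]                           # discard burn-in
print("posterior means:", burned.mean(axis=0))
print("95% credible intervals:", np.percentile(burned, [2.5, 97.5], axis=0).T)
```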

  10. Mesoscale modeling of smoke transport from equatorial Southeast Asian Maritime Continent to the Philippines: First comparison of ensemble analysis with in situ observations

    NASA Astrophysics Data System (ADS)

    Ge, Cui; Wang, Jun; Reid, Jeffrey S.; Posselt, Derek J.; Xian, Peng; Hyer, Edward

    2017-05-01

    Atmospheric transport of smoke from the equatorial Southeast Asian Maritime Continent (Indonesia, Singapore, and Malaysia) to the Philippines was recently verified by the first-ever measurement of aerosol composition in the region of the Sulu Sea from a research vessel named Vasco. However, numerical modeling of such transport can have large uncertainties due to the lack of observations for parameterization schemes and for describing fire emission and meteorology in this region. These uncertainties are analyzed here, for the first time, with an ensemble of 24 Weather Research and Forecasting model with Chemistry (WRF-Chem) simulations. The ensemble reproduces the time series of surface non-sea-salt PM2.5 concentrations observed from the Vasco vessel during 17-30 September 2011 and overall agrees with satellite (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) and Moderate Resolution Imaging Spectroradiometer (MODIS)) and Aerosol Robotic Network (AERONET) data. The difference in meteorology between the National Centers for Environmental Prediction (NCEP's) Final (FNL) analysis and the European Centre for Medium-Range Weather Forecasts (ECMWF's) ERA renders the biggest spread in the ensemble (up to 20 μg m-3 or 200% in surface PM2.5), with FNL showing systematically superior results. The second biggest uncertainty is from fire emissions; the 2 day maximum Fire Locating and Modelling of Burning Emissions (FLAMBE) emission is superior to the instantaneous one. While the Grell-Devenyi (G3) and Betts-Miller-Janjić cumulus schemes only produce a difference of 3 μg m-3 in surface PM2.5 over the Sulu Sea, the ensemble mean agrees best with the Climate Prediction Center (CPC) MORPHing (CMORPH) spatial distribution of precipitation. The simulation with FNL-G3, 2 day maximum FLAMBE, and 800 m injection height outperforms the other ensemble members. Finally, the global transport model (Navy Aerosol Analysis and Prediction System (NAAPS)) outperforms all WRF-Chem simulations in describing smoke transport on 20 September 2011, suggesting the challenge of modeling tropical meteorology at mesoscale and finer scales.

  11. Maximum magnitude earthquakes induced by fluid injection

    USGS Publications Warehouse

    McGarr, Arthur F.

    2014-01-01

    Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
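
The bound described above lends itself to a one-line calculation. The sketch below assumes a typical modulus of rigidity of 3 × 10^10 Pa and the standard Hanks-Kanamori moment-magnitude conversion; the injected volumes are arbitrary examples, not values from the case histories.

```python
import math

def max_induced_magnitude(injected_volume_m3, rigidity_pa=3.0e10):
    m0_max = rigidity_pa * injected_volume_m3          # upper bound on seismic moment, N*m
    return (2.0 / 3.0) * (math.log10(m0_max) - 9.1)    # moment magnitude (Hanks-Kanamori)

for volume in (1e4, 1e5, 1e6):                         # m3 of injected fluid
    print(f"V = {volume:.0e} m3  ->  Mmax ~ {max_induced_magnitude(volume):.1f}")
```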

  12. Reproducibility of summertime diurnal precipitation over northern Eurasia simulated by CMIP5 climate models

    NASA Astrophysics Data System (ADS)

    Hirota, N.; Takayabu, Y. N.

    2015-12-01

    The reproducibility of diurnal precipitation over northern Eurasia simulated by CMIP5 climate models in their historical runs was evaluated, in comparison with station data (NCDC-9813) and satellite data (GSMaP-V5). We first calculated diurnal cycles by averaging precipitation at each local solar time (LST) in June-July-August during 1981-2000 over the continent of northern Eurasia (0-180E, 45-90N). Then we examined the occurrence time of maximum precipitation and the contribution of diurnally varying precipitation to the total precipitation. The contribution of diurnal precipitation was about 21% in both NCDC-9813 and GSMaP-V5. The maximum precipitation occurred at 18LST in NCDC-9813 but 16LST in GSMaP-V5, indicating some uncertainties even in the observational datasets. The diurnal contribution of the CMIP5 models varied largely from 11% to 62%, and their timing of the precipitation maximum ranged from 11LST to 20LST. Interestingly, the contribution and the timing had a strong negative correlation of -0.65. The models with larger diurnal precipitation showed the precipitation maximum earlier, around noon. Next, we compared the sensitivity of precipitation to surface temperature and tropospheric humidity between 5 models with large diurnal precipitation (LDMs) and 5 models with small diurnal precipitation (SDMs). Precipitation in LDMs showed high sensitivity to surface temperature, indicating its close relationship with local instability. On the other hand, synoptic disturbances were more active in SDMs, with a dominant role of large-scale condensation, and precipitation in SDMs was more closely related to tropospheric moisture. Therefore, the relative importance of local instability and synoptic disturbances was suggested to be an important factor in determining the contribution and timing of the diurnal precipitation. Acknowledgment: This study is supported by the Green Network of Excellence (GRENE) Program of the Ministry of Education, Culture, Sports, Science and Technology, Japan, and the Environment Research and Technology Development Fund (2-1503) of the Ministry of the Environment, Japan.
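
The two diagnostics mentioned above, the local solar time of the precipitation maximum and the diurnal contribution, can be sketched as follows. The data are synthetic, and the contribution metric used here is one possible definition, not necessarily the one adopted in the study.

```python
import numpy as np

rng = np.random.default_rng(4)
lst = np.arange(24)                                     # local solar time, hours
# synthetic JJA-mean diurnal cycle: background + afternoon peak + noise
diurnal = 0.10 + 0.04 * np.exp(-0.5 * ((lst - 17) / 2.5) ** 2) + rng.normal(0, 0.002, 24)

peak_hour = lst[np.argmax(diurnal)]                     # timing of maximum precipitation
total = diurnal.sum()
diurnal_component = diurnal - diurnal.mean()            # departure from the daily mean
contribution = np.abs(diurnal_component).sum() / total  # one possible definition

print(f"precipitation maximum at ~{peak_hour} LST")
print(f"diurnal contribution ~ {100 * contribution:.0f}% of total")
```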

  13. Surface-layer turbulence, energy balance and links to atmospheric circulations over a mountain glacier in the French Alps

    NASA Astrophysics Data System (ADS)

    Litt, Maxime; Sicart, Jean-Emmanuel; Six, Delphine; Wagnon, Patrick; Helgason, Warren D.

    2017-04-01

    Over Saint-Sorlin Glacier in the French Alps (45° N, 6.1° E; ~ 3 km2) in summer, we study the atmospheric surface-layer dynamics, turbulent fluxes, their uncertainties and their impact on surface energy balance (SEB) melt estimates. Results are classified with regard to large-scale forcing. We use high-frequency eddy-covariance data and mean air-temperature and wind-speed vertical profiles, collected in 2006 and 2009 in the glacier's atmospheric surface layer. We evaluate the turbulent fluxes with the eddy-covariance (sonic) and the profile method, and random errors and parametric uncertainties are evaluated by including different stability corrections and assuming different values for surface roughness lengths. For weak synoptic forcing, local thermal effects dominate the wind circulation. On the glacier, weak katabatic flows with a wind-speed maximum at low height (2-3 m) are detected 71 % of the time and are generally associated with small turbulent kinetic energy (TKE) and small net turbulent fluxes. Radiative fluxes dominate the SEB. When the large-scale forcing is strong, the wind in the valley aligns with the glacier flow, intense downslope flows are observed, no wind-speed maximum is visible below 5 m, and TKE and net turbulent fluxes are often intense. The net turbulent fluxes contribute significantly to the SEB. The surface-layer turbulence production is probably not at equilibrium with dissipation because of interactions of large-scale orographic disturbances with the flow when the forcing is strong, or low-frequency oscillations of the katabatic flow when the forcing is weak. In weak forcing, when TKE is low, all turbulent flux calculation methods provide similar fluxes. In strong forcing, when TKE is large, the choice of roughness lengths strongly impacts the net turbulent fluxes from the profile method and their uncertainties. However, the uncertainty in the total SEB remains too high with regard to the net observed melt to recommend one turbulent flux calculation method over another.

  14. Robust evaluation of time series classification algorithms for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Harvey, Dustin Y.; Worden, Keith; Todd, Michael D.

    2014-03-01

    Structural health monitoring (SHM) systems provide real-time damage and performance information for civil, aerospace, and mechanical infrastructure through analysis of structural response measurements. The supervised learning methodology for data-driven SHM involves computation of low-dimensional, damage-sensitive features from raw measurement data that are then used in conjunction with machine learning algorithms to detect, classify, and quantify damage states. However, these systems often suffer from performance degradation in real-world applications due to varying operational and environmental conditions. Probabilistic approaches to robust SHM system design suffer from incomplete knowledge of all conditions a system will experience over its lifetime. Info-gap decision theory enables nonprobabilistic evaluation of the robustness of competing models and systems in a variety of decision making applications. Previous work employed info-gap models to handle feature uncertainty when selecting various components of a supervised learning system, namely features from a pre-selected family and classifiers. In this work, the info-gap framework is extended to robust feature design and classifier selection for general time series classification through an efficient, interval arithmetic implementation of an info-gap data model. Experimental results are presented for a damage type classification problem on a ball bearing in a rotating machine. The info-gap framework in conjunction with an evolutionary feature design system allows for fully automated design of a time series classifier to meet performance requirements under maximum allowable uncertainty.

  15. Uncertainty Quantification and Assessment of CO2 Leakage in Groundwater Aquifers

    NASA Astrophysics Data System (ADS)

    Carroll, S.; Mansoor, K.; Sun, Y.; Jones, E.

    2011-12-01

    Complexity of subsurface aquifers and the geochemical reactions that control drinking water compositions complicate our ability to estimate the impact of leaking CO2 on groundwater quality. We combined lithologic field data from the High Plains Aquifer, numerical simulations, and uncertainty quantification analysis to assess the role of aquifer heterogeneity and physical transport on the extent of the CO2-impacted plume over a 100-year period. The High Plains aquifer is a major aquifer over much of the central United States where CO2 may be sequestered in depleted oil and gas reservoirs or deep saline formations. Input parameters considered included aquifer heterogeneity, permeability, porosity, regional groundwater flow, CO2 and TDS leakage rates over time, and the number of leakage source points. Sensitivity analysis suggests that variations in sand and clay permeability, correlation lengths, van Genuchten parameters, and CO2 leakage rate have the greatest impact on the impacted volume or the maximum distance from the leak source. A key finding is that the relative sensitivity of the parameters changes over the 100-year period. Reduced order models developed from regression of the numerical simulations show that the volume of the CO2-impacted aquifer increases over time, with a variance of two orders of magnitude.

  16. Modelling of plasma-based dry reforming: how do uncertainties in the input data affect the calculation results?

    NASA Astrophysics Data System (ADS)

    Wang, Weizong; Berthelot, Antonin; Zhang, Quanzhi; Bogaerts, Annemie

    2018-05-01

    One of the main issues in plasma chemistry modeling is that the cross sections and rate coefficients are subject to uncertainties, which yields uncertainties in the modeling results and hence hinders the predictive capabilities. In this paper, we reveal the impact of these uncertainties on the model predictions of plasma-based dry reforming in a dielectric barrier discharge. For this purpose, we performed a detailed uncertainty analysis and sensitivity study. 2000 different combinations of rate coefficients, based on the uncertainty from a log-normal distribution, are used to predict the uncertainties in the model output. The uncertainties in the electron density and electron temperature are around 11% and 8% at the maximum of the power deposition for a 70% confidence level. Still, this can have a major effect on the electron impact rates and hence on the calculated conversions of CO2 and CH4, as well as on the selectivities of CO and H2. For the CO2 and CH4 conversion, we obtain uncertainties of 24% and 33%, respectively. For the CO and H2 selectivity, the corresponding uncertainties are 28% and 14%, respectively. We also identify which reactions contribute most to the uncertainty in the model predictions. In order to improve the accuracy and reliability of plasma chemistry models, we recommend using only verified rate coefficients, and we point out the need for dedicated verification experiments.
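
A minimal sketch of the sampling strategy described above, using a generic toy chemistry rather than the paper's CO2/CH4 mechanism; the rate coefficients, uncertainty factors, and "model" are placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)

nominal_k = np.array([1.0e-9, 3.0e-10, 5.0e-11])   # nominal rate coefficients (arb. units)
uncertainty_factor = np.array([1.3, 2.0, 1.5])     # e.g. a factor-of-2 uncertainty on k2

def model_output(k):
    # placeholder for the plasma-chemistry model; returns a fake "conversion"
    return 100.0 * k[0] * k[1] / (k[0] * k[1] + k[2] * 1.0e-9)

n_samples = 2000
outputs = np.empty(n_samples)
for i in range(n_samples):
    # log-normal perturbation: sigma chosen so ~68% of samples fall within the factor
    sigma = np.log(uncertainty_factor)
    k_sample = nominal_k * np.exp(rng.normal(0.0, sigma))
    outputs[i] = model_output(k_sample)

lo, hi = np.percentile(outputs, [15, 85])           # ~70% confidence interval
print(f"conversion: {outputs.mean():.1f} +/- {(hi - lo) / 2:.1f} (70% CI half-width)")
```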

  17. A 45 ps time digitizer with a two-phase clock and dual-edge two-stage interpolation in a field programmable gate array device

    NASA Astrophysics Data System (ADS)

    Szplet, R.; Kalisz, J.; Jachna, Z.

    2009-02-01

    We present a time digitizer having 45 ps resolution, integrated in a field programmable gate array (FPGA) device. The time interval measurement is based on the two-stage interpolation method. A dual-edge two-phase interpolator is driven by the on-chip synthesized 250 MHz clock with precise phase adjustment. An improved dual-edge double synchronizer was developed to control the main counter. The nonlinearity of the digitizer's transfer characteristic is identified and utilized by the dedicated hardware code processor for the on-the-fly correction of the output data. Application of the presented ideas has resulted in a measurement uncertainty of the digitizer below 70 ps RMS over time intervals ranging from 0 to 1 s. The use of two-stage interpolation and a fast FIFO memory has allowed us to obtain a maximum measurement rate of five million measurements per second.

  18. Stochastic sediment property inversion in Shallow Water 06.

    PubMed

    Michalopoulou, Zoi-Heleni

    2017-11-01

    Time series received at a short distance from the source allow the identification of distinct paths; four of these are the direct path, the surface and bottom reflections, and the sediment reflection. In this work, a Gibbs sampling method is used for the estimation of the arrival times of these paths and the corresponding probability density functions. The arrival times for the first three paths are then employed along with linearization for the estimation of source range and depth, water column depth, and sound speed in the water. By propagating the arrival-time densities through the linearized inverse problem, densities are also obtained for the above parameters, providing maximum a posteriori estimates. These estimates are employed to calculate densities and point estimates of sediment sound speed and thickness using a non-linear, grid-based model. Density computation is an important aspect of this work, because those densities express the uncertainty in the inversion for sediment properties.

  19. On the uncertainty in single molecule fluorescent lifetime and energy emission measurements

    NASA Technical Reports Server (NTRS)

    Brown, Emery N.; Zhang, Zhenhua; Mccollom, Alex D.

    1995-01-01

    Time-correlated single photon counting has recently been combined with mode-locked picosecond pulsed excitation to measure the fluorescent lifetimes and energy emissions of single molecules in a flow stream. Maximum likelihood (ML) and least squares methods agree and are optimal when the number of detected photons is large; however, in single molecule fluorescence experiments the number of detected photons can be less than 20, 67% of those can be noise, and the detection time is restricted to 10 nanoseconds. Under the assumption that the photon signal and background noise are two independent inhomogeneous Poisson processes, we derive the exact joint arrival time probability density of the photons collected in a single counting experiment performed in the presence of background noise. The model obviates the need to bin experimental data for analysis, and makes it possible to analyze formally the effect of background noise on the photon detection experiment using both ML and Bayesian methods. For both methods we derive the joint and marginal probability densities of the fluorescent lifetime and fluorescent emission. The ML and Bayesian methods are compared in an analysis of simulated single molecule fluorescence experiments of Rhodamine 110 using different combinations of expected background noise and expected fluorescence emission. While both the ML and Bayesian procedures perform well for analyzing fluorescence emissions, the Bayesian methods provide more realistic measures of uncertainty in the fluorescent lifetimes. The Bayesian methods would be especially useful for measuring uncertainty in fluorescent lifetime estimates in current single molecule flow stream experiments where the expected fluorescence emission is low. Both the ML and Bayesian algorithms can be automated for applications in molecular biology.

  20. On the Uncertainty in Single Molecule Fluorescent Lifetime and Energy Emission Measurements

    NASA Technical Reports Server (NTRS)

    Brown, Emery N.; Zhang, Zhenhua; McCollom, Alex D.

    1996-01-01

    Time-correlated single photon counting has recently been combined with mode-locked picosecond pulsed excitation to measure the fluorescent lifetimes and energy emissions of single molecules in a flow stream. Maximum likelihood (ML) and least squares methods agree and are optimal when the number of detected photons is large; however, in single molecule fluorescence experiments the number of detected photons can be less than 20, 67 percent of those can be noise, and the detection time is restricted to 10 nanoseconds. Under the assumption that the photon signal and background noise are two independent inhomogeneous Poisson processes, we derive the exact joint arrival time probability density of the photons collected in a single counting experiment performed in the presence of background noise. The model obviates the need to bin experimental data for analysis, and makes it possible to analyze formally the effect of background noise on the photon detection experiment using both ML and Bayesian methods. For both methods we derive the joint and marginal probability densities of the fluorescent lifetime and fluorescent emission. The ML and Bayesian methods are compared in an analysis of simulated single molecule fluorescence experiments of Rhodamine 110 using different combinations of expected background noise and expected fluorescence emission. While both the ML and Bayesian procedures perform well for analyzing fluorescence emissions, the Bayesian methods provide more realistic measures of uncertainty in the fluorescent lifetimes. The Bayesian methods would be especially useful for measuring uncertainty in fluorescent lifetime estimates in current single molecule flow stream experiments where the expected fluorescence emission is low. Both the ML and Bayesian algorithms can be automated for applications in molecular biology.
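
The kind of estimation described in the two records above can be sketched as follows: a hypothetical, simplified maximum-likelihood fit of a lifetime from photon arrival times modeled as a mixture of a truncated exponential (signal) and a uniform background. It is not the authors' formulation and omits the inhomogeneous-Poisson machinery; all parameter values are assumed for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
T = 10.0                                    # detection window, ns
tau_true, signal_frac = 3.0, 0.4            # hypothetical lifetime and signal fraction
n = 20                                      # few detected photons, as in the text

# sample signal photons from a truncated exponential via the inverse CDF,
# and background photons uniformly over the window
is_signal = rng.uniform(size=n) < signal_frac
u = rng.uniform(size=n)
signal_times = -tau_true * np.log(1.0 - u * (1.0 - np.exp(-T / tau_true)))
arrivals = np.where(is_signal, signal_times, rng.uniform(0.0, T, n))

def neg_log_likelihood(params):
    tau, f = params
    if not (0.1 < tau < 50.0 and 0.0 < f < 1.0):
        return np.inf
    signal_pdf = np.exp(-arrivals / tau) / (tau * (1.0 - np.exp(-T / tau)))
    return -np.sum(np.log(f * signal_pdf + (1.0 - f) / T))

fit = minimize(neg_log_likelihood, x0=[2.0, 0.5], method="Nelder-Mead")
print("ML estimates: lifetime ~ %.2f ns, signal fraction ~ %.2f" % tuple(fit.x))
```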

  1. Effects of body position on exercise capacity and pulmonary vascular pressure-flow relationships.

    PubMed

    Forton, Kevin; Motoji, Yoshiki; Deboeck, Gael; Faoro, Vitalie; Naeije, Robert

    2016-11-01

    There has been a revival of interest in exercise testing of the pulmonary circulation for the diagnosis of pulmonary vascular disease, but there still is uncertainty about body position and the most relevant measurements. Doppler echocardiography pulmonary hemodynamic measurements were performed at progressively increased workloads in 26 healthy adult volunteers in supine, semirecumbent, and upright positions that were randomly assigned at 24-h intervals. Mean pulmonary artery pressure (mPAP) was estimated from the maximum tricuspid regurgitation jet velocity. Cardiac output was calculated from the left ventricular outflow velocity-time integral. Pulmonary vascular distensibility α-index, the percent change of vessel diameter per millimeter of mercury of mPAP, was calculated from multipoint mPAP-cardiac output plots. Body position did not affect maximum oxygen uptake (VO2max), maximum respiratory exchange ratio, ventilatory equivalent for carbon dioxide, or the slope of the mPAP-cardiac output relationship, which was on average 1.5 ± 0.4 mmHg·l-1·min-1. Maximum mPAP, cardiac output, and total pulmonary vascular resistance were, respectively, 34 ± 4 mmHg, 18 ± 3 l/min, and 1.9 ± 0.3 Wood units. However, the semirecumbent position was associated with a 10% decrease in maximum workload. Furthermore, cardiac output-workload or cardiac output-VO2 relationships were nonlinear and variable. These results suggest that body position does not affect maximum exercise testing of the pulmonary circulation when results are expressed as mPAP-cardiac output or maximum total pulmonary vascular resistance. Maximum workload is decreased in semirecumbent compared with upright exercise. Workload or VO2 cannot reliably be used as surrogates for cardiac output. Copyright © 2016 the American Physiological Society.

  2. Paleoclimatic Reconstructions From Plant Macrofossils For The Last Glacial Maximum, Middle Holocene, And Latest Holocene In The American Southwest

    NASA Astrophysics Data System (ADS)

    Thompson, R. S.; Anderson, K.; Pelltier, R.; Strickland, L. E.; Shafer, S. L.; Bartlein, P. J.

    2013-12-01

    Fossil plant remains preserved in a variety of geologic settings provide direct evidence of where individual species lived in the past, and there are long-established methods for paleoclimatic reconstructions based on comparisons between modern and past geographic ranges of plant species. In principle, these methods use relatively straightforward procedures that frequently result in what appear to be very precise estimates of past temperature and moisture conditions. The reconstructed estimates can be mapped for specific time slices for synoptic-scale reconstructions for data-model comparisons. Although paleobotanical data can provide apparently precise estimates of past climatic conditions, it is difficult to gauge the associated uncertainties. The estimates may be affected by the choice of modern calibration data, reconstruction methods employed, and whether the climatic variable under consideration is an important determinant of the distributions of the species being considered. For time-slice reconstructions, there are also issues involving the adequacy of the spatial coverage of the fossil data and the degree of variability through time. To examine some of these issues, we estimated annual precipitation and summer and winter temperatures for the Last Glacial Maximum (LGM, 21000 ± 1000 yr BP), Middle Holocene (MH, 6000 ± 500 yr BP), and Latest Holocene (LH, the last 500 yrs), based on the application of four quantitative approaches to paleobotanical assemblages preserved in packrat middens in the American Southwest. Our results indicate that historic variability and difficulties in interpolating climatic values to fossil sites may impose ranges of uncertainties of more than ± 1°C for temperature and ± 50 mm for annual precipitation. Climatic estimates based on modern midden assemblages generally fall within these ranges, although there may be biases that differ regionally. Samples of similar age and location provide similar climatic estimates, and the four approaches usually result in anomalies of the same sign, but with differing amplitudes. There is considerable variability among the anomalies for samples within each time slice, and different time slices have different geographic coverages of samples. The reconstructed temperature anomalies are similar between the MH and LH time slices, and generally fall within the uncertainties related to the modern climatic data. LGM anomalies were significantly colder, and for many samples exceeded -5°C in both winter and summer. There are what appear to be significant MH annual precipitation anomalies to the south (dry after 6.2 ka) and to the northwest (wet before 6.2 ka), but it may be misleading to compare these, given the differences in age. Positive annual precipitation anomalies for the LGM are more than 100 mm in the northwest, and smaller in the northeast and south.

  3. Biomass Thermogravimetric Analysis: Uncertainty Determination Methodology and Sampling Maps Generation

    PubMed Central

    Pazó, Jose A.; Granada, Enrique; Saavedra, Ángeles; Eguía, Pablo; Collazo, Joaquín

    2010-01-01

    The objective of this study was to develop a methodology for the determination of the maximum sampling error and confidence intervals of thermal properties obtained from thermogravimetric analysis (TG), including moisture, volatile matter, fixed carbon and ash content. The sampling procedure of the TG analysis was of particular interest and was conducted with care. The results of the present study were compared to those of a prompt analysis, and a correlation between the mean values and maximum sampling errors of the methods was not observed. In general, low and acceptable levels of uncertainty and error were obtained, demonstrating that the properties evaluated by TG analysis were representative of the overall fuel composition. The accurate determination of the thermal properties of biomass with precise confidence intervals is of particular interest in energetic biomass applications. PMID:20717532
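
As a small illustration of the kind of interval described above, the sketch below computes a Student-t confidence interval and the corresponding maximum sampling error for one TG property from replicate sub-samples; the replicate values are illustrative, not the study's data.

```python
import numpy as np
from scipy import stats

ash_content = np.array([4.8, 5.1, 4.9, 5.3, 5.0, 4.7])   # %, hypothetical replicates
n = ash_content.size
mean = ash_content.mean()
sem = ash_content.std(ddof=1) / np.sqrt(n)                # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)                     # two-sided 95% critical value

max_sampling_error = t_crit * sem
print(f"ash content: {mean:.2f} % +/- {max_sampling_error:.2f} % (95% CI)")
```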

  4. Line-averaging measurement methods to estimate the gap in the CO2 balance closure - possibilities, challenges, and uncertainties

    NASA Astrophysics Data System (ADS)

    Ziemann, Astrid; Starke, Manuela; Schütze, Claudia

    2017-11-01

    An imbalance of surface energy fluxes using the eddy covariance (EC) method is observed in global measurement networks although all necessary corrections and conversions are applied to the raw data. Mainly during nighttime, advection can occur, resulting in a closure gap that consequently should also affect the CO2 balances. There is a crucial need for representative concentration and wind data to measure advective fluxes. Ground-based remote sensing techniques are an ideal tool as they provide the spatially representative CO2 concentration together with wind components within the same voxel structure. For this purpose, the presented SQuAd (Spatially resolved Quantification of the Advection influence on the balance closure of greenhouse gases) approach applies an integrated method combination of acoustic and optical remote sensing. The innovative combination of acoustic travel-time tomography (A-TOM) and open-path Fourier-transform infrared spectroscopy (OP-FTIR) will enable an upscaling and enhancement of EC measurements. OP-FTIR instrumentation offers the significant advantage of real-time simultaneous measurements of line-averaged concentrations for CO2 and other greenhouse gases (GHGs). A-TOM is a scalable method to remotely resolve 3-D wind and temperature fields. The paper gives an overview of the proposed SQuAd approach and first results of experimental tests at the FLUXNET site Grillenburg in Germany. Preliminary results of the comprehensive experiments reveal a mean nighttime horizontal advection of CO2 of about 10 µmol m-2 s-1 estimated by the spatially integrating and representative SQuAd method. Additionally, uncertainties in determining CO2 concentrations using passive OP-FTIR and wind speed applying A-TOM are systematically quantified. The maximum uncertainty in the CO2 concentration, arising from environmental parameters, instrumental characteristics, and the retrieval procedure, was estimated at approximately 30 % for a single measurement. Instantaneous wind components can be derived with a maximum uncertainty of 0.3 m s-1 depending on sampling, signal analysis, and environmental influences on sound propagation. Averaging over a period of 30 min, the standard error of the mean values can be decreased by a factor of at least 0.5 for OP-FTIR and 0.1 for A-TOM depending on the required spatial resolution. The validation presented here focuses on the ability of the joint application of these two independent, nonintrusive methods to quantify advective fluxes.

  5. An individualized strategy to estimate the effect of deformable registration uncertainty on accumulated dose in the upper abdomen

    NASA Astrophysics Data System (ADS)

    Wang, Yibing; Petit, Steven F.; Vásquez Osorio, Eliana; Gupta, Vikas; Méndez Romero, Alejandra; Heijmen, Ben

    2018-06-01

    In the abdomen, it is challenging to assess the accuracy of deformable image registration (DIR) for individual patients, due to the lack of clear anatomical landmarks, which can hamper clinical applications that require high accuracy DIR, such as adaptive radiotherapy. In this study, we propose and evaluate a methodology for estimating the impact of uncertainties in DIR on calculated accumulated dose in the upper abdomen, in order to aid decision making in adaptive treatment approaches. Sixteen liver metastasis patients treated with SBRT were evaluated. Each patient had one planning and three daily treatment CT-scans. Each daily CT scan was deformably registered 132 times to the planning CT-scan, using a wide range of parameter settings for the registration algorithm. A subset of ‘realistic’ registrations was then objectively selected based on distances between mapped and target contours. The underlying 3D transformations of these registrations were used to assess the corresponding uncertainties in voxel positions and delivered dose, with a focus on accumulated maximum doses in the hollow OARs, i.e. esophagus, stomach, and duodenum. The number of realistic registrations varied from 5 to 109, depending on the patient, emphasizing the need for individualized registration parameters. Considering the realistic registrations for all patients, the 99th percentile of the voxel position uncertainties was 5.6 ± 3.3 mm. This translated into a variation (difference between the 1st and 99th percentiles) in accumulated Dmax in hollow OARs of up to 3.3 Gy. For one patient, a violation of the accumulated stomach dose outside the uncertainty band was detected. The observed variation in accumulated doses in the OARs related to registration uncertainty emphasizes the need to investigate the impact of this uncertainty for any DIR algorithm prior to clinical use for dose accumulation. The proposed method for assessing, on an individual patient basis, the impact of uncertainties in DIR on accumulated dose is in principle applicable to all DIR algorithms that allow variation in registration parameters.

  6. Exact Algorithms for Duplication-Transfer-Loss Reconciliation with Non-Binary Gene Trees.

    PubMed

    Kordi, Misagh; Bansal, Mukul S

    2017-06-01

    Duplication-Transfer-Loss (DTL) reconciliation is a powerful method for studying gene family evolution in the presence of horizontal gene transfer. DTL reconciliation seeks to reconcile gene trees with species trees by postulating speciation, duplication, transfer, and loss events. Efficient algorithms exist for finding optimal DTL reconciliations when the gene tree is binary. In practice, however, gene trees are often non-binary due to uncertainty in the gene tree topologies, and DTL reconciliation with non-binary gene trees is known to be NP-hard. In this paper, we present the first exact algorithms for DTL reconciliation with non-binary gene trees. Specifically, we (i) show that the DTL reconciliation problem for non-binary gene trees is fixed-parameter tractable in the maximum degree of the gene tree, (ii) present an exponential-time, but in-practice efficient, algorithm to track and enumerate all optimal binary resolutions of a non-binary input gene tree, and (iii) apply our algorithms to a large empirical data set of over 4700 gene trees from 100 species to study the impact of gene tree uncertainty on DTL-reconciliation and to demonstrate the applicability and utility of our algorithms. The new techniques and algorithms introduced in this paper will help biologists avoid incorrect evolutionary inferences caused by gene tree uncertainty.

  7. Tunable Diode Laser Absorption Spectroscopy Sensor for Calibration Free Humidity Measurements in Pure Methane and Low CO2 Natural Gas.

    PubMed

    Nwaboh, Javis Anyangwe; Pratzler, Sonja; Werhahn, Olav; Ebert, Volker

    2017-05-01

    We report a new direct tunable diode laser absorption spectroscopy (dTDLAS) sensor for absolute measurements of H2O in methane, ethane, propane, and low-CO2 natural gas. The sensor is operated with a 2.7 µm DFB laser, equipped with a high-pressure single-pass gas cell, and used to measure H2O amount-of-substance fractions in the range of 0.31-25,000 µmol/mol. Operating total gas pressures are up to 5000 hPa. The sensor has been characterized, addressing the traceability of the spectrometric results to the SI and the evaluation of the combined uncertainty, following the Guide to the Expression of Uncertainty in Measurement (GUM). The relative reproducibility of H2O amount-of-substance fraction measurements at 87 µmol/mol is 0.26% (0.23 µmol/mol). The maximum precision of the sensor was determined using a H2O in methane mixture, and found to be 40 nmol/mol for a time resolution of 100 s. This corresponds to a normalized detection limit of 330 nmol·mol-1·m·Hz-1/2. The relative combined uncertainty of H2O amount fraction measurements delivered by the sensor is 1.2%.

  8. Application of Monte Carlo Methods to Perform Uncertainty and Sensitivity Analysis on Inverse Water-Rock Reactions with NETPATH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGraw, David; Hershey, Ronald L.

    Methods were developed to quantify uncertainty and sensitivity for NETPATH inverse water-rock reaction models and to calculate dissolved inorganic carbon, carbon-14 groundwater travel times. The NETPATH models calculate upgradient groundwater mixing fractions that produce the downgradient target water chemistry along with amounts of mineral phases that are either precipitated or dissolved. Carbon-14 groundwater travel times are calculated based on the upgradient source-water fractions, carbonate mineral phase changes, and isotopic fractionation. Custom scripts and statistical code were developed for this study to facilitate modifying input parameters, running the NETPATH simulations, extracting relevant output, postprocessing the results, and producing graphs and summaries. The scripts read user-specified values for each constituent's coefficient of variation, distribution, sensitivity parameter, maximum dissolution or precipitation amounts, and number of Monte Carlo simulations. Monte Carlo methods for analysis of parametric uncertainty assign a distribution to each uncertain variable, sample from those distributions, and evaluate the ensemble output. The uncertainty in input affected the variability of outputs, namely source-water mixing, phase dissolution and precipitation amounts, and carbon-14 travel time. Although NETPATH may provide models that satisfy the constraints, it is up to the geochemist to determine whether the results are geochemically reasonable. Two example water-rock reaction models from previous geochemical reports were considered in this study. Sensitivity analysis was also conducted to evaluate the change in output caused by a small change in input, one constituent at a time. Results were standardized to allow for sensitivity comparisons across all inputs, which results in a representative value for each scenario. The approach yielded insight into the uncertainty in water-rock reactions and travel times. For example, there was little variation in source-water fraction between the deterministic and Monte Carlo approaches, and therefore, little variation in travel times between approaches. Sensitivity analysis proved very useful for identifying the most important input constraints (dissolved-ion concentrations), which can reveal the variables that have the most influence on source-water fractions and carbon-14 travel times. Once these variables are determined, more focused effort can be applied to determining the proper distribution for each constraint. Second, Monte Carlo results for water-rock reaction modeling showed discrete and nonunique results. The NETPATH models provide the solutions that satisfy the constraints of upgradient and downgradient water chemistry. There can exist multiple, discrete solutions for any scenario and these discrete solutions cause grouping of results. As a result, the variability in output may not easily be represented by a single distribution or a mean and variance, and care should be taken in the interpretation and reporting of results.
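
A minimal sketch of the input-perturbation step described above (hypothetical constraints and coefficients of variation, not the study's data or its custom scripts): each sampled row would correspond to one perturbed set of dissolved-ion constraints passed to an inverse-model run.

```python
import numpy as np

rng = np.random.default_rng(6)

constraints = {"Ca": 45.0, "Mg": 12.0, "Na": 30.0, "HCO3": 180.0}  # mg/L, hypothetical
cv = {"Ca": 0.05, "Mg": 0.05, "Na": 0.10, "HCO3": 0.03}            # coefficient of variation

n_runs = 1000
samples = {
    ion: rng.normal(value, cv[ion] * value, n_runs)   # normal perturbation with given CV
    for ion, value in constraints.items()
}

# each row is one perturbed input set that would feed a separate inverse-model run
for i in range(3):
    print({ion: round(samples[ion][i], 1) for ion in constraints})
```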

  9. Reservoir Performance Under Future Climate For Basins With Different Hydrologic Sensitivities

    NASA Astrophysics Data System (ADS)

    Mateus, M. C.; Tullos, D. D.

    2013-12-01

    In addition to long-standing uncertainties related to variable inflows and the market price of power, reservoir operators face a number of new uncertainties related to hydrologic nonstationarity, changing environmental regulations, and rapidly growing water and energy demands. This study investigates the impact, sensitivity, and uncertainty of changing hydrology on hydrosystem performance across different hydrogeologic settings. We evaluate the performance of reservoirs in the Santiam River basin, including a case study in the North Santiam Basin, with high permeability and extensive groundwater storage, and the South Santiam Basin, with low permeability, little groundwater storage and rapid runoff response. The modeling objective is to address the following study questions: (1) for the two hydrologic regimes, how does the flood management, water supply, and environmental performance of current reservoir operations change under future 2.5, 50 and 97.5 percentile streamflow projections; and (2) how much change in inflow is required to initiate a failure to meet downstream minimum or maximum flows in the future. We couple global climate model results with a rainfall-runoff model and a formal Bayesian uncertainty analysis to simulate future inflow hydrographs as inputs to a reservoir operations model. To evaluate reservoir performance under a changing climate, we calculate reservoir refill reliability, changes in flood frequency, and reservoir time and volumetric reliability of meeting minimum spring and summer flow targets. Reservoir performance under future hydrology appears to vary with hydrogeology. We find higher sensitivity to floods for the North Santiam Basin and higher sensitivity to minimum flow targets for the South Santiam Basin. Higher uncertainty is associated with basins that have more complex hydrogeology. Results from the model simulations contribute to understanding of the reliability and vulnerability of reservoirs to a changing climate.

  10. The finite, kinematic rupture properties of great-sized earthquakes since 1990

    USGS Publications Warehouse

    Hayes, Gavin

    2017-01-01

    Here, I present a database of >160 finite fault models for all earthquakes of M 7.5 and above since 1990, created using a consistent modeling approach. The use of a common approach facilitates easier comparisons between models, and reduces uncertainties that arise when comparing models generated by different authors, data sets and modeling techniques.I use this database to verify published scaling relationships, and for the first time show a clear and intriguing relationship between maximum potency (the product of slip and area) and average potency for a given earthquake. This relationship implies that earthquakes do not reach the potential size given by the tectonic load of a fault (sometimes called “moment deficit,” calculated via a plate rate over time since the last earthquake, multiplied by geodetic fault coupling). Instead, average potency (or slip) scales with but is less than maximum potency (dictated by tectonic loading). Importantly, this relationship facilitates a more accurate assessment of maximum earthquake size for a given fault segment, and thus has implications for long-term hazard assessments. The relationship also suggests earthquake cycles may not completely reset after a large earthquake, and thus repeat rates of such events may appear shorter than is expected from tectonic loading. This in turn may help explain the phenomenon of “earthquake super-cycles” observed in some global subduction zones.

  11. The finite, kinematic rupture properties of great-sized earthquakes since 1990

    NASA Astrophysics Data System (ADS)

    Hayes, Gavin P.

    2017-06-01

    Here, I present a database of >160 finite fault models for all earthquakes of M 7.5 and above since 1990, created using a consistent modeling approach. The use of a common approach facilitates easier comparisons between models, and reduces uncertainties that arise when comparing models generated by different authors, data sets and modeling techniques. I use this database to verify published scaling relationships, and for the first time show a clear and intriguing relationship between maximum potency (the product of slip and area) and average potency for a given earthquake. This relationship implies that earthquakes do not reach the potential size given by the tectonic load of a fault (sometimes called “moment deficit,” calculated via a plate rate over time since the last earthquake, multiplied by geodetic fault coupling). Instead, average potency (or slip) scales with but is less than maximum potency (dictated by tectonic loading). Importantly, this relationship facilitates a more accurate assessment of maximum earthquake size for a given fault segment, and thus has implications for long-term hazard assessments. The relationship also suggests earthquake cycles may not completely reset after a large earthquake, and thus repeat rates of such events may appear shorter than is expected from tectonic loading. This in turn may help explain the phenomenon of “earthquake super-cycles” observed in some global subduction zones.

  12. Long Duration Exposure Facility (LDEF) attitude measurements of the Interplanetary Dust Experiment

    NASA Technical Reports Server (NTRS)

    Kassel, Philip C., Jr.; Motley, William R., III; Singer, S. Fred; Mulholland, J. Derral; Oliver, John P.; Weinberg, Jerry L.; Cooke, William J.; Wortman, Jim J.

    1993-01-01

    Analysis of the data from the Long Duration Exposure Facility (LDEF) Interplanetary Dust Experiment (IDE) sun sensors has allowed a confirmation of the attitude of LDEF during its first year in orbit. Eight observations of the yaw angle at specific times were made and are tabulated in this paper. These values range from 4.3 to 12.4 deg, with a maximum uncertainty of plus or minus 2.0 deg and an average of 7.9 deg. No specific measurements of pitch or roll were made, but the data indicate that LDEF had an average pitch-down attitude of less than 0.7 deg.

  13. Simulation of water-table aquifers using specified saturated thickness

    USGS Publications Warehouse

    Sheets, Rodney A.; Hill, Mary C.; Haitjema, Henk M.; Provost, Alden M.; Masterson, John P.

    2014-01-01

    Simulating groundwater flow in a water-table (unconfined) aquifer can be difficult because the saturated thickness available for flow depends on model-calculated hydraulic heads. It is often possible to realize substantial time savings and still obtain accurate head and flow solutions by specifying an approximate saturated thickness a priori, thus linearizing this aspect of the model. This specified-thickness approximation often relies on the use of the “confined” option in numerical models, which has led to confusion and criticism of the method. This article reviews the theoretical basis for the specified-thickness approximation, derives an error analysis for relatively ideal problems, and illustrates the utility of the approximation with a complex test problem. In the transient version of our complex test problem, the specified-thickness approximation produced maximum errors in computed drawdown of about 4% of initial aquifer saturated thickness even when maximum drawdowns were nearly 20% of initial saturated thickness. In the final steady-state version, the approximation produced maximum errors in computed drawdown of about 20% of initial aquifer saturated thickness (mean errors of about 5%) when maximum drawdowns were about 35% of initial saturated thickness. In early phases of model development, such as during initial model calibration efforts, the specified-thickness approximation can be a very effective tool to facilitate convergence. The reduced execution time and increased stability obtained through the approximation can be especially useful when many model runs are required, such as during inverse model calibration, sensitivity and uncertainty analyses, multimodel analysis, and development of optimal resource management scenarios.

  14. Maximum nonlocality and minimum uncertainty using magic states

    NASA Astrophysics Data System (ADS)

    Howard, Mark

    2015-04-01

    We prove that magic states from the Clifford hierarchy give optimal solutions for tasks involving nonlocality and entropic uncertainty with respect to Pauli measurements. For both the nonlocality and uncertainty tasks, stabilizer states are the worst possible pure states, so our solutions have an operational interpretation as being highly nonstabilizer. The optimal strategy for a qudit version of the Clauser-Horne-Shimony-Holt game in prime dimensions is achieved by measuring maximally entangled states that are isomorphic to single-qudit magic states. These magic states have an appealingly simple form, and our proof shows that they are "balanced" with respect to all but one of the mutually unbiased stabilizer bases. Of all equatorial qudit states, magic states minimize the average entropic uncertainties for collision entropy and also, for small prime dimensions, min-entropy, a fact that may have implications for cryptography.

  15. Allowable Trajectory Variations for Space Shuttle Orbiter Entry-Aeroheating CFD

    NASA Technical Reports Server (NTRS)

    Wood, William A.; Alter, Stephen J.

    2008-01-01

    Reynolds-number criteria are developed for acceptable variations in Space Shuttle Orbiter entry trajectories for use in computational aeroheating analyses. The criteria determine if an existing computational fluid dynamics solution for a particular trajectory can be extrapolated to a different trajectory. The criteria development begins by estimating uncertainties for seventeen types of computational aeroheating data, such as boundary layer thickness, at exact trajectory conditions. For each type of datum, the allowable uncertainty contribution due to trajectory variation is set to be half of the value of the estimated exact-trajectory uncertainty. Then, for the twelve highest-priority datum types, Reynolds-number relations between trajectory variation and output uncertainty are determined. From these relations the criteria are established for the maximum allowable trajectory variations. The most restrictive criterion allows a 25% variation in Reynolds number at constant Mach number between trajectories.

  16. Ocean (de)oxygenation from the Last Glacial Maximum to the twenty-first century: insights from Earth System models.

    PubMed

    Bopp, L; Resplandy, L; Untersee, A; Le Mezo, P; Kageyama, M

    2017-09-13

    All Earth System models project a consistent decrease in the oxygen content of oceans for the coming decades because of ocean warming, reduced ventilation and increased stratification. But large uncertainties for these future projections of ocean deoxygenation remain for the subsurface tropical oceans where the major oxygen minimum zones are located. Here, we combine global warming projections, model-based estimates of natural short-term variability, as well as data and model estimates of the Last Glacial Maximum (LGM) ocean oxygenation to gain some insights into the major mechanisms of oxygenation changes across these different time scales. We show that the primary uncertainty on future ocean deoxygenation in the subsurface tropical oceans is in fact controlled by a robust compensation between decreasing oxygen saturation (O2sat) due to warming and decreasing apparent oxygen utilization (AOU) due to increased ventilation of the corresponding water masses. Modelled short-term natural variability in subsurface oxygen levels also reveals a compensation between O2sat and AOU, controlled by the latter. Finally, using a model simulation of the LGM, reproducing data-based reconstructions of past ocean (de)oxygenation, we show that the deoxygenation trend of the subsurface ocean during deglaciation was controlled by a combination of warming-induced decreasing O2sat and increasing AOU driven by a reduced ventilation of tropical subsurface waters. This article is part of the themed issue 'Ocean ventilation and deoxygenation in a warming world'. © 2017 The Author(s).
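    The compensation described above rests on the standard decomposition of dissolved oxygen into a solubility term and a utilization term (a textbook identity rather than a result specific to this paper):

        \[ \mathrm{O_2} \;=\; \mathrm{O_2^{sat}}(T, S) \;-\; \mathrm{AOU} \qquad\Longrightarrow\qquad \Delta \mathrm{O_2} \;=\; \Delta \mathrm{O_2^{sat}} \;-\; \Delta \mathrm{AOU}, \]

    so a warming-driven drop in saturation can be partly or wholly offset wherever stronger ventilation also lowers the apparent oxygen utilization.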

  17. Ocean (de)oxygenation from the Last Glacial Maximum to the twenty-first century: insights from Earth System models

    NASA Astrophysics Data System (ADS)

    Bopp, L.; Resplandy, L.; Untersee, A.; Le Mezo, P.; Kageyama, M.

    2017-08-01

    All Earth System models project a consistent decrease in the oxygen content of oceans for the coming decades because of ocean warming, reduced ventilation and increased stratification. But large uncertainties for these future projections of ocean deoxygenation remain for the subsurface tropical oceans where the major oxygen minimum zones are located. Here, we combine global warming projections, model-based estimates of natural short-term variability, as well as data and model estimates of the Last Glacial Maximum (LGM) ocean oxygenation to gain some insights into the major mechanisms of oxygenation changes across these different time scales. We show that the primary uncertainty on future ocean deoxygenation in the subsurface tropical oceans is in fact controlled by a robust compensation between decreasing oxygen saturation (O2sat) due to warming and decreasing apparent oxygen utilization (AOU) due to increased ventilation of the corresponding water masses. Modelled short-term natural variability in subsurface oxygen levels also reveals a compensation between O2sat and AOU, controlled by the latter. Finally, using a model simulation of the LGM, reproducing data-based reconstructions of past ocean (de)oxygenation, we show that the deoxygenation trend of the subsurface ocean during deglaciation was controlled by a combination of warming-induced decreasing O2sat and increasing AOU driven by a reduced ventilation of tropical subsurface waters. This article is part of the themed issue 'Ocean ventilation and deoxygenation in a warming world'.

  18. An analysis and demonstration of clock synchronization by VLBI. [Very Long Baseline Interferometry for Deep Space Net

    NASA Technical Reports Server (NTRS)

    Hurd, W. J.

    1974-01-01

    A prototype of a semi-real time system for synchronizing the Deep Space Net station clocks by radio interferometry was successfully demonstrated on August 30, 1972. The system utilized an approximate maximum likelihood estimation procedure for processing the data, thereby achieving essentially optimum time sync estimates for a given amount of data, or equivalently, minimizing the amount of data required for reliable estimation. Synchronization accuracies as good as 100 ns rms were achieved between Deep Space Stations 11 and 12, both at Goldstone, Calif. The accuracy can be improved by increasing the system bandwidth until the fundamental limitations due to baseline and source position uncertainties and atmospheric effects are reached. These limitations are under 10 ns for transcontinental baselines.
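    As a schematic illustration of the delay-estimation step only (not the actual Deep Space Net processing chain), the sketch below uses the fact that, for a common signal observed in additive white Gaussian noise at two stations, the maximum likelihood estimate of the relative clock offset is the lag that maximizes the cross-correlation of the two recordings. All names and signal parameters are synthetic assumptions.

        import numpy as np

        def estimate_offset(x, y, fs):
            """Return the lag (seconds) at which x best aligns with y, i.e. the
            cross-correlation peak; positive means x lags y."""
            x = x - x.mean()
            y = y - y.mean()
            corr = np.correlate(x, y, mode="full")
            lag = np.argmax(corr) - (len(y) - 1)
            return lag / fs

        # Synthetic demonstration: one band-limited signal seen at two stations
        fs = 1.0e6                                   # 1 MHz sampling, illustrative
        rng = np.random.default_rng(1)
        s = np.convolve(rng.normal(size=5000), np.ones(20) / 20, mode="same")
        true_offset = 37                             # samples
        x = s + 0.3 * rng.normal(size=s.size)
        y = np.roll(s, -true_offset) + 0.3 * rng.normal(size=s.size)
        print(estimate_offset(x, y, fs))             # approximately 37 / fs seconds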

  19. M≥7 Earthquake rupture forecast and time-dependent probability for the Sea of Marmara region, Turkey

    USGS Publications Warehouse

    Murru, Maura; Akinci, Aybige; Falcone, Guiseppe; Pucci, Stefano; Console, Rodolfo; Parsons, Thomas E.

    2016-01-01

    We forecast time-independent and time-dependent earthquake ruptures in the Marmara region of Turkey for the next 30 years using a new fault-segmentation model. We also augment time-dependent Brownian Passage Time (BPT) probability with static Coulomb stress changes (ΔCFF) from interacting faults. We calculate Mw > 6.5 probability from 26 individual fault sources in the Marmara region. We also consider a multisegment rupture model that allows higher-magnitude ruptures over some segments of the Northern branch of the North Anatolian Fault Zone (NNAF) beneath the Marmara Sea. A total of 10 different Mw=7.0 to Mw=8.0 multisegment ruptures are combined with the other regional faults at rates that balance the overall moment accumulation. We use Gaussian random distributions to treat parameter uncertainties (e.g., aperiodicity, maximum expected magnitude, slip rate, and consequently mean recurrence time) of the statistical distributions associated with each fault source. We then estimate uncertainties of the 30-year probability values for the next characteristic event obtained from three different models (Poisson, BPT, and BPT+ΔCFF) using a Monte Carlo procedure. The Gerede fault segment located at the eastern end of the Marmara region shows the highest 30-yr probability, with a Poisson value of 29%, and a time-dependent interaction probability of 48%. We find an aggregated 30-yr Poisson probability of M >7.3 earthquakes at Istanbul of 35%, which increases to 47% if time dependence and stress transfer are considered. We calculate a 2-fold probability gain (ratio time-dependent to time-independent) on the southern strands of the North Anatolian Fault Zone.
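    For readers unfamiliar with the renewal model used here, the sketch below evaluates a Brownian Passage Time (inverse-Gaussian) density and the conditional probability of an event in the next 30 years given the time already elapsed. The mean recurrence time, aperiodicity, and elapsed time are illustrative placeholders, not the values of this study, and the stress-interaction (ΔCFF) adjustment is not included.

        import numpy as np
        from scipy.integrate import quad

        def bpt_pdf(t, mu, alpha):
            """Brownian Passage Time density with mean recurrence mu and aperiodicity alpha."""
            return np.sqrt(mu / (2.0 * np.pi * alpha**2 * t**3)) * \
                   np.exp(-(t - mu)**2 / (2.0 * mu * alpha**2 * t))

        def conditional_probability(t_elapsed, window, mu, alpha):
            """P(event within `window` years | no event in the last `t_elapsed` years)."""
            num, _ = quad(bpt_pdf, t_elapsed, t_elapsed + window, args=(mu, alpha))
            den, _ = quad(bpt_pdf, t_elapsed, np.inf, args=(mu, alpha))
            return num / den

        # Placeholder parameters: mean recurrence 300 yr, aperiodicity 0.5, 250 yr elapsed
        print(conditional_probability(t_elapsed=250.0, window=30.0, mu=300.0, alpha=0.5))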

  20. Optimal robust control strategy of a solid oxide fuel cell system

    NASA Astrophysics Data System (ADS)

    Wu, Xiaojuan; Gao, Danhui

    2018-01-01

    Optimal control can ensure safe system operation at high efficiency. However, only a few papers discuss optimal control strategies for solid oxide fuel cell (SOFC) systems. Moreover, existing methods ignore the impact of parameter uncertainty on instantaneous system performance. In real SOFC systems, several parameters, such as the load current, vary with operating conditions and cannot be identified exactly. Therefore, a robust optimal control strategy is proposed, which involves three parts: a SOFC model with parameter uncertainty, a robust optimizer, and robust controllers. During model building, bounds on the uncertain parameter are extracted using a Monte Carlo algorithm. To achieve the maximum efficiency, a two-space particle swarm optimization approach is employed to obtain optimal operating points, which are used as the set points of the controllers. To ensure safe SOFC operation, two feed-forward controllers and a higher-order robust sliding-mode controller are then designed to control the fuel utilization ratio, air excess ratio, and stack temperature. The results show that the proposed robust optimal control method maintains safe SOFC operation at maximum efficiency under load and parameter-uncertainty variations.
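    The operating-point search described above can be sketched with a generic global-best particle swarm optimizer. The snippet below is a simplified stand-in (plain PSO rather than the paper's two-space variant), and the toy efficiency surface, variable bounds, and tuning constants are all assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(2)

        def efficiency(x):
            """Toy stand-in for an SOFC efficiency map over two operating variables."""
            fuel_util, air_ratio = x[..., 0], x[..., 1]
            return -(fuel_util - 0.85) ** 2 - 0.5 * (air_ratio - 2.0) ** 2

        def pso_maximize(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
            lo, hi = np.array(bounds, dtype=float).T
            x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
            v = np.zeros_like(x)
            pbest, pbest_val = x.copy(), f(x)
            gbest = pbest[np.argmax(pbest_val)]
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                val = f(x)
                better = val > pbest_val
                pbest[better], pbest_val[better] = x[better], val[better]
                gbest = pbest[np.argmax(pbest_val)]
            return gbest            # candidate optimal operating point (controller set point)

        print(pso_maximize(efficiency, bounds=[(0.5, 0.95), (1.2, 3.0)]))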

  1. Evaluation of the impact of wind and hydraulic maneuvers on the calculation of natural inflows by water balance for a hydroelectric reservoir

    NASA Astrophysics Data System (ADS)

    Roy, Mathieu

    Natural inflow is an important piece of data for a water resource manager. In fact, Hydro-Quebec uses historical natural inflow data to perform a daily prediction of the amount of water that will be received in each of its hydroelectric reservoirs. This prediction allows the establishment of reservoir operating rules in order to optimize hydropower without compromising the safety of hydraulic structures. To obtain an accurate prediction, it follows that the system's input needs to be very well known. However, it can be very difficult to accurately measure the natural supply of a set of regulated reservoirs. Therefore, Hydro-Quebec uses an indirect method of calculation. This method consists of evaluating the reservoir's inflow using the water balance equation. Yet, this equation is not immune to errors and uncertainties. Water level measurement is an important input for computing the water balance equation. However, several sources of uncertainty, including the effects of wind and hydraulic maneuvers, can affect the readings of limnimetric gages. Fluctuations in water level caused by these effects carry over into the water balance equation. Consequently, the natural inflow signal may become noisy and affected by external errors. The main objective of this report is to evaluate the uncertainty caused by the effects of wind and hydraulic maneuvers on the water balance equation. To this end, hydrodynamic models of the Outardes 4 and Gouin reservoirs were prepared. According to the literature review, wind effects can be studied either by an unsteady state approach or by assuming a steady state. Unsteady state simulations of wind effects on reservoirs Gouin and Outardes 4 were performed by hydrodynamic modelling. Consideration of an unsteady state implies that the wind conditions vary throughout the simulation. This feature allows taking into account the temporal effect of wind duration. In addition, it also allows the consideration of inertial forces such as seiches, which are caused by wind conditions that can vary abruptly. Once the models were calibrated, unsteady state simulations were conducted in a closed system where unsteady observed winds were the only forces included. From the simulated water levels obtained at each gage, the water balance equation was calculated to determine the daily uncertainty of natural inflow in unsteady conditions. At Outardes 4, a maximum uncertainty of 20 m3/s was estimated during the month of October 2010. At the Gouin reservoir, a maximum uncertainty of 340 m3/s was estimated during the month of July 2012. Steady state modelling is another approach to evaluate wind effect uncertainty in the water balance equation. This type of approach consists of assuming that the water level is instantly tilted under the influence of wind. Hence, the temporal effect of wind duration and seiches cannot be taken into account. However, the advantage of steady state modelling is that it is better suited than unsteady state modelling to evaluate wind uncertainty in real time. Two steady state modelling methods were tested to estimate the water level difference between gages as a function of wind characteristics: hydrodynamic modelling and non-parametric regression. It was found that non-parametric models are more efficient when it comes to estimating water level differences between gages.
However, the use of the hydrodynamic model demonstrated that, to study wind uncertainty in the water balance equation, it is preferable to assess wind responses individually at each gage instead of using water level differences. Finally, a method for combining water level gage observations has been developed. It reduces the impacts of wind and hydraulic maneuvers on the water balance equation. This method, which is applicable in real time, consists of assigning a variable weight to each limnimetric gage. In other words, the weights automatically adjust in order to minimize the steady-state modeled wind responses. The estimation of hydraulic maneuvers has also been included in the gage weight adjustment. It was found that this new combination method allows the correction of a noisy natural inflow signal under wind and hydraulic maneuver effects. However, some fluctuations persist, which reflects the complexity of correcting these effects in a real-time daily water balance equation. (Abstract shortened by UMI.)
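    The back-calculation at the heart of this work can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (the stage-storage curve, level series, and release values are invented, and Hydro-Quebec's operational procedure is more involved): natural inflow is recovered as the storage change implied by the measured levels plus the controlled outflow, which is precisely why wind-induced tilting of the measured levels propagates directly into the inflow estimate.

        import numpy as np

        def natural_inflow(levels_m, releases_m3s, stage_storage, dt_s=86400.0):
            """Back-calculate daily natural inflow (m^3/s) from a reservoir water balance.
            levels_m: daily mean levels (m), length N+1; releases_m3s: daily mean total
            outflow, turbined plus spilled (m^3/s), length N; stage_storage: level -> m^3."""
            storage = np.array([stage_storage(h) for h in levels_m])
            dS = np.diff(storage)                        # daily change in storage (m^3)
            return dS / dt_s + np.asarray(releases_m3s, dtype=float)

        # Hypothetical linear stage-storage curve and data, for illustration only
        curve = lambda h: 4.0e7 * (h - 350.0)            # m^3 per metre above a 350 m datum
        levels = [358.20, 358.24, 358.31, 358.29]        # a wind setup error here biases dS
        releases = [420.0, 415.0, 430.0]
        print(natural_inflow(levels, releases, curve))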

  2. The Importance of Behavioral Thresholds and Objective Functions in Contaminant Transport Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Sykes, J. F.; Kang, M.; Thomson, N. R.

    2007-12-01

    The TCE release from The Lockformer Company in Lisle, Illinois resulted in a plume in a confined aquifer that is more than 4 km long and impacted more than 300 residential wells. Many of the wells are on the fringe of the plume and have concentrations that did not exceed 5 ppb. The settlement for the Chapter 11 bankruptcy protection of Lockformer involved the establishment of a trust fund that compensates individuals with cancer, with payments based on cancer type, estimated TCE concentration in the well, and the duration of exposure to TCE. The estimation of early arrival times, and hence of low-likelihood events, is critical in determining the eligibility of an individual for compensation. Thus, an emphasis must be placed on the accuracy of the leading tail region in the likelihood distribution of possible arrival times at a well. The estimation of TCE arrival time, using a three-dimensional analytical solution, involved parameter estimation and uncertainty analysis. Parameters in the model included TCE source parameters, groundwater velocities, dispersivities, and the TCE decay coefficient for both the confining layer and the bedrock aquifer. Numerous objective functions, which include the well-known L2-estimator, robust estimators (L1-estimators and M-estimators), penalty functions, and dead zones, were incorporated in the parameter estimation process to treat insufficiencies in both the model and observational data due to errors, biases, and limitations. The concept of equifinality was adopted, and multiple maximum likelihood parameter sets were accepted if pre-defined physical criteria were met. The criteria ensured that a valid solution predicted TCE concentrations for all TCE-impacted areas. Monte Carlo sampling was found to be inadequate for uncertainty analysis of this case study due to its inability to find parameter sets that meet the predefined physical criteria. Successful results were achieved using a Dynamically Dimensioned Search sampling methodology that inherently accounts for parameter correlations and does not require assumptions regarding parameter distributions. For uncertainty analysis, multiple parameter sets were obtained using a modified Cauchy M-estimator. Penalty functions had to be incorporated into the objective function definitions to generate a sufficient number of acceptable parameter sets. The combined effect of optimization and the application of the physical criteria performs the function of behavioral thresholds by reducing anomalies and by removing parameter sets with high objective function values. The factors that are important to the creation of an uncertainty envelope for TCE arrival at wells are outlined in this work. In general, greater uncertainty appears to be present at the tails of the distribution. For a refinement of the uncertainty envelopes, the application of additional physical criteria or behavioral thresholds is recommended.
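    To make the role of the robust objective functions concrete, the sketch below contrasts an ordinary least-squares objective with a Cauchy M-estimator loss, which grows only logarithmically and therefore down-weights gross outliers or structural-error residuals. This is the generic textbook form with a commonly used tuning constant, not the modified Cauchy estimator or the penalty terms actually used in the study.

        import numpy as np

        def l2_objective(residuals):
            return float(np.sum(np.asarray(residuals) ** 2))

        def cauchy_objective(residuals, c=2.385):
            """Cauchy (Lorentzian) M-estimator loss; c = 2.385 is a common default."""
            r = np.asarray(residuals) / c
            return float(np.sum(0.5 * c**2 * np.log1p(r**2)))

        res = np.array([0.1, -0.3, 0.2, 5.0])    # one gross outlier
        print(l2_objective(res), cauchy_objective(res))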

  3. Impacts of Process and Prediction Uncertainties on Projected Hanford Waste Glass Amount

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gervasio, Vivianaluxa; Vienna, John D.; Kim, Dong-Sang

    Analyses were performed to evaluate the impacts of using the advanced glass models, constraints (Vienna et al. 2016), and uncertainty descriptions on projected Hanford glass mass. The maximum allowable WOL was estimated for waste compositions while simultaneously satisfying all applicable glass property and composition constraints with sufficient confidence. Different components of prediction and composition/process uncertainties were systematically included in the calculations to evaluate their impacts on glass mass. The analyses estimated the production of 23,360 MT of IHLW glass when no uncertainties were taken into account. Accounting for prediction and composition/process uncertainties resulted in a 5.01 relative percent increase in estimated glass mass (24,531 MT). Roughly equal impacts were found for prediction uncertainties (2.58 RPD) and composition/process uncertainties (2.43 RPD). ILAW mass was predicted to be 282,350 MT without uncertainty and with waste loading “line” rules in place. Accounting for prediction and composition/process uncertainties resulted in only a 0.08 relative percent increase in estimated glass mass, to 282,562 MT. Without application of line rules, the glass mass decreases by 10.6 relative percent (252,490 MT) for the case with no uncertainties. Addition of prediction uncertainties increases glass mass by 1.32 relative percent, and the addition of composition/process uncertainties increases glass mass by an additional 7.73 relative percent (9.06 relative percent increase combined). The glass mass estimate without line rules (275,359 MT) was 2.55 relative percent lower than that with the line rules (282,562 MT), after accounting for all applicable uncertainties.

  4. Quantifying the intra-annual uncertainties in climate change assessment over 10 sub-basins across the Pacific Northwest US

    NASA Astrophysics Data System (ADS)

    Ahmadalipour, Ali; Moradkhani, Hamid; Rana, Arun

    2017-04-01

    Uncertainty is an inevitable feature of climate change impact assessments. Understanding and quantifying the different sources of uncertainty is highly important and can help modeling agencies improve current models and scenarios. In this study, we have assessed the future changes in three climate variables (i.e. precipitation, maximum temperature, and minimum temperature) over 10 sub-basins across the Pacific Northwest US. To conduct the study, 10 statistically downscaled CMIP5 GCMs from two downscaling methods (i.e. BCSD and MACA) were utilized at 1/16 degree spatial resolution for the historical period of 1970-2000 and the future period of 2010-2099. For the future projections, two scenarios, RCP4.5 and RCP8.5, were used. Furthermore, Bayesian Model Averaging (BMA) was employed to develop a probabilistic future projection for each climate variable. Results indicate the superiority of the BMA simulations compared to individual models. Increases in temperature and precipitation are projected at the annual timescale. However, the changes are not uniform across seasons. Model uncertainty is shown to be the major source of uncertainty, while downscaling uncertainty also contributes significantly to the total, especially in summer.
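    As an illustration of the ensemble-combination idea only (a deliberate simplification: operational BMA estimates the weights and model variances jointly, typically by expectation-maximization), the sketch below weights hypothetical downscaled model series by their Gaussian likelihood against observations over a training period and forms the weighted projection. All series, biases, and the error scale are invented.

        import numpy as np
        from scipy.stats import norm

        def bma_weights(model_series, obs, sigma):
            """Crude likelihood-based weights for a set of model series."""
            loglik = np.array([norm.logpdf(obs, m, sigma).sum() for m in model_series])
            w = np.exp(loglik - loglik.max())
            return w / w.sum()

        rng = np.random.default_rng(5)
        obs = 10.0 + rng.normal(0.0, 1.0, 120)                  # "observed" monthly values
        models = [obs + rng.normal(b, 1.0, 120) for b in (0.2, 0.8, 1.5)]   # growing bias
        w = bma_weights(models, obs, sigma=1.0)
        combined = (w[:, None] * np.array(models)).sum(axis=0)  # weighted ensemble series
        print(np.round(w, 3), np.round(combined[:5], 2))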

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, C.; Hanany, S.; Baccigalupi, C.

    We extend a general maximum likelihood foreground estimation for cosmic microwave background (CMB) polarization data to include estimation of instrumental systematic effects. We focus on two particular effects: frequency band measurement uncertainty and instrumentally induced frequency dependent polarization rotation. We assess the bias induced on the estimation of the B-mode polarization signal by these two systematic effects in the presence of instrumental noise and uncertainties in the polarization and spectral index of Galactic dust. Degeneracies between uncertainties in the band and polarization angle calibration measurements and in the dust spectral index and polarization increase the uncertainty in the extracted CMB B-mode power, and may give rise to a biased estimate. We provide a quantitative assessment of the potential bias and increased uncertainty in an example experimental configuration. For example, we find that with 10% polarized dust, a tensor to scalar ratio of r = 0.05, and the instrumental configuration of the E and B experiment balloon payload, the estimated CMB B-mode power spectrum is recovered without bias when the frequency band measurement has 5% uncertainty or less, and the polarization angle calibration has an uncertainty of up to 4°.

  6. Analysis of mean seismic ground motion and its uncertainty based on the UCERF3 geologic slip rate model with uncertainty for California

    USGS Publications Warehouse

    Zeng, Yuehua

    2018-01-01

    The Uniform California Earthquake Rupture Forecast v.3 (UCERF3) model (Field et al., 2014) considers epistemic uncertainty in fault‐slip rate via the inclusion of multiple rate models based on geologic and/or geodetic data. However, these slip rates are commonly clustered about their mean value and do not reflect the broader distribution of possible rates and associated probabilities. Here, we consider both a double‐truncated 2σ Gaussian and a boxcar distribution of slip rates and use a Monte Carlo simulation to sample the entire range of the distribution for California fault‐slip rates. We compute the seismic hazard following the methodology and logic‐tree branch weights applied to the 2014 national seismic hazard model (NSHM) for the western U.S. region (Petersen et al., 2014, 2015). By applying a new approach developed in this study to the probabilistic seismic hazard analysis (PSHA) using precomputed rates of exceedance from each fault as a Green’s function, we reduce the computer time by about 10^5‐fold and apply it to the mean PSHA estimates with 1000 Monte Carlo samples of fault‐slip rates to compare with results calculated using only the mean or preferred slip rates. The difference in the mean probabilistic peak ground motion corresponding to a 2% in 50‐yr probability of exceedance is less than 1% on average over all of California for both the Gaussian and boxcar probability distributions for slip‐rate uncertainty but reaches about 18% in areas near faults compared with that calculated using the mean or preferred slip rates. The average uncertainties in 1σ peak ground‐motion level are 5.5% and 7.3% of the mean with the relative maximum uncertainties of 53% and 63% for the Gaussian and boxcar probability density function (PDF), respectively.
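    The slip-rate sampling step described here can be illustrated with a short sketch. The snippet below draws slip rates from either a double-truncated (2σ) Gaussian or a boxcar distribution centered on a preferred rate; the fault parameters are placeholders, and the hazard calculation that would consume these samples is not reproduced.

        import numpy as np

        rng = np.random.default_rng(42)

        def sample_slip_rate(mean, sigma, dist="gaussian", n=1000):
            """Slip-rate samples (mm/yr) from a 2-sigma-truncated Gaussian or a boxcar."""
            if dist == "gaussian":
                out = np.empty(0)
                while out.size < n:                    # rejection sampling within +/- 2 sigma
                    s = rng.normal(mean, sigma, size=n)
                    out = np.concatenate([out, s[np.abs(s - mean) <= 2.0 * sigma]])
                return out[:n]
            return rng.uniform(mean - 2.0 * sigma, mean + 2.0 * sigma, size=n)

        # Hypothetical fault: preferred rate 5 mm/yr, standard deviation 1 mm/yr
        for dist in ("gaussian", "boxcar"):
            rates = sample_slip_rate(5.0, 1.0, dist=dist)
            print(dist, rates.mean().round(2), rates.std().round(2))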

  7. Use of Linear Prediction Uncertainty Analysis to Guide Conditioning of Models Simulating Surface-Water/Groundwater Interactions

    NASA Astrophysics Data System (ADS)

    Hughes, J. D.; White, J.; Doherty, J.

    2011-12-01

    Linear prediction uncertainty analysis in a Bayesian framework was applied to guide the conditioning of an integrated surface water/groundwater model that will be used to predict the effects of groundwater withdrawals on surface-water and groundwater flows. Linear prediction uncertainty analysis is an effective approach for identifying (1) raw and processed data most effective for model conditioning prior to inversion, (2) specific observations and periods of time critically sensitive to specific predictions, and (3) additional observation data that would reduce model uncertainty relative to specific predictions. We present results for a two-dimensional groundwater model of a 2,186 km2 area of the Biscayne aquifer in south Florida implicitly coupled to a surface-water routing model of the actively managed canal system. The model domain includes 5 municipal well fields withdrawing more than 1 Mm3/day and 17 operable surface-water control structures that control freshwater releases from the Everglades and freshwater discharges to Biscayne Bay. More than 10 years of daily observation data from 35 groundwater wells and 24 surface water gages are available to condition model parameters. A dense parameterization was used to fully characterize the contribution of the inversion null space to predictive uncertainty and included bias-correction parameters. This approach allows better resolution of the boundary between the inversion null space and solution space. Bias-correction parameters (e.g., rainfall, potential evapotranspiration, and structure flow multipliers) absorb information that is present in structural noise that may otherwise contaminate the estimation of more physically-based model parameters. This allows greater precision in predictions that are entirely solution-space dependent, and reduces the propensity for bias in predictions that are not. Results show that application of this analysis is an effective means of identifying those surface-water and groundwater data, both raw and processed, that minimize predictive uncertainty, while simultaneously identifying the maximum solution-space dimensionality of the inverse problem supported by the data.

  8. Dealing with uncertainty in landscape genetic resistance models: a case of three co-occurring marsupials.

    PubMed

    Dudaniec, Rachael Y; Worthington Wilmer, Jessica; Hanson, Jeffrey O; Warren, Matthew; Bell, Sarah; Rhodes, Jonathan R

    2016-01-01

    Landscape genetics lacks explicit methods for dealing with the uncertainty in landscape resistance estimation, which is particularly problematic when sample sizes of individuals are small. Unless uncertainty can be quantified, valuable but small data sets may be rendered unusable for conservation purposes. We offer a method to quantify uncertainty in landscape resistance estimates using multimodel inference as an improvement over single model-based inference. We illustrate the approach empirically using co-occurring, woodland-preferring Australian marsupials within a common study area: two arboreal gliders (Petaurus breviceps, and Petaurus norfolcensis) and one ground-dwelling antechinus (Antechinus flavipes). First, we use maximum-likelihood and a bootstrap procedure to identify the best-supported isolation-by-resistance model out of 56 models defined by linear and non-linear resistance functions. We then quantify uncertainty in resistance estimates by examining parameter selection probabilities from the bootstrapped data. The selection probabilities provide estimates of uncertainty in the parameters that drive the relationships between landscape features and resistance. We then validate our method for quantifying uncertainty using simulated genetic and landscape data showing that for most parameter combinations it provides sensible estimates of uncertainty. We conclude that small data sets can be informative in landscape genetic analyses provided uncertainty can be explicitly quantified. Being explicit about uncertainty in landscape genetic models will make results more interpretable and useful for conservation decision-making, where dealing with uncertainty is critical. © 2015 John Wiley & Sons Ltd.

  9. Study of synthesis techniques for insensitive aircraft control systems

    NASA Technical Reports Server (NTRS)

    Harvey, C. A.; Pope, R. E.

    1977-01-01

    Insensitive flight control system design criteria were defined in terms of maximizing performance (handling qualities, RMS gust response, transient response, stability margins) over a defined parameter range. Wing load alleviation for the C-5A was chosen as the design problem. The C-5A model was a 79-state, two-control structure with uncertainties assumed to exist in dynamic pressure, structural damping and frequency, and the stability derivative M sub w. Five new techniques (mismatch estimation, uncertainty weighting, finite dimensional inverse, maximum difficulty, dual Lyapunov) were developed. Six existing techniques (additive noise, minimax, multiplant, sensitivity vector augmentation, state dependent noise, residualization) and the mismatch estimation and uncertainty weighting techniques were synthesized and evaluated on the design example. Evaluation and comparison of these techniques indicated that the minimax and uncertainty weighting techniques were superior to the other six, and of these two, uncertainty weighting has lower computational requirements. Techniques based on the three remaining new concepts appear promising and are recommended for further research.

  10. Contain or eradicate? Optimizing the management goal for Australian acacia invasions in the face of uncertainty

    USGS Publications Warehouse

    Moore, J.L.; Runge, M.C.; Webber, B.L.; Wilson, J.R.U.

    2011-01-01

    Aim To identify whether eradication or containment is expected to be the most cost-effective management goal for an isolated invasive population when knowledge about the current extent is uncertain. Location Global and South Africa. Methods We developed a decision analysis framework to analyse the best management goal for an invasive species population (eradication, containment or take no action) when knowledge about the current extent is uncertain. We used value of information analysis to identify when investment in learning about the extent will improve this decision-making and tested the sensitivity of the conclusions to different parameters (e.g. spread rate, maximum extent, and management efficacy and cost). The model was applied to Acacia paradoxa DC, an Australian shrub with an estimated invasive extent of 310ha on Table Mountain, South Africa. Results Under the parameters used, attempting eradication is cost-effective for infestations of up to 777ha. However, if the invasion extent is poorly known, then attempting eradication is only cost-effective for infestations estimated as 296ha or smaller. The value of learning is greatest (maximum of 8% saving) when infestation extent is poorly known and if it is close to the maximum extent for which attempting eradication is optimal. The optimal management action is most sensitive to the probability that the action succeeds (which depends on the extent), with the discount rate and cost of management also important, but spread rate less so. Over a 20-year time-horizon, attempting to eradicate A. paradoxa from South Africa is predicted to cost on average ZAR 8 million if the extent is known, and if our current estimate is poor, ZAR 33.6 million as opposed to ZAR 32.8 million for attempting containment. Main conclusions Our framework evaluates the cost-effectiveness of attempting eradication or containment of an invasive population that takes uncertainty in population extent into account. We show that incorporating uncertainty in the analysis avoids overly optimistic beliefs about the effectiveness of management enabling better management decisions. For A. paradoxa in South Africa, attempting to eradicate is likely to be cost-effective, particularly if resources are allocated to better understand and improve management efficacy. ?? 2011 Blackwell Publishing Ltd.
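    To make the structure of the decision analysis concrete, the sketch below compares the expected cost of attempting eradication versus containment when the true extent is uncertain, by averaging invented cost functions over a belief distribution on extent. Every number and functional form here is a hypothetical placeholder, not a parameter of the study; the point is only that the ranking of actions can change once uncertainty in extent is integrated over.

        import numpy as np

        rng = np.random.default_rng(7)

        # Belief about the true infestation extent (ha) when the estimate is poor:
        # a lognormal centred near a 310 ha point estimate (purely illustrative).
        extent = rng.lognormal(mean=np.log(310.0), sigma=0.6, size=100_000)

        def cost_eradicate(extent_ha):
            # Hypothetical: cost grows with area, plus a penalty weighted by the
            # chance that eradication fails at that extent.
            p_fail = np.clip(extent_ha / 2000.0, 0.0, 1.0)
            return 20_000.0 * extent_ha + p_fail * 15_000_000.0

        def cost_contain(extent_ha):
            # Hypothetical: a largely fixed recurring perimeter-control cost.
            return 9_000_000.0 + 5_000.0 * np.sqrt(extent_ha)

        for label, cost in (("eradicate", cost_eradicate), ("contain", cost_contain)):
            print(label, round(float(cost(extent).mean())))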

  11. Effects of Heterogeneity and Uncertainties in Sources and Initial and Boundary Conditions on Spatiotemporal Variations of Groundwater Levels

    NASA Astrophysics Data System (ADS)

    Zhang, Y. K.; Liang, X.

    2014-12-01

    Effects of aquifer heterogeneity and of uncertainties in the source/sink term and the initial and boundary conditions of a groundwater flow model on the spatiotemporal variations of groundwater level, h(x,t), were investigated. Analytical solutions for the variance and covariance of h(x,t) in an unconfined aquifer described by a linearized Boussinesq equation with a white-noise source/sink and a random transmissivity field were derived. It was found that in a typical aquifer the error in h(x,t) at early time is mainly caused by the random initial condition, and that this error decreases over time, approaching a constant value at later times. The duration during which the effect of the random initial condition is significant may last a few hundred days in most aquifers. The constant error in groundwater level at later times is due to the combined effects of the uncertain source/sink and flux boundary: the closer to the flux boundary, the larger the error. The error caused by the uncertain head boundary is limited to a narrow zone near the boundary, but it remains more or less constant over time. The effect of heterogeneity is to increase the variation of the groundwater level, and the maximum effect occurs close to the constant-head boundary because of the linear mean hydraulic gradient. The correlation of groundwater level decreases with the temporal interval and spatial distance. In addition, heterogeneity enhances the correlation of groundwater level, especially at larger time intervals and small spatial distances.

  12. Numerical Experimentation with Maximum Likelihood Identification in Static Distributed Systems

    NASA Technical Reports Server (NTRS)

    Scheid, R. E., Jr.; Rodriguez, G.

    1985-01-01

    Many important issues in the control of large space structures are intimately related to the fundamental problem of parameter identification. One might also ask how well this identification process can be carried out in the presence of noisy data since no sensor system is perfect. With these considerations in mind the algorithms herein are designed to treat both the case of uncertainties in the modeling and uncertainties in the data. The analytical aspects of maximum likelihood identification are considered in some detail in another paper. The questions relevant to the implementation of these schemes are dealt with, particularly as they apply to models of large space structures. The emphasis is on the influence of the infinite dimensional character of the problem on finite dimensional implementations of the algorithms. Those areas of current and future analysis are highlighted which indicate the interplay between error analysis and possible truncations of the state and parameter spaces.

  13. A spatiotemporal dengue fever early warning model accounting for nonlinear associations with meteorological factors: a Bayesian maximum entropy approach

    NASA Astrophysics Data System (ADS)

    Lee, Chieh-Han; Yu, Hwa-Lung; Chien, Lung-Chang

    2014-05-01

    Dengue fever has been identified as one of the most widespread vector-borne diseases in tropical and subtropical regions. In the last decade, dengue has emerged as an epidemic infectious disease in Taiwan, especially in the southern area, which has high annual incidence. For the purpose of disease prevention and control, an early warning system is urgently needed. Previous studies have shown significant relationships between climate variables, in particular rainfall and temperature, and the temporal epidemic patterns of dengue cases. However, the transmission of dengue fever is a complex interactive process, and most studies have understated its composite space-time effects. This study proposes a one-week-ahead warning system for dengue fever epidemics in southern Taiwan that considers nonlinear associations between weekly dengue cases and meteorological factors across space and time. The early warning system is based on an integration of a distributed lag nonlinear model (DLNM) and stochastic Bayesian Maximum Entropy (BME) analysis. The study identified the most significant meteorological measures, weekly minimum temperature and maximum 24-hour rainfall, with lagged effects of up to 15 weeks on variations in dengue cases under conditions of uncertainty. Subsequently, the combination of the nonlinear lagged effects of climate variables and a space-time dependence function is implemented via a Bayesian framework to predict dengue fever occurrences in southern Taiwan during 2012. The results show that the early warning system is useful for providing spatiotemporal predictions of potential dengue fever outbreaks. In conclusion, the proposed approach can provide a practical disease control tool for environmental regulators seeking more effective strategies for dengue fever prevention.

  14. SU-E-T-657: Quantitative Assessment of Plan Robustness for Helical Tomotherapy for Head and Neck Cancer Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matney, J; Lian, J; Chera, B

    2015-06-15

    Introduction: Geometric uncertainties in daily patient setup can lead to variations in the planned dose, especially when using highly conformal techniques such as helical Tomotherapy. To account for the potential effect of geometric uncertainty, our clinical practice is to expand critical structures by a 3mm margin into planning risk volumes (PRVs). The PRV concept assumes the spatial dose cloud is insensitive to patient positioning. However, no tools currently exist to determine if a Tomotherapy plan is robust to the effects of daily setup variation. We objectively quantified the impact of geometric uncertainties on the 3D doses to critical normal tissues during helical Tomotherapy. Methods: Using a Matlab-based program created and validated by Accuray (Madison, WI), the planned Tomotherapy delivery sinogram was used to recalculate dose on shifted CT datasets. Ten head and neck patients were selected for analysis. To simulate setup uncertainty, the patient anatomy was shifted ±3mm in the longitudinal, lateral and vertical axes. For each potential shift, the recalculated doses to various critical normal tissues were compared to the doses delivered to the PRVs in the original plan. Results: 18 shifted scenarios created from Tomotherapy plans for three patients with head and neck cancers were analyzed. For all simulated setup errors, the maximum doses to the brainstem, spinal cord, parotids and cochlea were within 0.6Gy of the respective original PRV maximum. Despite 3mm setup shifts, the minimum dose delivered to 95% of the CTVs and PTVs was always within 0.4Gy of the original plan. Conclusions: For head and neck sites treated with Tomotherapy, the use of a 3mm PRV expansion provides a reasonable estimate of the dosimetric effects of 3mm setup uncertainties. Similarly, target coverage appears minimally affected by a 3mm setup uncertainty. Data from a larger number of patients will be presented. Future work will include other anatomical sites.

  15. Future changes over the Himalayas: Maximum and minimum temperature

    NASA Astrophysics Data System (ADS)

    Dimri, A. P.; Kumar, D.; Choudhary, A.; Maharana, P.

    2018-03-01

    An assessment of the projections of minimum and maximum air temperature over the Indian Himalayan region (IHR) from the COordinated Regional Climate Downscaling EXperiment-South Asia (hereafter, CORDEX-SA) regional climate model (RCM) experiments has been carried out under two different Representative Concentration Pathway (RCP) scenarios. The major aim of this study is to assess the probable future changes in the minimum and maximum temperature climatology and their long-term trends under different RCPs, along with elevation-dependent warming over the IHR. A number of statistical analyses, such as changes in mean climatology, long-term spatial trends, and probability distribution functions, are carried out to detect signals of climate change. The study also quantifies the uncertainties associated with the different model experiments and their ensemble in space, in time, and for different seasons. The model experiments and their ensemble show a prominent cold bias over the Himalayas for the present climate. However, statistically significant higher warming rates (0.23-0.52 °C/decade) for both minimum and maximum air temperature (Tmin and Tmax) are observed for all seasons under both RCPs. The rate of warming intensifies with increasing radiative forcing across the greenhouse gas scenarios from RCP4.5 to RCP8.5. In addition, a wide range of spatial variability and disagreement in the magnitude of the trend between different models describes the uncertainty associated with the model projections and scenarios. The projected rate of increase of Tmin may destabilize snow formation at higher altitudes in the northern and western parts of the Himalayan region, while the rising trend of Tmax over the southern flank may melt more snow cover. The combined effect of the rising trends of Tmin and Tmax may pose a potential threat to the glacial deposits. The diurnal temperature range (DTR) shows an increasing trend across the entire area, with the highest magnitude under RCP8.5; this increase is driven by the predominant rise of Tmax relative to Tmin.

  16. Aerothermodynamic Design of the Mars Science Laboratory Heatshield

    NASA Technical Reports Server (NTRS)

    Edquist, Karl T.; Dyakonov, Artem A.; Wright, Michael J.; Tang, Chun Y.

    2009-01-01

    Aerothermodynamic design environments are presented for the Mars Science Laboratory entry capsule heatshield. The design conditions are based on Navier-Stokes flowfield simulations on shallow (maximum total heat load) and steep (maximum heat flux, shear stress, and pressure) entry trajectories from a 2009 launch. Boundary layer transition is expected prior to peak heat flux, a first for Mars entry, and the heatshield environments were defined for a fully-turbulent heat pulse. The effects of distributed surface roughness on turbulent heat flux and shear stress peaks are included using empirical correlations. Additional biases and uncertainties are based on computational model comparisons with experimental data and sensitivity studies. The peak design conditions are 197 W/sq cm for heat flux, 471 Pa for shear stress, 0.371 Earth atm for pressure, and 5477 J/sq cm for total heat load. Time-varying conditions at fixed heatshield locations were generated for thermal protection system analysis and flight instrumentation development. Finally, the aerothermodynamic effects of delaying launch until 2011 are previewed.

  17. Analysis of Jupiter's Oval BA: A Streamlined Approach

    NASA Technical Reports Server (NTRS)

    Sussman, Michael G.; Chanover, Nancy J.; Simon-Miller, Amy A.; Vasavada, Ashwin R.; Beebe, Reta F.

    2010-01-01

    We present a novel method of constructing streamlines to derive wind speeds within jovian vortices and demonstrate its application to Oval BA for 2001 pre-reddened Cassini flyby data, 2007 post-reddened New Horizons flyby data, and 1998 Galileo data of precursor Oval DE. Our method, while automated, attempts to combine the advantages of both automated and manual cloud tracking methods. The southern maximum wind speed of Oval BA does not show significant changes between these data sets to within our measurement uncertainty. The northern maximum does appear to have increased in strength during this time interval, which likely correlates with the oval's return to a symmetric shape. We demonstrate how the use of closed streamlines can provide measurements of vorticity averaged over the encircled area with no a priori assumptions concerning oval shape. We find increased averaged interior vorticity between pre- and post-reddened Oval BA, with the precursor Oval DE occupying a middle value of vorticity between these two.

  18. Estimating contaminant loads in rivers: An application of adjusted maximum likelihood to type 1 censored data

    USGS Publications Warehouse

    Cohn, Timothy A.

    2005-01-01

    This paper presents an adjusted maximum likelihood estimator (AMLE) that can be used to estimate fluvial transport of contaminants, like phosphorus, that are subject to censoring because of analytical detection limits. The AMLE is a generalization of the widely accepted minimum variance unbiased estimator (MVUE), and Monte Carlo experiments confirm that it shares essentially all of the MVUE's desirable properties, including high efficiency and negligible bias. In particular, the AMLE exhibits substantially less bias than alternative censored‐data estimators such as the MLE (Tobit) or the MLE followed by a jackknife. As with the MLE and the MVUE the AMLE comes close to achieving the theoretical Frechet‐Cramér‐Rao bounds on its variance. This paper also presents a statistical framework, applicable to both censored and complete data, for understanding and estimating the components of uncertainty associated with load estimates. This can serve to lower the cost and improve the efficiency of both traditional and real‐time water quality monitoring.
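    The censored-data likelihood being generalized here can be sketched in its simplest (Tobit-type) form: detected values contribute a density term and censored values contribute the probability of falling below the detection limit. The snippet below fits a left-censored log-normal by maximum likelihood on synthetic data; it illustrates the plain MLE that the AMLE adjusts, not the AMLE itself, and all names and numbers are assumptions.

        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import minimize

        def censored_lognormal_nll(params, obs, detected, detection_limit):
            """Negative log-likelihood for left-censored log-normal concentrations."""
            mu, log_sigma = params
            sigma = np.exp(log_sigma)
            ll_detected = norm.logpdf(np.log(obs[detected]), mu, sigma).sum()
            ll_censored = norm.logcdf(np.log(detection_limit[~detected]), mu, sigma).sum()
            return -(ll_detected + ll_censored)

        # Synthetic data: true log-mean 0.0, log-sd 1.0, detection limit 0.5
        rng = np.random.default_rng(3)
        conc = np.exp(rng.normal(0.0, 1.0, size=500))
        dl = np.full(conc.shape, 0.5)
        detected = conc >= dl
        fit = minimize(censored_lognormal_nll, x0=[0.0, 0.0], args=(conc, detected, dl))
        print(fit.x[0], np.exp(fit.x[1]))        # estimated log-mean and log-sd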

  19. Are satellite products good proxies for gauge precipitation over Singapore?

    NASA Astrophysics Data System (ADS)

    Hur, Jina; Raghavan, Srivatsan V.; Nguyen, Ngoc Son; Liong, Shie-Yui

    2018-05-01

    The uncertainties in two high-resolution satellite precipitation products (TRMM 3B42 v7.0 and GSMaP v5.222) were investigated by comparing them against rain gauge observations over Singapore at sub-daily scales. The satellite-borne precipitation products are assessed in terms of seasonal, monthly and daily variations, the diurnal cycle, and extreme precipitation over a 10-year period (2000-2010). Results indicate that the uncertainty in extreme precipitation is higher in GSMaP than in TRMM, possibly due to issues such as the satellite merging algorithm, the finer spatio-temporal scale of high-intensity precipitation, and the satellite swath time. Such discrepancies between satellite-borne and gauge-based precipitation at sub-daily scales can lead to distorted analyses of precipitation characteristics and/or application model results. Overall, both satellite products are unable to capture the observed extremes and provide a good agreement with observations only at coarse time scales. Also, the satellite products agree well with the late-afternoon maximum and heavier rainfall of the gauge-based data in the winter season, when the Intertropical Convergence Zone (ITCZ) is located over Singapore. However, they do not reproduce the gauge-observed diurnal cycle in summer. The disagreement in summer could be attributed to the dominant satellite overpass time (about 14:00 SGT) being later than the diurnal peak time (about 09:00 SGT) of gauge precipitation. From the analyses of extreme precipitation indices, it is inferred that both satellite datasets tend to overestimate light rain and its frequency but underestimate high-intensity precipitation and the length of dry spells. This quantification of their uncertainty is useful in many respects, especially because these satellite products are relied upon over places where there are no good ground data for comparison. This has serious implications for climate studies, such as model evaluations and, in particular, climate-model-simulated future projections, where information on precipitation extremes needs to be reliable as it is highly crucial for adaptation and mitigation.

  20. Observations of Rosetta Target (21) Lutetia with Keck and Gemini Adaptive Optics

    NASA Astrophysics Data System (ADS)

    Conrad, A. R.; Merline, W. J.; Drummond, J.; Carry, B.; Tamblyn, P. M.; Chapman, C. R.; Dumas, C.; Weaver, H. A.

    2009-12-01

    In support of the NASA/ESA Rosetta mission’s plans to observe asteroid (21) Lutetia during a 2010 July flyby, and in conjunction with a larger ground-based plus HST campaign to support this mission, we observed Lutetia from Keck and Gemini-North during several nights spanning 2008 Oct through 2009 Jan. Observations were made using adaptive optics in the near-IR, primarily at K-band (2.1 micron), and were timed to coincide with the asteroid's most recent opposition at a distance of about 1.4 AU. From these data, we determined Lutetia’s triaxial size and shape to be 132 x 101 x 76 km, with maximum expected uncertainties of 4 x 3 x 31 km. The spin pole is found to be at (RA, Dec) = (48, +9) deg or ecliptic (long, lat) = (49,-8) deg, with a formal uncertainty radius (not including systematics) of 3 deg. We have calibrated our technique of deriving dimensions of asteroids from AO images against Pluto and 4 satellites of Saturn with accurate diameters, and we expect that our systematics (included in the size uncertainties above) are no more than 3%. We also searched for satellites and our preliminary results indicate no detection of a satellite larger than about 1 km over a significant fraction of the Hill sphere (10-240 asteroid radii). Improved limits are expected from a more refined analysis. We are grateful for telescope time made available to us by S. Kulkarni and M. Busch (Cal Tech) for a portion of this dataset. We also thank our collaborators on Team Keck, the Keck science staff, for making possible some of these observations and for observing time granted at Gemini under NOAO time allocation. [Figure: Plane-of-sky short and long axes of (21) Lutetia taken from Keck AO images on 2008 Dec 2.]

  1. Effect of Streamflow Forecast Uncertainty on Real-Time Reservoir Operation

    NASA Astrophysics Data System (ADS)

    Zhao, T.; Cai, X.; Yang, D.

    2010-12-01

    Various hydrological forecast products have been applied to real-time reservoir operation, including deterministic streamflow forecast (DSF), DSF-based probabilistic streamflow forecast (DPSF), and ensemble streamflow forecast (ESF), which represent forecast uncertainty in the form of deterministic forecast error, deterministic forecast error-based uncertainty distribution, and ensemble forecast errors, respectively. Compared to previous studies that treat these forecast products as ad hoc inputs for reservoir operation models, this paper attempts to model the uncertainties involved in the various forecast products and explores their effect on real-time reservoir operation decisions. In hydrology, there are various indices reflecting the magnitude of streamflow forecast uncertainty; meanwhile, few models illustrate the forecast uncertainty evolution process. This research introduces the Martingale Model of Forecast Evolution (MMFE) from supply chain management and justifies its assumptions for quantifying the evolution of uncertainty in streamflow forecasts as time progresses. Based on MMFE, this research simulates the evolution of forecast uncertainty in DSF, DPSF, and ESF, and applies the reservoir operation models (dynamic programming, DP; stochastic dynamic programming, SDP; and standard operation policy, SOP) to assess the effect of different forms of forecast uncertainty on real-time reservoir operation. Through a hypothetical single-objective real-time reservoir operation model, the results illustrate that forecast uncertainty exerts significant effects. Reservoir operation efficiency, as measured by a utility function, decreases as the forecast uncertainty increases. Meanwhile, these effects also depend on the type of forecast product being used. In general, the utility of reservoir operation with ESF is nearly as high as the utility obtained with a perfect forecast; the utilities of DSF and DPSF are similar to each other but not as efficient as ESF. Moreover, streamflow variability and reservoir capacity can change the magnitude of the effects of forecast uncertainty, but not the relative merit of DSF, DPSF, and ESF. [Figure: Schematic diagram of the increase in forecast uncertainty with forecast lead-time and the dynamic updating property of real-time streamflow forecast.]
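    The forecast-evolution model invoked here can be sketched briefly. The snippet below simulates one realization of the additive Martingale Model of Forecast Evolution: each period the forecast of a future inflow is updated by an independent, zero-mean increment, so the forecast sequence is a martingale and the remaining uncertainty shrinks as lead time decreases. The horizon, variances, and inflow scale are illustrative assumptions, not values from the study.

        import numpy as np

        rng = np.random.default_rng(11)

        def simulate_mmfe(prior_mean, n_stages, increment_std):
            """One realization of additive-MMFE forecast updates for a single target period.
            Returns the forecast issued after each update; the last entry (lead time zero)
            equals the realized inflow."""
            updates = rng.normal(0.0, increment_std, size=n_stages)
            forecasts = prior_mean + np.cumsum(updates)
            return forecasts, forecasts[-1]

        # Hypothetical weekly inflow forecast over an 8-week horizon
        forecasts, realized = simulate_mmfe(prior_mean=500.0, n_stages=8, increment_std=25.0)
        print(np.round(forecasts, 1), round(float(realized), 1))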

  2. Probabilistic Structural Evaluation of Uncertainties in Radiator Sandwich Panel Design

    NASA Technical Reports Server (NTRS)

    Kuguoglu, Latife; Ludwiczak, Damian

    2006-01-01

    The Jupiter Icy Moons Orbiter (JIMO) Space System is part of the NASA's Prometheus Program. As part of the JIMO engineering team at NASA Glenn Research Center, the structural design of the JIMO Heat Rejection Subsystem (HRS) is evaluated. An initial goal of this study was to perform sensitivity analyses to determine the relative importance of the input variables on the structural responses of the radiator panel. The desire was to let the sensitivity analysis information identify the important parameters. The probabilistic analysis methods illustrated here support this objective. The probabilistic structural performance evaluation of a HRS radiator sandwich panel was performed. The radiator panel structural performance was assessed in the presence of uncertainties in the loading, fabrication process variables, and material properties. The stress and displacement contours of the deterministic structural analysis at mean probability was performed and results presented. It is followed by a probabilistic evaluation to determine the effect of the primitive variables on the radiator panel structural performance. Based on uncertainties in material properties, structural geometry and loading, the results of the displacement and stress analysis are used as an input file for the probabilistic analysis of the panel. The sensitivity of the structural responses, such as maximum displacement and maximum tensile and compressive stresses of the facesheet in x and y directions and maximum VonMises stresses of the tube, to the loading and design variables is determined under the boundary condition where all edges of the radiator panel are pinned. Based on this study, design critical material and geometric parameters of the considered sandwich panel are identified.

  3. Evaluation of Parameter Uncertainty Reduction in Groundwater Flow Modeling Using Multiple Environmental Tracers

    NASA Astrophysics Data System (ADS)

    Arnold, B. W.; Gardner, P.

    2013-12-01

    Calibration of groundwater flow models for the purpose of evaluating flow and aquifer heterogeneity typically uses observations of hydraulic head in wells and appropriate boundary conditions. Environmental tracers have a wide variety of decay rates and input signals in recharge, resulting in a potentially broad source of additional information to constrain flow rates and heterogeneity. A numerical study was conducted to evaluate the reduction in uncertainty during model calibration using observations of various environmental tracers and combinations of tracers. A synthetic data set was constructed by simulating steady groundwater flow and transient tracer transport in a high-resolution, 2-D aquifer with heterogeneous permeability and porosity using the PFLOTRAN software code. Data on pressure and tracer concentration were extracted at well locations and then used as observations for automated calibration of a flow and transport model using the pilot point method and the PEST code. Optimization runs were performed to estimate parameter values of permeability at 30 pilot points in the model domain for cases using 42 observations of: 1) pressure, 2) pressure and CFC11 concentrations, 3) pressure and Ar-39 concentrations, and 4) pressure, CFC11, Ar-39, tritium, and He-3 concentrations. Results show significantly lower uncertainty, as indicated by the 95% linear confidence intervals, in permeability values at the pilot points for cases including observations of environmental tracer concentrations. The average linear uncertainty range for permeability at the pilot points using pressure observations alone is 4.6 orders of magnitude, using pressure and CFC11 concentrations is 1.6 orders of magnitude, using pressure and Ar-39 concentrations is 0.9 order of magnitude, and using pressure, CFC11, Ar-39, tritium, and He-3 concentrations is 1.0 order of magnitude. Data on Ar-39 concentrations result in the greatest parameter uncertainty reduction because its half-life of 269 years is similar to the range of transport times (hundreds to thousands of years) in the heterogeneous synthetic aquifer domain. The slightly higher uncertainty range for the case using all of the environmental tracers simultaneously is probably due to structural errors in the model introduced by the pilot point regularization scheme. It is concluded that maximum information and uncertainty reduction for constraining a groundwater flow model is obtained using an environmental tracer whose half-life is well matched to the range of transport times through the groundwater flow system. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  4. INTERCOMPARISON ON THE MEASUREMENT OF THE QUANTITY PERSONAL DOSE EQUIVALENT HP(10) IN PHOTON FIELDS. LINEARITY DEPENDENCE, LOWER LIMIT OF DETECTION AND UNCERTAINTY IN MEASUREMENT OF DOSIMETRY SYSTEMS OF INDIVIDUAL MONITORING SERVICES IN GABON AND GHANA.

    PubMed

    Ondo Meye, P; Schandorf, C; Amoako, J K; Manteaw, P O; Amoatey, E A; Adjei, D N

    2017-12-01

    An inter-comparison study was conducted to assess the capability of dosimetry systems of individual monitoring services (IMSs) in Gabon and Ghana to measure personal dose equivalent Hp(10) in photon fields. The performance indicators assessed were the lower limit of detection, linearity and uncertainty in measurement. Monthly and quarterly recording levels were proposed with corresponding values of 0.08 and 0.025 mSv, and 0.05 and 0.15 mSv for the TLD and OSL systems, respectively. The linearity dependence of the dosimetry systems was evaluated following the requirement given in the Standard IEC 62387 of the International Electrotechnical Commission (IEC). The results obtained for the two systems were satisfactory. The procedure followed for the uncertainty assessment is the one given in the IEC technical report TR62461. The maximum relative overall uncertainties, in absolute value, expressed in terms of Hp(10), for the TL dosimetry system Harshaw 6600, are 44.35% for true doses below 0.40 mSv and 36.33% for true doses ≥0.40 mSv. For the OSL dosimetry system microStar, the maximum relative overall uncertainties, in absolute value, are 52.17% for true doses below 0.40 mSv and 37.43% for true doses ≥0.40 mSv. These results are in good agreement with the requirements for accuracy of the International Commission on Radiological Protection. When expressing the uncertainties in terms of response, comparison with the IAEA requirements for overall accuracy showed that the uncertainty results were also acceptable. The values of Hp(10) directly measured by the two dosimetry systems showed a significant underestimation for the Harshaw 6600 system, and a slight overestimation for the microStar system. After correction for linearity of the measured doses, the two dosimetry systems gave better and comparable results.

  5. Quantifying the sources of uncertainty in an ensemble of hydrological climate-impact projections

    NASA Astrophysics Data System (ADS)

    Aryal, Anil; Shrestha, Sangam; Babel, Mukand S.

    2018-01-01

    The objective of this paper is to quantify the various sources of uncertainty in the assessment of climate change impact on hydrology in the Tamakoshi River Basin, located in the north-eastern part of Nepal. Multiple climate and hydrological models were used to simulate future climate conditions and discharge in the basin. The simulated results of future climate and river discharge were analysed for the quantification of sources of uncertainty using two-way and three-way ANOVA. The results showed that temperature and precipitation in the study area are projected to change in near- (2010-2039), mid- (2040-2069) and far-future (2070-2099) periods. Maximum temperature is likely to rise by 1.75 °C under Representative Concentration Pathway (RCP) 4.5 and by 3.52 °C under RCP 8.5. Similarly, the minimum temperature is expected to rise by 2.10 °C under RCP 4.5 and by 3.73 °C under RCP 8.5 by the end of the twenty-first century. Similarly, the precipitation in the study area is expected to change by - 2.15% under RCP 4.5 and - 2.44% under RCP 8.5 scenarios. The future discharge in the study area was projected using two hydrological models, viz. Soil and Water Assessment Tool (SWAT) and Hydrologic Engineering Center's Hydrologic Modelling System (HEC-HMS). The SWAT model projected discharge is expected to change by small amount, whereas HEC-HMS model projected considerably lower discharge in future compared to the baseline period. The results also show that future climate variables and river hydrology contain uncertainty due to the choice of climate models, RCP scenarios, bias correction methods and hydrological models. During wet days, more uncertainty is observed due to the use of different climate models, whereas during dry days, the use of different hydrological models has a greater effect on uncertainty. Inter-comparison of the impacts of different climate models reveals that the REMO climate model shows higher uncertainty in the prediction of precipitation and, consequently, in the prediction of future discharge and maximum probable flood.

  6. Using Four Downscaling Techniques to Characterize Uncertainty in Updating Intensity-Duration-Frequency Curves Under Climate Change

    NASA Astrophysics Data System (ADS)

    Cook, L. M.; Samaras, C.; McGinnis, S. A.

    2017-12-01

    Intensity-duration-frequency (IDF) curves are a common input to urban drainage design, and are used to represent extreme rainfall in a region. As rainfall patterns shift into a non-stationary regime as a result of climate change, these curves will need to be updated with future projections of extreme precipitation. Many regions have begun to update these curves to reflect the trends from downscaled climate models; however, few studies have compared the methods for doing so, as well as the uncertainty that results from the selection of the native grid scale and temporal resolution of the climate model. This study examines the variability in updated IDF curves for Pittsburgh using four different methods for adjusting gridded regional climate model (RCM) outputs into station scale precipitation extremes: (1) a simple change factor applied to observed return levels, (2) a naïve adjustment of stationary and non-stationary Generalized Extreme Value (GEV) distribution parameters, (3) a transfer function of the GEV parameters from the annual maximum series, and (4) kernel density distribution mapping bias correction of the RCM time series. Return level estimates (rainfall intensities) and confidence intervals from these methods for the 1-hour to 48-hour duration are tested for sensitivity to the underlying spatial and temporal resolution of the climate ensemble from the NA-CORDEX project, as well as, the future time period for updating. The first goal is to determine if uncertainty is highest for: (i) the downscaling method, (ii) the climate model resolution, (iii) the climate model simulation, (iv) the GEV parameters, or (v) the future time period examined. Initial results of the 6-hour, 10-year return level adjusted with the simple change factor method using four climate model simulations of two different spatial resolutions show that uncertainty is highest in the estimation of the GEV parameters. The second goal is to determine if complex downscaling methods and high-resolution climate models are necessary for updating, or if simpler methods and lower resolution climate models will suffice. The final results can be used to inform the most appropriate method and climate model resolutions to use for updating IDF curves for urban drainage design.
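    Method (1), the simple change factor applied to observed return levels, can be sketched with a GEV fit to annual maximum series. The example below is illustrative only: the rainfall data are synthetic, the parameter values are made up, and scipy's genextreme is used as a stand-in for whatever fitting procedure the study applied.

      import numpy as np
      from scipy.stats import genextreme

      rng = np.random.default_rng(1)

      # Hypothetical observed and RCM-simulated annual maximum 6-hour rainfall (mm)
      obs_ams = genextreme.rvs(c=-0.1, loc=30, scale=8, size=40, random_state=rng)
      rcm_hist = genextreme.rvs(c=-0.1, loc=28, scale=7, size=40, random_state=rng)
      rcm_fut = genextreme.rvs(c=-0.1, loc=33, scale=9, size=40, random_state=rng)

      def return_level(ams, T):
          """GEV return level for return period T (years) from an annual maximum series."""
          c, loc, scale = genextreme.fit(ams)
          return genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)

      T = 10  # 10-year return period
      obs_rl = return_level(obs_ams, T)
      change_factor = return_level(rcm_fut, T) / return_level(rcm_hist, T)
      updated_rl = obs_rl * change_factor   # method (1): scale the observed return level
      print(f"observed {T}-yr level: {obs_rl:.1f} mm, "
            f"change factor: {change_factor:.2f}, updated: {updated_rl:.1f} mm")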

  7. Probabilistic assessment of the impact of coal seam gas development on groundwater: Surat Basin, Australia

    NASA Astrophysics Data System (ADS)

    Cui, Tao; Moore, Catherine; Raiber, Matthias

    2018-05-01

    Modelling cumulative impacts of basin-scale coal seam gas (CSG) extraction is challenging due to the long time frames and spatial extent over which impacts occur combined with the need to consider local-scale processes. The computational burden of such models limits the ability to undertake calibration and sensitivity and uncertainty analyses. A framework is presented that integrates recently developed methods and tools to address the computational burdens of an assessment of drawdown impacts associated with rapid CSG development in the Surat Basin, Australia. The null space Monte Carlo method combined with singular value decomposition (SVD)-assisted regularisation was used to analyse the uncertainty of simulated drawdown impacts. The study also describes how the computational burden of assessing local-scale impacts was mitigated by adopting a novel combination of a nested modelling framework which incorporated a model emulator of drawdown in dual-phase flow conditions, and a methodology for representing local faulting. This combination provides a mechanism to support more reliable estimates of regional CSG-related drawdown predictions. The study indicates that uncertainties associated with boundary conditions are reduced significantly when expressing differences between scenarios. The results are analysed and distilled to enable the easy identification of areas where the simulated maximum drawdown impacts could exceed trigger points associated with legislative `make good' requirements; trigger points require that either an adjustment in the development scheme or other measures are implemented to remediate the impact. This report contributes to the currently small body of work that describes modelling and uncertainty analyses of CSG extraction impacts on groundwater.

  8. Performance evaluation of a smart buffer control at a wastewater treatment plant.

    PubMed

    van Daal-Rombouts, P; Benedetti, L; de Jonge, J; Weijers, S; Langeveld, J

    2017-11-15

    Real time control (RTC) is increasingly seen as a viable method to optimise the functioning of wastewater systems. Model exercises and case studies reported in literature claim a positive impact of RTC based on results without uncertainty analysis and flawed evaluation periods. This paper describes two integrated RTC strategies at the wastewater treatment plant (WWTP) Eindhoven, the Netherlands, that aim to improve the use of the available tanks at the WWTP and storage in the contributing catchments to reduce the impact on the receiving water. For the first time it is demonstrated that a significant improvement can be achieved through the application of RTC in practice. The Storm Tank Control is evaluated based on measurements and reduces the number of storm water settling tank discharges by 44% and the discharged volume by an estimated 33%, decreasing dissolved oxygen depletion in the river. The Primary Clarifier Control is evaluated based on model simulations. The maximum event NH4 concentration in the effluent reduced on average 19% for large events, while the load reduced 20%. For all 31 events the reductions are 11 and 4% respectively. Reductions are significant taking uncertainties into account, while using representative evaluation periods.

  9. Impacts of Process and Prediction Uncertainties on Projected Hanford Waste Glass Amount

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gervasio, V.; Kim, D. S.; Vienna, J. D.

    Analyses were performed to evaluate the impacts of using the advanced glass models, constraints, and uncertainty descriptions on projected Hanford glass mass. The maximum allowable waste oxide loading (WOL) was estimated for waste compositions while simultaneously satisfying all applicable glass property and composition constraints with sufficient confidence. Different components of prediction and composition/process uncertainties were systematically included in the calculations to evaluate their impacts on glass mass. The analyses estimated the production of 23,360 MT of immobilized high-level waste (IHLW) glass when no uncertainties were taken into account. Accounting for prediction and composition/process uncertainties resulted in 5.01 relative percent increase in estimated glass mass of 24,531 MT. Roughly equal impacts were found for prediction uncertainties (2.58 RPD) and composition/process uncertainties (2.43 RPD). The immobilized low-activity waste (ILAW) mass was predicted to be 282,350 MT without uncertainty and with waste loading “line” rules in place. Accounting for prediction and composition/process uncertainties resulted in only 0.08 relative percent increase in estimated glass mass of 282,562 MT. Without application of line rules the glass mass decreases by 10.6 relative percent (252,490 MT) for the case with no uncertainties. Addition of prediction uncertainties increases glass mass by 1.32 relative percent and the addition of composition/process uncertainties increases glass mass by an additional 7.73 relative percent (9.06 relative percent increase combined). The glass mass estimate without line rules (275,359 MT) was 2.55 relative percent lower than that with the line rules (282,562 MT), after accounting for all applicable uncertainties.

  10. Accounting for downscaling and model uncertainty in fine-resolution seasonal climate projections over the Columbia River Basin

    NASA Astrophysics Data System (ADS)

    Ahmadalipour, Ali; Moradkhani, Hamid; Rana, Arun

    2018-01-01

    Climate change is expected to have severe impacts on natural systems as well as various socio-economic aspects of human life. This has urged scientific communities to improve the understanding of future climate and reduce the uncertainties associated with projections. In the present study, ten statistically downscaled CMIP5 GCMs at 1/16th deg. spatial resolution from two different downscaling procedures are utilized over the Columbia River Basin (CRB) to assess the changes in climate variables and characterize the associated uncertainties. Three climate variables, i.e. precipitation, maximum temperature, and minimum temperature, are studied for the historical period of 1970-2000 as well as future period of 2010-2099, simulated with representative concentration pathways of RCP4.5 and RCP8.5. Bayesian Model Averaging (BMA) is employed to reduce the model uncertainty and develop a probabilistic projection for each variable in each scenario. Historical comparison of long-term attributes of GCMs and observation suggests a more accurate representation for BMA than individual models. Furthermore, BMA projections are used to investigate future seasonal to annual changes of climate variables. Projections indicate significant increase in annual precipitation and temperature, with varied degree of change across different sub-basins of CRB. We then characterized uncertainty of future projections for each season over CRB. Results reveal that model uncertainty is the main source of uncertainty, among others. However, downscaling uncertainty considerably contributes to the total uncertainty of future projections, especially in summer. On the contrary, downscaling uncertainty appears to be higher than scenario uncertainty for precipitation.

  11. Objectively Quantifying Radiation Esophagitis With Novel Computed Tomography–Based Metrics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niedzielski, Joshua S., E-mail: jsniedzielski@mdanderson.org; University of Texas Houston Graduate School of Biomedical Science, Houston, Texas; Yang, Jinzhong

    Purpose: To study radiation-induced esophageal expansion as an objective measure of radiation esophagitis in patients with non-small cell lung cancer (NSCLC) treated with intensity modulated radiation therapy. Methods and Materials: Eighty-five patients had weekly intra-treatment CT imaging and esophagitis scoring according to Common Terminology Criteria for Adverse Events 4.0 (24 Grade 0, 45 Grade 2, and 16 Grade 3). Nineteen esophageal expansion metrics based on mean, maximum, spatial length, and volume of expansion were calculated as voxel-based relative volume change, using the Jacobian determinant from deformable image registration between the planning and weekly CTs. An anatomic variability correction method was validated and applied to these metrics to reduce uncertainty. An analysis of expansion metrics and radiation esophagitis grade was conducted using normal tissue complication probability from univariate logistic regression and Spearman rank for grade 2 and grade 3 esophagitis endpoints, as well as the timing of expansion and esophagitis grade. Metrics' performance in classifying esophagitis was tested with receiver operating characteristic analysis. Results: Expansion increased with esophagitis grade. Thirteen of 19 expansion metrics had receiver operating characteristic area under the curve values >0.80 for both grade 2 and grade 3 esophagitis endpoints, with the highest performance from maximum axial expansion (MaxExp1) and esophageal length with axial expansion ≥30% (LenExp30%) with area under the curve values of 0.93 and 0.91 for grade 2, 0.90 and 0.90 for grade 3 esophagitis, respectively. Conclusions: Esophageal expansion may be a suitable objective measure of esophagitis, particularly maximum axial esophageal expansion and esophageal length with axial expansion ≥30%, with 2.1 Jacobian value and 98.6 mm as the metric value for 50% probability of grade 3 esophagitis. The uncertainty in esophageal Jacobian calculations can be reduced with anatomic correction methods.
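    The expansion metrics rest on the voxel-wise Jacobian determinant of the deformation between the planning and weekly CTs, i.e. the local relative volume change. The numpy sketch below shows one way to compute such a map from a displacement field on a regular grid; the field, grid, and names are synthetic illustrations, not the authors' registration pipeline.

      import numpy as np

      def jacobian_determinant(disp, spacing=(1.0, 1.0, 1.0)):
          """Voxel-wise Jacobian determinant of the mapping x -> x + u(x).
          disp has shape (3, nz, ny, nx): displacement components in the same
          physical units as the voxel spacing."""
          grads = np.empty((3, 3) + disp.shape[1:])
          for i in range(3):            # displacement component
              for j in range(3):        # differentiation axis
                  grads[i, j] = np.gradient(disp[i], spacing[j], axis=j)
          jac = np.eye(3).reshape(3, 3, 1, 1, 1) + grads   # F = I + grad(u)
          # move the 3x3 axes last so the determinant is taken voxel-wise
          return np.linalg.det(np.moveaxis(jac, (0, 1), (-2, -1)))

      # Synthetic example: a uniform 10% axial stretch gives a determinant of ~1.1
      nz, ny, nx = 8, 16, 16
      z = np.arange(nz)[:, None, None] * np.ones((nz, ny, nx))
      disp = np.zeros((3, nz, ny, nx))
      disp[0] = 0.1 * z                 # u_z = 0.1 * z
      print(round(jacobian_determinant(disp).mean(), 3))   # ~1.1 (10% volume expansion)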

  12. Mixed H2/H Infinity Optimization with Multiple H Infinity Constraints

    DTIC Science & Technology

    1994-06-01

    ... The 2-norm is the energy, and the ∞-norm is the maximum magnitude of the signal. A good measure of performance is ... the system 2-norm is not good for uncertainty management ... is conservative, especially when the uncertainty model is highly structured. ... Although the objective was to design a pure regulator, from Table 5-1 we see that the H2 controller provides good ...

  13. Assessing the Responses of Streamflow to Pollution Release in South Carolina

    NASA Astrophysics Data System (ADS)

    Maze, G.; Chovancak, N. A.; Samadi, S. Z.

    2017-12-01

    The purpose of this investigation was to examine the effects of various stream flows on the transport of a pollutant downstream and to evaluate the uncertainty associated with using a single stream flow value when the true flow is unknown in the model. The area used for this study was Horse Creek in South Carolina, where a chlorine spill occurred in the past as a result of a train derailment in Graniteville, SC. In the example scenario used, the chlorine gas pollutant was released into the environment, where it killed plants, contaminated groundwater, and caused evacuation of the city. Tracking the movement and concentrations at various points downstream in the river system is crucial to understanding how a single accidental pollutant release can affect the surrounding areas. Because of the lack of available real-time data, this emergency response model uses historical monthly averages; however, these monthly averages do not reflect how widely the flow can vary within that month. Therefore, the assumption to use the historical monthly average flow data may not be accurate, and this investigation aims at quantifying the uncertainty associated with using a single stream flow value when the true stream flow may vary greatly. For the purpose of this investigation, the event in Graniteville was used as a case study to evaluate the emergency response model. This investigation was conducted by adjusting the STREAM II V7 program developed by Savannah River National Laboratory (SRNL) to model the confluence of Horse Creek and the Savannah River system. This adjusted program was utilized to track the progress of the chlorine pollutant release and examine how it was transported downstream. By adjusting this program, the concentrations and time taken to reach various points downstream of the release were obtained and can be used not only to analyze this particular pollutant release in Graniteville but also, with further adjustment, as a technical tool for emergency responders in future accidents. Further, the program was run with monthly maximum, minimum, and average advective flows and an uncertainty analysis was conducted to examine the error associated with the input data. These results underscore the profound influence that streamflow magnitudes (maximum, minimum, and average) have on shaping downstream water quality.
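    The sensitivity to the assumed flow can be appreciated with a back-of-envelope plug-flow calculation: arrival time scales inversely with velocity and a fully mixed concentration scales inversely with discharge. The numbers below are entirely hypothetical and have no connection to the STREAM II model or the Graniteville event.

      # Illustrative sensitivity of arrival time and dilution to the assumed flow.
      reach_length_m = 20_000.0        # assumed distance from spill to a downstream intake
      cross_section_m2 = 150.0         # assumed mean channel cross-sectional area
      mass_kg = 5_000.0                # assumed pollutant mass released

      for label, q in [("monthly minimum", 30.0), ("monthly average", 90.0),
                       ("monthly maximum", 250.0)]:       # discharge in m^3/s
          velocity = q / cross_section_m2                 # plug-flow advection speed
          travel_h = reach_length_m / velocity / 3600.0
          # crude fully mixed concentration if the mass is spread over one hour of flow
          conc_mg_L = mass_kg * 1e6 / (q * 3600.0 * 1000.0)
          print(f"{label:16s}: Q={q:6.0f} m3/s  arrival ~ {travel_h:5.1f} h  "
                f"peak ~ {conc_mg_L:6.1f} mg/L")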

  14. Engineering design constraints of the lunar surface environment

    NASA Technical Reports Server (NTRS)

    Morrison, D. A.

    1992-01-01

    Living and working on the lunar surface will be difficult. Design of habitats, machines, tools, and operational scenarios in order to allow maximum flexibility in human activity will require paying attention to certain constraints imposed by conditions at the surface and the characteristics of lunar material. Primary design drivers for habitat, crew health and safety, and crew equipment are: ionizing radiation, the meteoroid flux, and the thermal environment. Secondary constraints for engineering derive from: the physical and chemical properties of lunar surface materials, rock distributions and regolith thicknesses, topography, electromagnetic properties, and seismicity. Protection from ionizing radiation is essential for crew health and safety. The total dose acquired by a crew member will be the sum of the dose acquired during EVA time (when shielding will be least) plus the dose acquired during time spent in the habitat (when shielding will be maximum). Minimizing the dose acquired in the habitat extends the time allowable for EVA's before a dose limit is reached. Habitat shielding is enabling, and higher precision in predicting secondary fluxes produced in shielding material would be desirable. Means for minimizing dose during a solar flare event while on extended EVA will be essential. Early warning of the onset of flare activity (at least a half-hour is feasible) will dictate the time available to take mitigating steps. Warning capability affects design of rovers (or rover tools) and site layout. Uncertainty in solar flare timing is a design constraint that points to the need for quickly accessible or constructible safe havens.

  15. Engineering design constraints of the lunar surface environment

    NASA Astrophysics Data System (ADS)

    Morrison, D. A.

    1992-02-01

    Living and working on the lunar surface will be difficult. Design of habitats, machines, tools, and operational scenarios in order to allow maximum flexibility in human activity will require paying attention to certain constraints imposed by conditions at the surface and the characteristics of lunar material. Primary design drivers for habitat, crew health and safety, and crew equipment are: ionizing radiation, the meteoroid flux, and the thermal environment. Secondary constraints for engineering derive from: the physical and chemical properties of lunar surface materials, rock distributions and regolith thicknesses, topography, electromagnetic properties, and seismicity. Protection from ionizing radiation is essential for crew health and safety. The total dose acquired by a crew member will be the sum of the dose acquired during EVA time (when shielding will be least) plus the dose acquired during time spent in the habitat (when shielding will be maximum). Minimizing the dose acquired in the habitat extends the time allowable for EVA's before a dose limit is reached. Habitat shielding is enabling, and higher precision in predicting secondary fluxes produced in shielding material would be desirable. Means for minimizing dose during a solar flare event while on extended EVA will be essential. Early warning of the onset of flare activity (at least a half-hour is feasible) will dictate the time available to take mitigating steps. Warning capability affects design of rovers (or rover tools) and site layout. Uncertainty in solar flare timing is a design constraint that points to the need for quickly accessible or constructible safe havens.

  16. To what extent does variability of historical rainfall series influence extreme event statistics of sewer system surcharge and overflows?

    PubMed

    Schaarup-Jensen, K; Rasmussen, M R; Thorndahl, S

    2009-01-01

    In urban drainage modelling, long-term extreme statistics have become an important basis for decision-making, e.g. in connection with renovation projects. Therefore it is of great importance to minimize the uncertainties with regard to long-term prediction of maximum water levels and combined sewer overflow (CSO) in drainage systems. These uncertainties originate from large uncertainties regarding rainfall inputs, parameters, and assessment of return periods. This paper investigates how the choice of rainfall time series influences the extreme event statistics of maximum water levels in manholes and CSO volumes. Traditionally, long-term rainfall series from a local rain gauge are unavailable. In the present case study, however, long and local rain series are available. Two rain gauges have recorded events for approximately nine years at two locations within the catchment. Besides these two gauges, another seven gauges are located at most 20 kilometers from the catchment. All gauges are included in the Danish national rain gauge system which was launched in 1976. The paper describes to what extent the extreme event statistics based on these nine series diverge from each other and how this diversity can be handled, e.g. by introducing an "averaging procedure" based on the variability within the set of statistics. All simulations are performed by means of the MOUSE LTS model.
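    The gauge-to-gauge spread of such extreme-event statistics can be sketched as follows: ranked event values from each rain series are assigned plotting-position return periods, interpolated onto common return periods, and then averaged across gauges. The data are synthetic, and the plotting-position and averaging choices are assumptions, not the paper's procedure.

      import numpy as np

      rng = np.random.default_rng(7)

      def return_period_curve(event_values, years):
          """Rank event values and assign return periods T = (years + 1) / rank,
          so the largest event receives the longest return period."""
          v = np.sort(np.asarray(event_values))[::-1]
          ranks = np.arange(1, v.size + 1)
          return (years + 1) / ranks, v

      # Hypothetical CSO volumes (m3) simulated with nine different ~9-year rain series
      n_gauges, years = 9, 9
      series = [rng.gumbel(loc=500, scale=150, size=25) for _ in range(n_gauges)]
      curves = [return_period_curve(s, years) for s in series]

      # Interpolate every gauge's curve onto common return periods and average
      T_common = np.array([0.5, 1, 2, 5])
      per_gauge = np.array([np.interp(T_common, T[::-1], v[::-1]) for T, v in curves])
      print("mean CSO volume  :", per_gauge.mean(axis=0).round(0))
      print("gauge-to-gauge sd:", per_gauge.std(axis=0).round(0))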

  17. Sample exchange by beam scanning with applications to noncollinear pump-probe spectroscopy at kilohertz repetition rates.

    PubMed

    Spencer, Austin P; Hill, Robert J; Peters, William K; Baranov, Dmitry; Cho, Byungmoon; Huerta-Viga, Adriana; Carollo, Alexa R; Curtis, Anna C; Jonas, David M

    2017-06-01

    In laser spectroscopy, high photon flux can perturb the sample away from thermal equilibrium, altering its spectroscopic properties. Here, we describe an optical beam scanning apparatus that minimizes repetitive sample excitation while providing shot-to-shot sample exchange for samples such as cryostats, films, and air-tight cuvettes. In this apparatus, the beam crossing point is moved within the focal plane inside the sample by scanning both tilt angles of a flat mirror. A space-filling spiral scan pattern was designed that efficiently utilizes the sample area and mirror scanning bandwidth. Scanning beams along a spiral path is shown to increase the average number of laser shots that can be sampled before a spot on the sample cell is resampled by the laser to ∼1700 (out of the maximum possible 2500 for the sample area and laser spot size) while ensuring minimal shot-to-shot spatial overlap. Both an all-refractive version and an all-reflective version of the apparatus are demonstrated. The beam scanning apparatus does not measurably alter the time delay (less than the 0.4 fs measurement uncertainty), the laser focal spot size (less than the 2 μm measurement uncertainty), or the beam overlap (less than the 3.3% measurement uncertainty), leading to pump-probe and autocorrelation signal transients that accurately characterize the equilibrium sample.

  18. Modeling of 2D diffusion processes based on microscopy data: parameter estimation and practical identifiability analysis.

    PubMed

    Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J

    2013-01-01

    Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Since recently, the spatial gradients caused by diffusion can be assessed in-vitro and in-vivo using microscopy based imaging techniques. The resulting time-series of two dimensional, high-resolutions images in combination with mechanistic models enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties of the model parameters. We introduce a likelihood function for image-based measurements with log-normal distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example for haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data as well as the proposed identifiability analysis approach is widely applicable to diffusion processes. The profile likelihood based method provides more rigorous uncertainty bounds in contrast to local approximation methods.
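    The profile-likelihood idea can be shown on a much simpler model than the PDE-constrained one used in the paper: fix the parameter of interest on a grid, re-optimize the remaining parameters at each grid point, and read the confidence interval off a chi-square threshold. The exponential-decay model, the log-normal noise level, and all parameter values below are assumptions for illustration only.

      import numpy as np
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(3)

      # Toy model y = A * exp(-k * t) observed with log-normal noise of known sigma
      t = np.linspace(0, 5, 20)
      A_true, k_true, sigma = 10.0, 0.7, 0.1
      y = A_true * np.exp(-k_true * t) * rng.lognormal(sigma=sigma, size=t.size)

      def nll(A, k):
          """Negative log-likelihood (up to a constant) for log-normal noise."""
          resid = np.log(y) - np.log(A * np.exp(-k * t))
          return 0.5 * np.sum(resid**2) / sigma**2

      def profile_nll(k):
          """Profile out A by re-optimizing it for each fixed k."""
          return minimize_scalar(lambda A: nll(A, k),
                                 bounds=(1e-3, 100), method="bounded").fun

      k_grid = np.linspace(0.4, 1.0, 61)
      prof = np.array([profile_nll(k) for k in k_grid])
      threshold = prof.min() + 1.92        # chi-square(1)/2 at 95% confidence
      ci = k_grid[prof <= threshold]
      print(f"k_hat ~ {k_grid[prof.argmin()]:.2f}, "
            f"95% profile CI ~ [{ci.min():.2f}, {ci.max():.2f}]")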

  19. Addressing Uncertainty in Fecal Indicator Bacteria Dark Inactivation Rates

    EPA Science Inventory

    Fecal contamination is a leading cause of surface water quality degradation. Roughly 20% of all total maximum daily load assessments approved by the United States Environmental Protection Agency since 1995, for example, address water bodies with unacceptably high fecal indicator...

  20. Effects of directional uncertainty on visually-guided joystick pointing.

    PubMed

    Berryhill, Marian; Kveraga, Kestutis; Hughes, Howard C

    2005-02-01

    Reaction times generally follow the predictions of Hick's law as stimulus-response uncertainty increases, although notable exceptions include the oculomotor system. Saccadic and smooth pursuit eye movement reaction times are independent of stimulus-response uncertainty. Previous research showed that joystick pointing to targets, a motor analog of saccadic eye movements, is only modestly affected by increased stimulus-response uncertainty; however, a no-uncertainty condition (simple reaction time to 1 possible target) was not included. Here, we re-evaluate manual joystick pointing including a no-uncertainty condition. Analysis indicated simple joystick pointing reaction times were significantly faster than choice reaction times. Choice reaction times (2, 4, or 8 possible target locations) only slightly increased as the number of possible targets increased. These data suggest that, as with joystick tracking (a motor analog of smooth pursuit eye movements), joystick pointing is more closely approximated by a simple/choice step function than the log function predicted by Hick's law.
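    For reference, the logarithmic relation referred to as Hick's law is commonly written as below, where a and b are empirical constants and the "+1" term accounts for the possibility that no stimulus appears (this is the textbook form; the paper's exact formulation may differ):

      \[
        \mathrm{RT} \;=\; a + b\,\log_{2}(N + 1)
      \]

    By contrast, the joystick-pointing results described above are better captured by a step function: a faster simple-RT level for N = 1 and a roughly constant choice-RT level for N > 1.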

  1. Maximum predictive power and the superposition principle

    NASA Technical Reports Server (NTRS)

    Summhammer, Johann

    1994-01-01

    In quantum physics the direct observables are probabilities of events. We ask how observed probabilities must be combined to achieve what we call maximum predictive power. According to this concept the accuracy of a prediction must only depend on the number of runs whose data serve as input for the prediction. We transform each probability to an associated variable whose uncertainty interval depends only on the amount of data and strictly decreases with it. We find that for a probability which is a function of two other probabilities maximum predictive power is achieved when linearly summing their associated variables and transforming back to a probability. This recovers the quantum mechanical superposition principle.

  2. Fast automated analysis of strong gravitational lenses with convolutional neural networks.

    PubMed

    Hezaveh, Yashar D; Levasseur, Laurence Perreault; Marshall, Philip J

    2017-08-30

    Quantifying image distortions caused by strong gravitational lensing-the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures-and estimating the corresponding matter distribution of these structures (the 'gravitational lens') has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. Here we report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the 'singular isothermal ellipsoid' density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.

  3. An objective method to determine the probability distribution of the minimum apparent age of a sample of radio-isotopic dates

    NASA Astrophysics Data System (ADS)

    Ickert, R. B.; Mundil, R.

    2012-12-01

    Dateable minerals (especially zircon U-Pb) that crystallized at high temperatures but have been redeposited pose both unique opportunities and challenges for geochronology. Although they have the potential to provide useful information on the depositional age of their host rocks, their relationship to the host is not always well constrained. For example, primary volcanic deposits will often have a lag time (time between eruption and deposition) that is smaller than can be resolved using radiometric techniques, and the age of eruption and of deposition will be coincident within uncertainty. Alternatively, ordinary clastic sedimentary rocks will usually have a long and variable lag time, even for the youngest minerals. Intermediate cases, for example moderately reworked volcanogenic material, will have a short, but unknown lag time. A compounding problem with U-Pb zircon is that the residence time of crystals in their host magma chamber (time between crystallization and eruption) can be long and variable, even within the products of a single eruption. In cases where the lag and/or residence time is suspected to be large relative to the precision of the date, a common objective is to determine the minimum age of a sample of dates, in order to constrain the maximum age of the deposition of the host rock. However, neither the extraction of that age nor the assignment of a meaningful uncertainty is straightforward. A number of ad hoc techniques have been employed in the literature, which may be appropriate for particular data sets or specific problems, but may yield biased or misleading results. Ludwig (2012) has developed an objective, statistically justified method for the determination of the distribution of the minimum age, but it has not been widely adopted. Here we extend this algorithm with a bootstrap (which can show the effect, if any, of the sampling distribution itself). This method has a number of desirable characteristics: it can incorporate all data points while being resistant to outliers, it utilizes the measurement uncertainties, and it does not require the assumption that any given cluster of data represents a single geological event. In brief, the technique generates a synthetic distribution from the input data by resampling with replacement (a bootstrap). Each resample is a random selection from a Gaussian distribution defined by the mean and uncertainty of the data point. For this distribution, the minimum value is calculated. This procedure is repeated many times (>1000) and a distribution of minimum values is generated, from which a confidence interval can be constructed. We demonstrate the application of this technique using natural and synthetic datasets, show the advantages and limitations, and relate it to other methods. We emphasize that this estimate remains strictly a minimum age; as with any other estimate that does not explicitly incorporate lag or residence time, it will not reflect a depositional age if the lag/residence time is larger than the uncertainty of the estimate. We recommend that this or similar techniques be considered by geochronologists. Ludwig, K.R., 2012. Isoplot 3.75, A geochronological toolkit for Microsoft Excel; Berkeley Geochronology Center Special Publication no. 5
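    The resampling scheme described above (resample the dates with replacement, perturb each selected date by its Gaussian measurement uncertainty, record the minimum, repeat) can be sketched in a few lines. The dates, uncertainties, and function names below are hypothetical illustrations, not the authors' implementation.

      import numpy as np

      rng = np.random.default_rng(11)

      def bootstrap_minimum_age(ages, sigmas, n_boot=10_000, resample_dates=True):
          """Distribution of the minimum apparent age of a set of dates.
          Each iteration optionally bootstraps the dates themselves and then
          perturbs every selected date by its Gaussian measurement uncertainty."""
          ages = np.asarray(ages, float)
          sigmas = np.asarray(sigmas, float)
          mins = np.empty(n_boot)
          for i in range(n_boot):
              if resample_dates:                      # bootstrap over the sampling distribution
                  idx = rng.integers(0, ages.size, ages.size)
              else:
                  idx = np.arange(ages.size)
              mins[i] = rng.normal(ages[idx], sigmas[idx]).min()
          return mins

      # Hypothetical zircon U-Pb dates (Ma) with 1-sigma uncertainties
      dates = [252.3, 252.6, 252.8, 253.1, 254.0, 257.5]
      sig = [0.3, 0.2, 0.4, 0.3, 0.5, 0.6]
      mins = bootstrap_minimum_age(dates, sig)
      lo, med, hi = np.percentile(mins, [2.5, 50, 97.5])
      print(f"minimum age ~ {med:.2f} Ma (95% interval {lo:.2f}-{hi:.2f} Ma)")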

  4. UDE-based control of variable-speed wind turbine systems

    NASA Astrophysics Data System (ADS)

    Ren, Beibei; Wang, Yeqin; Zhong, Qing-Chang

    2017-01-01

    In this paper, the control of a PMSG (permanent magnet synchronous generator)-based variable-speed wind turbine system with a back-to-back converter is considered. The uncertainty and disturbance estimator (UDE)-based control approach is applied to the regulation of the DC-link voltage and the control of the RSC (rotor-side converter) and the GSC (grid-side converter). For the rotor-side controller, the UDE-based vector control is developed for the RSC with PMSG control to facilitate the application of the MPPT (maximum power point tracking) algorithm for the maximum wind energy capture. For the grid-side controller, the UDE-based vector control is developed to control the GSC with the power reference generated by a UDE-based DC-link voltage controller. Compared with the conventional vector control, the UDE-based vector control can achieve reliable current decoupling control with fast response. Moreover, the UDE-based DC-link voltage regulation can achieve stable DC-link voltage under model uncertainties and external disturbances, e.g. wind speed variations. The effectiveness of the proposed UDE-based control approach is demonstrated through extensive simulation studies in the presence of coupled dynamics, model uncertainties and external disturbances under varying wind speeds. The UDE-based control is able to generate more energy, e.g. by 5% for the wind profile tested.

  5. A 16-year time series of 1 km AVHRR satellite data of the conterminous United States and Alaska

    USGS Publications Warehouse

    Eidenshink, Jeff

    2006-01-01

    The U.S. Geological Survey (USGS) has developed a 16-year time series of vegetation condition information for the conterminous United States and Alaska using 1 km Advanced Very High Resolution Radiometer (AVHRR) data. The AVHRR data have been processed using consistent methods that account for radiometric variability due to calibration uncertainty, the effects of the atmosphere on surface radiometric measurements obtained from wide field-of-view observations, and the geometric registration accuracy. The conterminous United States and Alaska data sets have an atmospheric correction for water vapor, ozone, and Rayleigh scattering and include a cloud mask derived using the Clouds from AVHRR (CLAVR) algorithm. In comparison with other AVHRR time series data sets, the conterminous United States and Alaska data are processed using similar techniques. The primary difference is that the conterminous United States and Alaska data are at 1 km resolution, while others are at 8 km resolution. The time series consists of weekly and biweekly maximum normalized difference vegetation index (NDVI) composites.

  6. Modeling Temporal Processes in Early Spacecraft Design: Application of Discrete-Event Simulations for Darpa's F6 Program

    NASA Technical Reports Server (NTRS)

    Dubos, Gregory F.; Cornford, Steven

    2012-01-01

    While the ability to model the state of a space system over time is essential during spacecraft operations, the use of time-based simulations remains rare in preliminary design. The absence of the time dimension in most traditional early design tools can, however, become a hurdle when designing complex systems whose development and operations can be disrupted by various events, such as delays or failures. As the value delivered by a space system is highly affected by such events, exploring the trade space for designs that yield the maximum value calls for the explicit modeling of time. This paper discusses the use of discrete-event models to simulate spacecraft development schedules as well as operational scenarios and on-orbit resources in the presence of uncertainty. It illustrates how such simulations can be utilized to support trade studies, through the example of a tool developed for DARPA's F6 program to assist the design of "fractionated spacecraft".

  7. Uncertainty in solid precipitation and snow depth prediction for Siberia using the Noah and Noah-MP land surface models

    NASA Astrophysics Data System (ADS)

    Suzuki, Kazuyoshi; Zupanski, Milija

    2018-01-01

    In this study, we investigate the uncertainties associated with land surface processes in an ensemble prediction context. Specifically, we compare the uncertainties produced by a coupled atmosphere-land modeling system with two different land surface models, the Noah-MP land surface model (LSM) and the Noah LSM, by using the Maximum Likelihood Ensemble Filter (MLEF) data assimilation system as a platform for ensemble prediction. We carried out 24-hour prediction simulations in Siberia with 32 ensemble members beginning at 00:00 UTC on 5 March 2013. We then compared the model prediction uncertainty of snow depth and solid precipitation with observation-based research products and evaluated the standard deviation of the ensemble spread. The prediction skill and ensemble spread exhibited high positive correlation for both LSMs, indicating a realistic uncertainty estimation. The inclusion of a multiple snow-layer model in the Noah-MP LSM was beneficial for reducing the uncertainties of snow depth and snow depth change compared to the Noah LSM, but the uncertainty in daily solid precipitation showed minimal difference between the two LSMs. The impact of LSM choice in reducing temperature uncertainty was limited to surface layers of the atmosphere. In summary, we found that the more sophisticated Noah-MP LSM reduces uncertainties associated with land surface processes compared to the Noah LSM. Thus, using prediction models with improved skill implies improved predictability and greater certainty of prediction.

  8. Impact of measurement uncertainty from experimental load distribution factors on bridge load rating

    NASA Astrophysics Data System (ADS)

    Gangone, Michael V.; Whelan, Matthew J.

    2018-03-01

    Load rating and testing of highway bridges is important in determining the capacity of the structure. Experimental load rating utilizes strain transducers placed at critical locations of the superstructure to measure normal strains. These strains are then used in computing diagnostic performance measures (neutral axis of bending, load distribution factor) and ultimately a load rating. However, it has been shown that experimentally obtained strain measurements contain uncertainties associated with the accuracy and precision of the sensor and sensing system. These uncertainties propagate through to the diagnostic indicators that in turn transmit into the load rating calculation. This paper will analyze the effect that measurement uncertainties have on the experimental load rating results of a 3 span multi-girder/stringer steel and concrete bridge. The focus of this paper will be limited to the uncertainty associated with the experimental distribution factor estimate. For the testing discussed, strain readings were gathered at the midspan of each span of both exterior girders and the center girder. Test vehicles of known weight were positioned at specified locations on each span to generate maximum strain response for each of the five girders. The strain uncertainties were used in conjunction with a propagation formula developed by the authors to determine the standard uncertainty in the distribution factor estimates. This distribution factor uncertainty is then introduced into the load rating computation to determine the possible range of the load rating. The results show the importance of understanding measurement uncertainty in experimental load testing.
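    One common experimental definition of the lateral distribution factor is the ratio of a girder's midspan strain to the sum over all girders, DF_i = eps_i / sum_j eps_j; with that definition, independent strain uncertainties can be propagated to the first order as in the sketch below. The strain values, sensor uncertainty, and the assumption of independence are illustrative and are not taken from the paper's propagation formula.

      import numpy as np

      def distribution_factor_uncertainty(strains, strain_sd):
          """First-order propagation of independent strain uncertainties into the
          distribution factors DF_i = eps_i / sum_j eps_j."""
          eps = np.asarray(strains, float)
          sd = np.asarray(strain_sd, float)
          S = eps.sum()
          df = eps / S
          # partials: d(DF_i)/d(eps_i) = (S - eps_i)/S^2, d(DF_i)/d(eps_j) = -eps_i/S^2
          n = eps.size
          jac = -np.outer(eps, np.ones(n)) / S**2
          jac[np.diag_indices(n)] = (S - eps) / S**2
          df_sd = np.sqrt((jac**2 * sd**2).sum(axis=1))
          return df, df_sd

      # Hypothetical midspan strains (microstrain) for five girders, common +/-2 microstrain uncertainty
      strains = [120.0, 180.0, 210.0, 175.0, 115.0]
      df, df_sd = distribution_factor_uncertainty(strains, [2.0] * 5)
      for g, (d, s) in enumerate(zip(df, df_sd), start=1):
          print(f"girder {g}: DF = {d:.3f} +/- {s:.4f}")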

  9. Preliminary Seismic Probabilistic Tsunami Hazard Map for Italy

    NASA Astrophysics Data System (ADS)

    Lorito, Stefano; Selva, Jacopo; Basili, Roberto; Grezio, Anita; Molinari, Irene; Piatanesi, Alessio; Romano, Fabrizio; Tiberti, Mara Monica; Tonini, Roberto; Bonini, Lorenzo; Michelini, Alberto; Macias, Jorge; Castro, Manuel J.; González-Vida, José Manuel; de la Asunción, Marc

    2015-04-01

    We present a preliminary release of the first seismic probabilistic tsunami hazard map for Italy. The map aims to become an important tool for the Italian Department of Civil Protection (DPC), as well as a support tool for the NEAMTWS Tsunami Service Provider, the Centro Allerta Tsunami (CAT) at INGV, Rome. The map shows the offshore maximum tsunami elevation expected for several average return periods. Both crustal and subduction earthquakes are considered. The probability for each scenario (location, depth, mechanism, source size, magnitude and temporal rate) is defined on a uniform grid covering the entire Mediterranean for crustal earthquakes and on the plate interface for subduction earthquakes. Activity rates are assigned from seismic catalogues and based on a tectonic regionalization of the Mediterranean area. The methodology explores the associated aleatory uncertainty through the innovative application of an Event Tree. Main sources of epistemic uncertainty are also addressed, although in a preliminary way. The whole procedure relies on a database of pre-calculated Gaussian-shaped Green's functions for the sea level elevation, to be used also as a real-time hazard assessment tool by CAT. Tsunami simulations are performed using the non-linear shallow water multi-GPU code HySEA, over a 30 arcsec bathymetry (from the SRTM30+ dataset), and the maximum elevations are stored at the 50-meter isobath and then extrapolated using Green's law to 1 meter depth. This work is partially funded by project ASTARTE - Assessment, Strategy And Risk Reduction for Tsunamis in Europe - FP7-ENV2013 6.4-3, Grant 603839, and by the Italian flagship project RITMARE.

  10. Focused Belief Measures for Uncertainty Quantification in High Performance Semantic Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joslyn, Cliff A.; Weaver, Jesse R.

    In web-scale semantic data analytics there is a great need for methods which aggregate uncertainty claims, on the one hand respecting the information provided as accurately as possible, while on the other still being tractable. Traditional statistical methods are more robust, but only represent distributional, additive uncertainty. Generalized information theory methods, including fuzzy systems and Dempster-Shafer (DS) evidence theory, represent multiple forms of uncertainty, but are computationally and methodologically difficult. We require methods which provide an effective balance between the complete representation of the full complexity of interacting uncertainty claims and the needs of both computational complexity and human cognition. Here we build on Jøsang's subjective logic to posit methods in focused belief measures (FBMs), where a full DS structure is focused to a single event. The resulting ternary logical structure is posited to be able to capture the minimal amount of generalized complexity needed at a maximum of computational efficiency. We demonstrate the efficacy of this approach in a web ingest experiment over the 2012 Billion Triple dataset from the Semantic Web Challenge.
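    Focusing a Dempster-Shafer structure onto a single event yields a ternary (belief, disbelief, uncertainty) opinion in the spirit of subjective logic. The sketch below uses the simple mapping via belief functions (mass inside the event, mass disjoint from it, and everything else); it is an illustration under that assumption, not necessarily the authors' FBM construction.

      def focus_on_event(mass, event):
          """Collapse a Dempster-Shafer basic mass assignment onto a single event,
          giving a ternary (belief, disbelief, uncertainty) opinion:
          belief      = total mass of non-empty focal sets contained in the event,
          disbelief   = total mass of focal sets disjoint from the event,
          uncertainty = total mass of focal sets overlapping both."""
          event = frozenset(event)
          b = d = u = 0.0
          for subset, m in mass.items():
              s = frozenset(subset)
              if not s:
                  continue
              if s <= event:
                  b += m
              elif not (s & event):
                  d += m
              else:
                  u += m
          return b, d, u

      # Hypothetical mass assignment over the frame {x, y, z}
      mass = {("x",): 0.5, ("y",): 0.1, ("x", "y"): 0.2, ("x", "y", "z"): 0.2}
      print(focus_on_event(mass, {"x"}))   # -> (0.5, 0.1, 0.4)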

  11. Probabilistic Storm Surge Forecast For Venice

    NASA Astrophysics Data System (ADS)

    Mel, Riccardo; Lionello, Piero

    2013-04-01

    This study describes an ensemble storm surge prediction procedure for the city of Venice, which is potentially very useful for its management, maintenance and for operating the movable barriers that are presently being built. The Ensemble Prediction System (EPS) is meant to complement the existing sea level (SL) forecast system by providing a probabilistic forecast and information on the uncertainty of SL prediction. The procedure is applied to storm surge events in the period 2009-2010, producing for each of them an ensemble of 50 simulations. It is shown that EPS slightly increases the accuracy of SL prediction with respect to the deterministic forecast (DF) and is more reliable. Although the results are low-biased and forecast uncertainty is underestimated, the probability distribution of maximum sea level produced by the EPS is acceptably realistic. The error of the EPS mean is shown to be correlated with the EPS spread. SL peaks correspond to maxima of uncertainty and uncertainty increases linearly with the forecast range. The quasi-linear dynamics of the storm surges produces a modulation of the uncertainty after the SL peak with period corresponding to that of the main Adriatic seiche.

  12. Quantification of variability and uncertainty for air toxic emission inventories with censored emission factor data.

    PubMed

    Frey, H Christopher; Zhao, Yuchao

    2004-11-15

    Probabilistic emission inventories were developed for urban air toxic emissions of benzene, formaldehyde, chromium, and arsenic for the example of Houston. Variability and uncertainty in emission factors were quantified for 71-97% of total emissions, depending upon the pollutant and data availability. Parametric distributions for interunit variability were fit using maximum likelihood estimation (MLE), and uncertainty in mean emission factors was estimated using parametric bootstrap simulation. For data sets containing one or more nondetected values, empirical bootstrap simulation was used to randomly sample detection limits for nondetected values and observations for sample values, and parametric distributions for variability were fit using MLE estimators for censored data. The goodness-of-fit for censored data was evaluated by comparison of cumulative distributions of bootstrap confidence intervals and empirical data. The emission inventory 95% uncertainty ranges span from as small as -25% to +42% for chromium to as large as -75% to +224% for arsenic (with correlated surrogates). Uncertainty was dominated by only a few source categories. Recommendations are made for future improvements to the analysis.
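    Maximum likelihood estimation with nondetects treats each nondetect through the distribution's CDF at its detection limit. The sketch below fits a lognormal variability distribution to a small, made-up emission-factor data set and runs a simplified parametric bootstrap for the mean; the data, the lognormal choice, and the simplification of not re-censoring each bootstrap sample are all assumptions, not the paper's procedure.

      import numpy as np
      from scipy import stats, optimize

      rng = np.random.default_rng(5)

      # Hypothetical emission-factor data: measured values plus nondetects ("< limit")
      detects = np.array([0.8, 1.3, 2.1, 0.6, 1.7, 3.4, 0.9])
      det_limits = np.array([0.5, 0.5, 1.0])

      def neg_loglik(params):
          """Censored lognormal likelihood: pdf for detects, cdf at the limit for nondetects."""
          mu, log_sigma = params
          sigma = np.exp(log_sigma)
          ll = stats.lognorm.logpdf(detects, s=sigma, scale=np.exp(mu)).sum()
          ll += stats.lognorm.logcdf(det_limits, s=sigma, scale=np.exp(mu)).sum()
          return -ll

      fit = optimize.minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
      mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])

      # Simplified parametric bootstrap of the mean (a full analysis would
      # re-censor and refit each bootstrap sample)
      n = detects.size + det_limits.size
      boot_means = [stats.lognorm.rvs(s=sigma_hat, scale=np.exp(mu_hat), size=n,
                                      random_state=rng).mean() for _ in range(2000)]
      lo, hi = np.percentile(boot_means, [2.5, 97.5])
      mean_hat = np.exp(mu_hat + 0.5 * sigma_hat**2)
      print(f"mean EF ~ {mean_hat:.2f}, 95% bootstrap interval ({lo:.2f}, {hi:.2f})")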

  13. Model uncertainty of various settlement estimation methods in shallow tunnels excavation; case study: Qom subway tunnel

    NASA Astrophysics Data System (ADS)

    Khademian, Amir; Abdollahipour, Hamed; Bagherpour, Raheb; Faramarzi, Lohrasb

    2017-10-01

    In addition to the numerous planning and execution challenges, underground excavation in urban areas is always followed by certain destructive effects, especially on the ground surface; ground settlement is the most important of these effects, and different empirical, analytical and numerical methods exist for its estimation. Since geotechnical models are associated with considerable model uncertainty, this study characterized the model uncertainty of settlement estimation models through a systematic comparison between model predictions and past performance data derived from instrumentation. To do so, the amount of surface settlement induced by excavation of the Qom subway tunnel was estimated via empirical (Peck), analytical (Loganathan and Poulos) and numerical (FDM) methods; the resulting maximum settlement values were 1.86, 2.02 and 1.52 cm, respectively. The comparison of these predicted amounts with the actual data from instrumentation was employed to specify the uncertainty of each model. The numerical model outcomes, with a relative error of 3.8%, best matched the observed settlement, while the analytical method, with a relative error of 27.8%, yielded the highest level of model uncertainty.

  14. Fundamental uncertainty limit of optical flow velocimetry according to Heisenberg's uncertainty principle.

    PubMed

    Fischer, Andreas

    2016-11-01

    Optical flow velocity measurements are important for understanding the complex behavior of flows. Although a huge variety of methods exist, they are either based on a Doppler or a time-of-flight measurement principle. Doppler velocimetry evaluates the velocity-dependent frequency shift of light scattered at a moving particle, whereas time-of-flight velocimetry evaluates the traveled distance of a scattering particle per time interval. Regarding the aim of achieving a minimal measurement uncertainty, it is unclear if one principle allows one to achieve lower uncertainties or if both principles can achieve equal uncertainties. For this reason, the natural, fundamental uncertainty limit according to Heisenberg's uncertainty principle is derived for Doppler and time-of-flight measurement principles, respectively. The obtained limits of the velocity uncertainty are qualitatively identical, showing, e.g., a direct proportionality to the absolute value of the velocity to the power of 3/2 and an inverse proportionality to the square root of the scattered light power. Hence, both measurement principles have identical potentials regarding the fundamental uncertainty limit due to the quantum mechanical behavior of photons. This fundamental limit can be attained (at least asymptotically) in reality either with Doppler or time-of-flight methods, because the respective Cramér-Rao bounds for dominating photon shot noise, which is modeled as white Poissonian noise, are identical with the conclusions from Heisenberg's uncertainty principle.
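    Written out as a formula, the scaling quoted above reads as follows, with sigma_v the velocity uncertainty, |v| the velocity magnitude and P_s the scattered light power; the proportionality constant is not reproduced here.

```latex
\sigma_v \;\propto\; \frac{|v|^{3/2}}{\sqrt{P_s}}
```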

  15. Bayesian Framework for Water Quality Model Uncertainty Estimation and Risk Management

    EPA Science Inventory

    A formal Bayesian methodology is presented for integrated model calibration and risk-based water quality management using Bayesian Monte Carlo simulation and maximum likelihood estimation (BMCML). The primary focus is on lucid integration of model calibration with risk-based wat...

  16. Space Radiation Risks for Astronauts on Multiple International Space Station Missions

    PubMed Central

    Cucinotta, Francis A.

    2014-01-01

    Mortality and morbidity risks from space radiation exposure are an important concern for astronauts participating in International Space Station (ISS) missions. NASA’s radiation limits set a 3% cancer fatality probability as the upper bound of acceptable risk and considers uncertainties in risk predictions using the upper 95% confidence level (CL) of the assessment. In addition to risk limitation, an important question arises as to the likelihood of a causal association between a crew-members’ radiation exposure in the past and a diagnosis of cancer. For the first time, we report on predictions of age and sex specific cancer risks, expected years of life-loss for specific diseases, and probability of causation (PC) at different post-mission times for participants in 1-year or multiple ISS missions. Risk projections with uncertainty estimates are within NASA acceptable radiation standards for mission lengths of 1-year or less for likely crew demographics. However, for solar minimum conditions upper 95% CL exceed 3% risk of exposure induced death (REID) by 18 months or 24 months for females and males, respectively. Median PC and upper 95%-confidence intervals are found to exceed 50% for several cancers for participation in two or more ISS missions of 18 months or longer total duration near solar minimum, or for longer ISS missions at other phases of the solar cycle. However, current risk models only consider estimates of quantitative differences between high and low linear energy transfer (LET) radiation. We also make predictions of risk and uncertainties that would result from an increase in tumor lethality for highly ionizing radiation reported in animal studies, and the additional risks from circulatory diseases. These additional concerns could further reduce the maximum duration of ISS missions within acceptable risk levels, and will require new knowledge to properly evaluate. PMID:24759903

  17. Wavelet extractor: A Bayesian well-tie and wavelet extraction program

    NASA Astrophysics Data System (ADS)

    Gunning, James; Glinsky, Michael E.

    2006-06-01

    We introduce a new open-source toolkit for the well-tie or wavelet extraction problem of estimating seismic wavelets from seismic data, time-to-depth information, and well-log suites. The wavelet extraction model is formulated as a Bayesian inverse problem, and the software will simultaneously estimate wavelet coefficients, other parameters associated with uncertainty in the time-to-depth mapping, positioning errors in the seismic imaging, and useful amplitude-variation-with-offset (AVO) related parameters in multi-stack extractions. It is capable of multi-well, multi-stack extractions, and uses continuous seismic data-cube interpolation to cope with the problem of arbitrary well paths. Velocity constraints in the form of checkshot data, interpreted markers, and sonic logs are integrated in a natural way. The Bayesian formulation allows computation of full posterior uncertainties of the model parameters, and the important problem of the uncertain wavelet span is addressed using a multi-model posterior developed from Bayesian model selection theory. The wavelet extraction tool is distributed as part of the Delivery seismic inversion toolkit. A simple log and seismic viewing tool is included in the distribution. The code is written in Java, and thus platform independent, but the Seismic Unix (SU) data model makes the inversion particularly suited to Unix/Linux environments. It is a natural companion piece of software to Delivery, having the capacity to produce maximum likelihood wavelet and noise estimates, but will also be of significant utility to practitioners wanting to produce wavelet estimates for other inversion codes or purposes. The generation of full parameter uncertainties is a crucial function for workers wishing to investigate questions of wavelet stability before proceeding to more advanced inversion studies.

  18. A new 1649-1884 catalog of destructive earthquakes near Tokyo and implications for the long-term seismic process

    USGS Publications Warehouse

    Grunewald, E.D.; Stein, R.S.

    2006-01-01

    In order to assess the long-term character of seismicity near Tokyo, we construct an intensity-based catalog of damaging earthquakes that struck the greater Tokyo area between 1649 and 1884. Models for 15 historical earthquakes are developed using calibrated intensity attenuation relations that quantitatively convey uncertainties in event location and magnitude, as well as their covariance. The historical catalog is most likely complete for earthquakes M >= 6.7; the largest earthquake in the catalog is the 1703 M ~ 8.2 Genroku event. Seismicity rates from 80 years of instrumental records, which include the 1923 M = 7.9 Kanto shock, as well as interevent times estimated from the past ~7000 years of paleoseismic data, are combined with the historical catalog to define a frequency-magnitude distribution for 4.5 <= M <= 8.2, which is well described by a truncated Gutenberg-Richter relation with a b value of 0.96 and a maximum magnitude of 8.4. Large uncertainties associated with the intensity-based catalog are propagated by a Monte Carlo simulation to estimations of the scalar moment rate. The resulting best estimate of moment rate during 1649-2003 is 1.35 × 10^26 dyn cm yr^-1 with considerable uncertainty at the 1σ level: (-0.11, +0.20) × 10^26 dyn cm yr^-1. Comparison with geodetic models of the interseismic deformation indicates that the geodetic moment accumulation and likely moment release rate are roughly balanced over the catalog period. This balance suggests that the extended catalog is representative of long-term seismic processes near Tokyo and so can be used to assess earthquake probabilities. The resulting Poisson (or time-averaged) 30-year probability for M >= 7.9 earthquakes is 7-11%.
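    To make the two quoted ingredients concrete, the following hedged sketch evaluates a truncated Gutenberg-Richter magnitude model with the b value and maximum magnitude reported above and converts an event rate into a 30-year Poisson probability; the annual rate used below is a made-up placeholder, not a value taken from the catalog.

```python
import numpy as np

b = 0.96        # b value reported above
m_max = 8.4     # maximum magnitude of the truncated relation
m_min = 4.5

def truncated_gr_cdf(m, b=b, m_min=m_min, m_max=m_max):
    """CDF of the doubly truncated exponential (Gutenberg-Richter) magnitude model."""
    beta = b * np.log(10.0)
    return (1 - np.exp(-beta * (m - m_min))) / (1 - np.exp(-beta * (m_max - m_min)))

# Hypothetical annual rate of M >= m_min events (placeholder, not the catalog value)
annual_rate_mmin = 5.0
rate_m79 = annual_rate_mmin * (1 - truncated_gr_cdf(7.9))   # annual rate of M >= 7.9

p_30yr = 1 - np.exp(-rate_m79 * 30.0)   # Poisson (time-averaged) probability over 30 years
print(f"30-year Poisson probability of M >= 7.9: {p_30yr:.1%}")
```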

  19. Space radiation risks for astronauts on multiple International Space Station missions.

    PubMed

    Cucinotta, Francis A

    2014-01-01

    Mortality and morbidity risks from space radiation exposure are an important concern for astronauts participating in International Space Station (ISS) missions. NASA's radiation limits set a 3% cancer fatality probability as the upper bound of acceptable risk and considers uncertainties in risk predictions using the upper 95% confidence level (CL) of the assessment. In addition to risk limitation, an important question arises as to the likelihood of a causal association between a crew-members' radiation exposure in the past and a diagnosis of cancer. For the first time, we report on predictions of age and sex specific cancer risks, expected years of life-loss for specific diseases, and probability of causation (PC) at different post-mission times for participants in 1-year or multiple ISS missions. Risk projections with uncertainty estimates are within NASA acceptable radiation standards for mission lengths of 1-year or less for likely crew demographics. However, for solar minimum conditions upper 95% CL exceed 3% risk of exposure induced death (REID) by 18 months or 24 months for females and males, respectively. Median PC and upper 95%-confidence intervals are found to exceed 50% for several cancers for participation in two or more ISS missions of 18 months or longer total duration near solar minimum, or for longer ISS missions at other phases of the solar cycle. However, current risk models only consider estimates of quantitative differences between high and low linear energy transfer (LET) radiation. We also make predictions of risk and uncertainties that would result from an increase in tumor lethality for highly ionizing radiation reported in animal studies, and the additional risks from circulatory diseases. These additional concerns could further reduce the maximum duration of ISS missions within acceptable risk levels, and will require new knowledge to properly evaluate.

  20. Consistency assessment of rating curve data in various locations using Bidirectional Reach (BReach)

    NASA Astrophysics Data System (ADS)

    Van Eerdenbrugh, Katrien; Van Hoey, Stijn; Coxon, Gemma; Freer, Jim; Verhoest, Niko E. C.

    2017-10-01

    When estimating discharges through rating curves, temporal data consistency is a critical issue. In this research, consistency in stage-discharge data is investigated using a methodology called Bidirectional Reach (BReach), which departs from a (in operational hydrology) commonly used definition of consistency. A period is considered to be consistent if no consecutive and systematic deviations from a current situation occur that exceed observational uncertainty. Therefore, the capability of a rating curve model to describe a subset of the (chronologically sorted) data is assessed in each observation by indicating the outermost data points for which the rating curve model behaves satisfactorily. These points are called the maximum left or right reach, depending on the direction of the investigation. This temporal reach should not be confused with a spatial reach (indicating a part of a river). Changes in these reaches throughout the data series indicate possible changes in data consistency and if not resolved could introduce additional errors and biases. In this research, various measurement stations in the UK, New Zealand and Belgium are selected based on their significant historical ratings information and their specific characteristics related to data consistency. For each country, regional information is maximally used to estimate observational uncertainty. Based on this uncertainty, a BReach analysis is performed and, subsequently, results are validated against available knowledge about the history and behavior of the site. For all investigated cases, the methodology provides results that appear to be consistent with this knowledge of historical changes and thus facilitates a reliable assessment of (in)consistent periods in stage-discharge measurements. This assessment is not only useful for the analysis and determination of discharge time series, but also to enhance applications based on these data (e.g., by informing hydrological and hydraulic model evaluation design about consistent time periods to analyze).

  1. When, not if: The inescapability of an uncertain future

    NASA Astrophysics Data System (ADS)

    Lewandowsky, S.; Ballard, T.

    2014-12-01

    Uncertainty is an inherent feature of most scientific endeavours, and many political decisions must be made in the presence of scientific uncertainty. In the case of climate change, there is evidence that greater scientific uncertainty increases the risk associated with the impact of climate change. Scientific uncertainty thus provides an impetus for cutting emissions rather than delaying action. In contrast to those normative considerations, uncertainty is frequently cited in political and public discourse as a reason to delay mitigation. We examine ways in which this gap between public and scientific understanding of uncertainty can be bridged. In particular, we sought ways to communicate uncertainty in a way that better calibrates people's risk perceptions with the projected impact of climate change. We report two behavioural experiments in which uncertainty about the future was expressed either as outcome uncertainty or temporal uncertainty. The conventional presentation of uncertainty involves uncertainty about an outcome at a given time—for example, the range of possible sea level rise (say 50cm +/- 20cm) by a certain date. An alternative presentation of the same situation presents a certain outcome ("sea levels will rise by 50cm") but places the uncertainty into the time of arrival ("this may occur as early as 2040 or as late as 2080"). We presented participants with a series of statements and graphs indicating projected increases in temperature, sea levels, ocean acidification, and a decrease in arctic sea ice. In the uncertain magnitude condition, the statements and graphs reported the upper and lower confidence bounds of the projected magnitude and the mean projected time of arrival. In the uncertain time of arrival condition, they reported the upper and lower confidence bounds of the projected time of arrival and the mean projected magnitude. The results show that when uncertainty was presented as uncertain time of arrival rather than an uncertain outcome, people expressed greater concern about the projected outcomes. In a further experiment involving repeated "games" with a simulated economy, we similarly showed that people allocate more resources to mitigation if there is uncertainty about the timing of an adverse event rather than about the magnitude of its impact.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardin, Ernest; Hadgu, Teklu; Greenberg, Harris

    This report is one follow-on to a study of reference geologic disposal design concepts (Hardin et al. 2011a). Based on an analysis of maximum temperatures, that study concluded that certain disposal concepts would require extended decay storage prior to emplacement, or the use of small waste packages, or both. The study used nominal values for thermal properties of host geologic media and engineered materials, demonstrating the need for uncertainty analysis to support the conclusions. This report is a first step that identifies the input parameters of the maximum temperature calculation, surveys published data on measured values, uses an analytical approach to determine which parameters are most important, and performs an example sensitivity analysis. Using results from this first step, temperature calculations planned for FY12 can focus on only the important parameters, and can use the uncertainty ranges reported here. The survey of published information on thermal properties of geologic media and engineered materials is intended to be sufficient for use in generic calculations to evaluate the feasibility of reference disposal concepts. A full compendium of literature data is beyond the scope of this report. The term “uncertainty” is used here to represent both measurement uncertainty and spatial variability, or variability across host geologic units. For the most important parameters (e.g., buffer thermal conductivity) the extent of literature data surveyed samples these different forms of uncertainty and variability. Finally, this report is intended to be one chapter or section of a larger FY12 deliverable summarizing all the work on design concepts and thermal load management for geologic disposal (M3FT-12SN0804032, due 15Aug2012).

  3. Numerical modelling of instantaneous plate tectonics

    NASA Technical Reports Server (NTRS)

    Minster, J. B.; Haines, E.; Jordan, T. H.; Molnar, P.

    1974-01-01

    Assuming lithospheric plates to be rigid, 68 spreading rates, 62 fracture zone trends, and 106 earthquake slip vectors are systematically inverted to obtain a self-consistent model of instantaneous relative motions for eleven major plates. The inverse problem is linearized and solved iteratively by a maximum-likelihood procedure. Because the uncertainties in the data are small, Gaussian statistics are shown to be adequate. The use of a linear theory permits (1) the calculation of the uncertainties in the various angular velocity vectors caused by uncertainties in the data, and (2) quantitative examination of the distribution of information within the data set. The existence of a self-consistent model satisfying all the data is strong justification of the rigid plate assumption. Slow movement between North and South America is shown to be resolvable.
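    A generic sketch of the linearized, weighted least-squares step that underlies an iterative maximum-likelihood inversion of this kind, including the propagation of data uncertainties into parameter uncertainties; the design matrix, data, and covariances below are random placeholders, not the plate-motion data set.

```python
import numpy as np

rng = np.random.default_rng(2)
n_data, n_params = 236, 30            # e.g., rates, trends, slip vectors vs. angular-velocity components (illustrative)
G = rng.normal(size=(n_data, n_params))             # linearized design matrix (partial derivatives)
d = rng.normal(size=n_data)                         # data residuals at the current iterate
C_d = np.diag(rng.uniform(0.5, 2.0, size=n_data))   # data covariance (variances on the diagonal)

# Gaussian maximum-likelihood update = weighted least squares
W = np.linalg.inv(C_d)
normal_matrix = G.T @ W @ G
delta_m = np.linalg.solve(normal_matrix, G.T @ W @ d)   # model update
C_m = np.linalg.inv(normal_matrix)                      # formal parameter covariance from data errors

print("update norm:", np.linalg.norm(delta_m))
print("1-sigma parameter uncertainties (first 3):", np.sqrt(np.diag(C_m))[:3])
```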

  4. Compilation of minimum and maximum isotope ratios of selected elements in naturally occurring terrestrial materials and reagents

    USGS Publications Warehouse

    Coplen, T.B.; Hopple, J.A.; Böhlke, J.K.; Peiser, H.S.; Rieder, S.E.; Krouse, H.R.; Rosman, K.J.R.; Ding, T.; Vocke, R.D.; Revesz, K.M.; Lamberty, A.; Taylor, P.; De Bievre, P.

    2002-01-01

    Documented variations in the isotopic compositions of some chemical elements are responsible for expanded uncertainties in the standard atomic weights published by the Commission on Atomic Weights and Isotopic Abundances of the International Union of Pure and Applied Chemistry. This report summarizes reported variations in the isotopic compositions of 20 elements that are due to physical and chemical fractionation processes (not due to radioactive decay) and their effects on the standard atomic weight uncertainties. For 11 of those elements (hydrogen, lithium, boron, carbon, nitrogen, oxygen, silicon, sulfur, chlorine, copper, and selenium), standard atomic weight uncertainties have been assigned values that are substantially larger than analytical uncertainties because of common isotope abundance variations in materials of natural terrestrial origin. For 2 elements (chromium and thallium), recently reported isotope abundance variations potentially are large enough to result in future expansion of their atomic weight uncertainties. For 7 elements (magnesium, calcium, iron, zinc, molybdenum, palladium, and tellurium), documented isotope-abundance variations in materials of natural terrestrial origin are too small to have a significant effect on their standard atomic weight uncertainties. This compilation indicates the extent to which the atomic weight of an element in a given material may differ from the standard atomic weight of the element. For most elements given above, data are graphically illustrated by a diagram in which the materials are specified in the ordinate and the compositional ranges are plotted along the abscissa in scales of (1) atomic weight, (2) mole fraction of a selected isotope, and (3) delta value of a selected isotope ratio. There are no internationally distributed isotopic reference materials for the elements zinc, selenium, molybdenum, palladium, and tellurium. Preparation of such materials will help to make isotope ratio measurements among laboratories comparable. The minimum and maximum concentrations of a selected isotope in naturally occurring terrestrial materials for selected chemical elements reviewed in this report are given below:

    Isotope    Minimum mole fraction    Maximum mole fraction
    2H         0.000 0255               0.000 1838
    7Li        0.9227                   0.9278
    11B        0.7961                   0.8107
    13C        0.009 629                0.011 466
    15N        0.003 462                0.004 210
    18O        0.001 875                0.002 218
    26Mg       0.1099                   0.1103
    30Si       0.030 816                0.031 023
    34S        0.0398                   0.0473
    37Cl       0.240 77                 0.243 56
    44Ca       0.020 82                 0.020 92
    53Cr       0.095 01                 0.095 53
    56Fe       0.917 42                 0.917 60
    65Cu       0.3066                   0.3102
    205Tl      0.704 72                 0.705 06

    The numerical values above have uncertainties that depend upon the uncertainties of the determinations of the absolute isotope-abundance variations of reference materials of the elements. Because reference materials used for absolute isotope-abundance measurements have not been included in relative isotope abundance investigations of zinc, selenium, molybdenum, palladium, and tellurium, ranges in isotopic composition are not listed for these elements, although such ranges may be measurable with state-of-the-art mass spectrometry. This report is available at the URL: http://pubs.water.usgs.gov/wri014222.

  5. On the Timing of Glacial Terminations in the Equatorial Pacific

    NASA Astrophysics Data System (ADS)

    Khider, D.; Ahn, S.; Lisiecki, L. E.; Lawrence, C.; Kienast, M.

    2015-12-01

    Understanding the mechanisms through which the climate system responds to orbital insolation changes requires establishing the timing of events imprinted on the geological record. In this study, we investigate the relative timing of the glacial terminations across the equatorial Pacific in order to identify a possible mechanism through which the tropics may have influenced a global climate response. The relative termination timing between the eastern and western equatorial Pacific was assessed from 15 published SST records based on Globigerinoides ruber Mg/Ca or alkenone thermometry. The novelty of our study lies in the accounting of the various sources of uncertainty inherent to paleoclimate reconstruction and timing analysis. Specifically, we use a Monte-Carlo process allowing sampling of possible realizations of the time series that are functions of the uncertainty of the benthic δ18O alignment to a global benthic curve, of the SST uncertainty, and of the uncertainty in the change point, which we use as a definition for the termination timing. We find that the uncertainty on the relative timing estimates is on the order of several thousand years, and stems from age model uncertainty (60%) and the uncertainty in the change point detection (40%). Random sources of uncertainty are the main contributor, and, therefore, averaging over a large datasets and/or higher resolution records should yield more precise and accurate estimates of the relative lead-lag. However, at this time, the number of records is not sufficient to identify any significant differences in the timing of the last three glacial terminations in SST records from the Eastern and Western Tropical Pacific.

  6. Applications of flood depth from rapid post-event footprint generation

    NASA Astrophysics Data System (ADS)

    Booth, Naomi; Millinship, Ian

    2015-04-01

    Immediately following large flood events, an indication of the area flooded (i.e. the flood footprint) can be extremely useful for evaluating potential impacts on exposed property and infrastructure. Specifically, such information can help insurance companies estimate overall potential losses, deploy claims adjusters and ultimately assists the timely payment of due compensation to the public. Developing these datasets from remotely sensed products seems like an obvious choice. However, there are a number of important drawbacks which limit their utility in the context of flood risk studies. For example, external agencies have no control over the region that is surveyed, the time at which it is surveyed (which is important as the maximum extent would ideally be captured), and how freely accessible the outputs are. Moreover, the spatial resolution of these datasets can be low, and considerable uncertainties in the flood extents exist where dry surfaces give similar return signals to water. Most importantly of all, flood depths are required to estimate potential damages, but generally cannot be estimated from satellite imagery alone. In response to these problems, we have developed an alternative methodology for developing high-resolution footprints of maximum flood extent which do contain depth information. For a particular event, once reports of heavy rainfall are received, we begin monitoring real-time flow data and extracting peak values across affected areas. Next, using statistical extreme value analyses of historic flow records at the same measured locations, the return periods of the maximum event flow at each gauged location are estimated. These return periods are then interpolated along each river and matched to JBA's high-resolution hazard maps, which already exist for a series of design return periods. The extent and depth of flooding associated with the event flow is extracted from the hazard maps to create a flood footprint. Georeferenced ground, aerial and satellite images are used to establish defence integrity, highlight breach locations and validate our footprint. We have implemented this method to create seven flood footprints, including river flooding in central Europe and coastal flooding associated with Storm Xaver in the UK (both in 2013). The inclusion of depth information allows damages to be simulated and compared to actual damage and resultant loss which become available after the event. In this way, we can evaluate depth-damage functions used in catastrophe models and reduce their associated uncertainty. In further studies, the depth data could be used at an individual property level to calibrate property type specific depth-damage functions.
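    A hedged sketch of the statistical step described above: fitting an extreme-value distribution to an annual maximum flow series at a gauge and converting an observed event peak into a return period. The data, the GEV distribution choice via scipy, and the gauge values are illustrative assumptions, not part of the described workflow's code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Synthetic 45-year annual maximum flow series (m^3/s) standing in for a gauged record
annual_maxima = stats.genextreme.rvs(c=-0.1, loc=120.0, scale=30.0, size=45, random_state=rng)

# Fit a GEV distribution to the annual maxima (maximum likelihood fit in scipy)
c_hat, loc_hat, scale_hat = stats.genextreme.fit(annual_maxima)

event_peak_flow = 260.0   # m^3/s, hypothetical peak extracted from real-time flow data
p_exceed = stats.genextreme.sf(event_peak_flow, c_hat, loc=loc_hat, scale=scale_hat)
return_period = 1.0 / p_exceed
print(f"estimated return period of the event peak: {return_period:.0f} years")

# The return period would then be interpolated along the river network and matched
# to the closest pre-computed design-return-period hazard map to extract extent and depth.
```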

  7. Accurate Structural Correlations from Maximum Likelihood Superpositions

    PubMed Central

    Theobald, Douglas L; Wuttke, Deborah S

    2008-01-01

    The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method (“PCA plots”) for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology. PMID:18282091
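    A minimal sketch of the final analysis step described above, namely PCA of an estimated correlation matrix of atomic positions across an ensemble of superposed structures; the coordinates below are synthetic, and the maximum likelihood superposition and covariance estimation themselves are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)
n_models, n_atoms = 20, 50
coords = rng.normal(size=(n_models, n_atoms, 3))   # superposed structural ensemble (toy data)

# Per-atom fluctuation vectors about the mean structure, flattened to (models, 3*atoms)
fluct = (coords - coords.mean(axis=0)).reshape(n_models, -1)

corr = np.corrcoef(fluct, rowvar=False)            # estimated positional correlation matrix
eigvals, eigvecs = np.linalg.eigh(corr)            # PCA via eigendecomposition
order = np.argsort(eigvals)[::-1]                  # sort modes by decreasing eigenvalue

# The leading eigenvectors are the dominant modes of structural correlation;
# in the described method they would be color-coded onto the structure ("PCA plots").
print("fraction of total correlation captured by the first 3 modes:",
      eigvals[order][:3].sum() / eigvals.sum())
```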

  8. Development of a primary diffusion source of organic vapors for gas analyzer calibration

    NASA Astrophysics Data System (ADS)

    Lecuna, M.; Demichelis, A.; Sassi, G.; Sassi, M. P.

    2018-03-01

    The generation of reference mixtures of volatile organic compounds (VOCs) at trace levels (10 ppt-10 ppb) is a challenge for both environmental and clinical measurements. The calibration of gas analyzers for trace VOC measurements requires a stable and accurate source of the compound of interest. The dynamic preparation of gas mixtures by diffusion is a suitable method for fulfilling these requirements. The estimation of the uncertainty of the molar fraction of the VOC in the mixture is a key step in the metrological characterization of a dynamic generator. The performance of a dynamic generator was monitored over a wide range of operating conditions. The generation system was simulated by a model developed with computational fluid dynamics and validated against experimental data. The vapor pressure of the VOC was found to be one of the main contributors to the uncertainty of the diffusion rate and its influence at 10-70 kPa was analyzed and discussed. The air buoyancy effect and perturbations due to the weighing duration were studied. The gas carrier flow rate and the amount of liquid in the vial were found to play a role in limiting the diffusion rate. The results of sensitivity analyses were reported through an uncertainty budget for the diffusion rate. The roles of each influence quantity were discussed. A set of criteria to minimize the uncertainty contribution to the primary diffusion source (25 µg min-1) were estimated: carrier gas flow rate higher than 37.7 sml min-1, a maximum VOC liquid mass decrease in the vial of 4.8 g, a minimum residual mass of 1 g and vial weighing times of 1-3 min. With this procedure a limit uncertainty of 0.5% in the diffusion rate can be obtained for VOC mixtures at trace levels (10 ppt-10 ppb), making the developed diffusion vials a primary diffusion source with potential to become a new reference material for trace VOC analysis.

  9. Uncertainty in the Work-Place: Hierarchical Differences of Uncertainty Levels and Reduction Strategies.

    ERIC Educational Resources Information Center

    Petelle, John L.; And Others

    A study examined the uncertainty levels and types reported by supervisors and employees at three hierarchical levels of an organization: first-line supervisors, full-time employees, and part-time employees. It investigated differences in uncertainty-reduction strategies employed by these three hierarchical groups. The 61 subjects who completed…

  10. Brain strain uncertainty due to shape variation in and simplification of head angular velocity profiles.

    PubMed

    Zhao, Wei; Ji, Songbai

    2017-04-01

    Head angular velocity, instead of acceleration, is more predictive of brain strains. Surprisingly, no study exists that investigates how shape variation in angular velocity profiles affects brain strains, beyond characteristics such as peak magnitude and impulse duration. In this study, we evaluated brain strain uncertainty due to variation in angular velocity profiles and further compared with that resulting from simplifying the profiles into idealized shapes. To do so, we used reconstructed head impacts from American National Football League for shape extraction and simulated head uniaxial coronal rotations from onset to full stop. The velocity profiles were scaled to maintain an identical peak velocity magnitude and duration in order to isolate the shape for investigation. Element-wise peak maximum principal strains from 44 selected impacts were obtained. We found that the shape of angular velocity profile could significantly affect brain strain magnitude (e.g., percentage difference of 4.29-17.89 % in the whole brain relative to the group average, with cumulative strain damage measure (CSDM) uncertainty range of 23.9 %) but not pattern (correlation coefficient of 0.94-0.99). Strain differences resulting from simplifying angular velocity profiles into idealized shapes were largely within the range due to shape variation, in both percentage difference and CSDM (signed difference of 3.91 % on average, with a typical range of 0-6 %). These findings provide important insight into the uncertainty or confidence in the performance of kinematics-based injury metrics. More importantly, they suggest the feasibility to simplify head angular velocity profiles into idealized shapes, at least within the confinements of the profiles evaluated, to enable real-time strain estimation via pre-computation in the future.

  11. Brain strain uncertainty due to shape variation in and simplification of head angular velocity profiles

    PubMed Central

    Zhao, Wei; Ji, Songbai

    2016-01-01

    Head angular velocity, instead of acceleration, is more predictive of brain strains. Surprisingly, no study exists that investigates how shape variation in angular velocity profiles affects brain strains, beyond characteristics such as peak magnitude and impulse duration. In this study, we evaluated brain strain uncertainty due to variation in angular velocity profiles, and further compared with that resulting from simplifying the profiles into idealized shapes. To do so, we used reconstructed head impacts from American National Football League for shape extraction, and simulated head uniaxial coronal rotations from onset to full stop. The velocity profiles were scaled to maintain an identical peak velocity magnitude and duration in order to isolate the shape for investigation. Element-wise peak maximum principal strains from 44 selected impacts were obtained. We found that the shape of angular velocity profile could significantly affect brain strain magnitude (e.g., percentage difference of 4.29–17.89% in the whole-brain relative to the group average, with cumulative strain damage measure (CSDM) uncertainty range of 23.9%) but not pattern (correlation coefficient of 0.94–0.99). Strain differences resulting from simplifying angular velocity profiles into idealized shapes were largely within the range due to shape variation, in both percentage difference and CSDM (signed difference of 3.91% on average, with a typical range of 0–6%). These findings provide important insight into the uncertainty or confidence in the performance of kinematics-based injury metrics. More importantly, they suggest the feasibility to simplify head angular velocity profiles into idealized shapes, at least within the confinements of the profiles evaluated, to enable real-time strain estimation via pre-computation in the future. PMID:27644441

  12. Siting and Routing Assessment for Solid Waste Management Under Uncertainty Using the Grey Mini-Max Regret Criterion

    NASA Astrophysics Data System (ADS)

    Chang, Ni-Bin; Davila, Eric

    2006-10-01

    Solid waste management (SWM) is at the forefront of environmental concerns in the Lower Rio Grande Valley (LRGV), South Texas. The complexity in SWM drives area decision makers to look for innovative and forward-looking solutions to address various waste management options. In decision analysis, it is not uncommon for decision makers to go by an option that may minimize the maximum regret when some determinant factors are vague, ambiguous, or unclear. This article presents an innovative optimization model using the grey mini-max regret (GMMR) integer programming algorithm to outline an optimal regional coordination of solid waste routing and possible landfill/incinerator construction under an uncertain environment. The LRGV is an ideal location to apply the GMMR model for SWM planning because of its constant urban expansion, dwindling landfill space, and insufficient data availability signifying the planning uncertainty combined with vagueness in decision-making. The results give local decision makers hedged sets of options that consider various forms of systematic and event-based uncertainty. By extending the dimension of decision-making, this may lead to identifying a variety of beneficial solutions with efficient waste routing and facility siting for the time frame of 2005 through 2010 in LRGV. The results show the ability of the GMMR model to open insightful scenario planning that can handle situational and data-driven uncertainty in a way that was previously unavailable. Research findings also indicate that the large capital investment of incineration facilities makes such an option less competitive among municipal options for landfills. It is evident that the investment from a municipal standpoint is out of the question, but possible public-private partnerships may alleviate this obstacle.
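    An illustrative sketch of the mini-max regret criterion that underlies the GMMR formulation: for each option, compute its worst-case regret across uncertain scenarios and pick the option whose maximum regret is smallest. The options, scenarios, and cost table are hypothetical; the actual model embeds this idea in an interval-valued (grey) integer program for routing and siting decisions.

```python
import numpy as np

options = ["expand landfill A", "new landfill B", "incinerator", "regional transfer"]
scenarios = ["low growth", "baseline", "high growth"]

# cost[i, j] = total system cost (M$) of option i under scenario j (hypothetical values)
cost = np.array([
    [42.0, 55.0, 80.0],
    [48.0, 52.0, 66.0],
    [70.0, 72.0, 75.0],
    [45.0, 58.0, 85.0],
])

best_per_scenario = cost.min(axis=0)      # cheapest achievable cost under each scenario
regret = cost - best_per_scenario         # how much worse each option is than that best
max_regret = regret.max(axis=1)           # worst-case regret of each option
choice = int(np.argmin(max_regret))       # mini-max regret decision
print("mini-max regret choice:", options[choice], "with maximum regret", max_regret[choice])
```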

  13. Uncertainties in Isoprene Photochemistry and Emissions: Implications for the Oxidative Capacity of Past and Present Atmospheres and for Climate Forcing Agents

    NASA Technical Reports Server (NTRS)

    Achakulwisut, P.; Mickley, L. J.; Murray, Lee; Tai, A.P.K.; Kaplan, J.O.; Alexander, B.

    2015-01-01

    Current understanding of the factors controlling biogenic isoprene emissions and of the fate of isoprene oxidation products in the atmosphere has been evolving rapidly. We use a climate-biosphere-chemistry modeling framework to evaluate the sensitivity of estimates of the tropospheric oxidative capacity to uncertainties in isoprene emissions and photochemistry. Our work focuses on trends across two time horizons: from the Last Glacial Maximum (LGM, 21 000 years BP) to the preindustrial (1770s); and from the preindustrial to the present day (1990s). We find that different oxidants have different sensitivities to the uncertainties tested in this study, with OH being the most sensitive: changes in the global mean OH levels for the LGM-to-preindustrial transition range between -29 and +7, and those for the preindustrial-to-present day transition range between -8 and +17, across our simulations. Our results suggest that the observed glacial-interglacial variability in atmospheric methane concentrations is predominantly driven by changes in methane sources as opposed to changes in OH, the primary methane sink. However, the magnitudes of change are subject to uncertainties in the past isoprene global burdens, as are estimates of the change in the global burden of secondary organic aerosol (SOA) relative to the preindustrial. We show that the linear relationship between tropospheric mean OH and tropospheric mean ozone photolysis rates, water vapor, and total emissions of NOx and reactive carbon first reported in Murray et al. (2014) does not hold across all periods with the new isoprene photochemistry mechanism. Our results demonstrate that inadequacies in our understanding of present-day OH and its controlling factors must be addressed in order to improve model estimates of the oxidative capacity of past and present atmospheres.

  14. Usage of ensemble geothermal models to consider geological uncertainties

    NASA Astrophysics Data System (ADS)

    Rühaak, Wolfram; Steiner, Sarah; Welsch, Bastian; Sass, Ingo

    2015-04-01

    The usage of geothermal energy for instance by borehole heat exchangers (BHE) is a promising concept for a sustainable supply of heat for buildings. BHE are closed pipe systems, in which a fluid is circulating. Heat from the surrounding rocks is transferred to the fluid purely by conduction. The fluid carries the heat to the surface, where it can be utilized. Larger arrays of BHE require typically previous numerical models. Motivations are the design of the system (number and depth of the required BHE) but also regulatory reasons. Especially such regulatory operating permissions often require maximum realistic models. Although such realistic models are possible in many cases with today's codes and computer resources, they are often expensive in terms of time and effort. A particular problem is the knowledge about the accuracy of the achieved results. An issue, which is often neglected while dealing with highly complex models, is the quantification of parameter uncertainties as a consequence of the natural heterogeneity of the geological subsurface. Experience has shown, that these heterogeneities can lead to wrong forecasts. But also variations in the technical realization and especially of the operational parameters (which are mainly a consequence of the regional climate) can lead to strong variations in the simulation results. Instead of one very detailed single forecast model, it should be considered, to model numerous more simple models. By varying parameters, the presumed subsurface uncertainties, but also the uncertainties in the presumed operational parameters can be reflected. Finally not only one single result should be reported, but instead the range of possible solutions and their respective probabilities. In meteorology such an approach is well known as ensemble-modeling. The concept is demonstrated at a real world data set and discussed.

  15. Modeling spatiotemporal dynamics of global wetlands: comprehensive evaluation of a new sub-grid TOPMODEL parameterization and uncertainties

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen; Zimmermann, Niklaus E.; Kaplan, Jed O.; Poulter, Benjamin

    2016-03-01

    Simulations of the spatiotemporal dynamics of wetlands are key to understanding the role of wetland biogeochemistry under past and future climate. Hydrologic inundation models, such as the TOPography-based hydrological model (TOPMODEL), are based on a fundamental parameter known as the compound topographic index (CTI) and offer a computationally cost-efficient approach to simulate wetland dynamics at global scales. However, there remains a large discrepancy in the implementations of TOPMODEL in land-surface models (LSMs) and thus their performance against observations. This study describes new improvements to TOPMODEL implementation and estimates of global wetland dynamics using the LPJ-wsl (Lund-Potsdam-Jena Wald Schnee und Landschaft version) Dynamic Global Vegetation Model (DGVM) and quantifies uncertainties by comparing three digital elevation model (DEM) products (HYDRO1k, GMTED, and HydroSHEDS) at different spatial resolution and accuracy on simulated inundation dynamics. In addition, we found that calibrating TOPMODEL with a benchmark wetland data set can help to successfully delineate the seasonal and interannual variation of wetlands, as well as improve the spatial distribution of wetlands to be consistent with inventories. The HydroSHEDS DEM, using a river-basin scheme for aggregating the CTI, shows the best accuracy for capturing the spatiotemporal dynamics of wetlands among the three DEM products. The estimate of global wetland potential/maximum is ~10.3 Mkm^2 (10^6 km^2), with a mean annual maximum of ~5.17 Mkm^2 for 1980-2010. When integrated with the wetland methane emission submodule, the uncertainty of global annual CH4 emissions from topography inputs is estimated to be 29.0 Tg yr^-1. This study demonstrates the feasibility of TOPMODEL to capture spatial heterogeneity of inundation at a large scale and highlights the significance of correcting maximum wetland extent to improve modeling of interannual variations in wetland area. It additionally highlights the importance of an adequate investigation of topographic indices for simulating global wetlands and shows the opportunity to converge wetland estimates across LSMs by identifying the uncertainty associated with existing wetland products.
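    A small sketch of the compound topographic index on which such TOPMODEL schemes are based, CTI = ln(a / tan(beta)), with a the specific upslope contributing area and beta the local slope. The grids below are synthetic, and the DEM-derived flow-accumulation and slope computations are not shown; the saturation threshold is a made-up placeholder for the calibrated quantity.

```python
import numpy as np

rng = np.random.default_rng(5)
specific_area = rng.lognormal(mean=5.0, sigma=1.0, size=(100, 100))  # a: m^2 per unit contour length
slope_rad = rng.uniform(0.001, 0.3, size=(100, 100))                 # beta: local slope in radians

cti = np.log(specific_area / np.tan(slope_rad))   # compound topographic (wetness) index

# In TOPMODEL-type wetland schemes, grid cells with CTI above a (calibrated) threshold
# are treated as saturated/inundated for a given mean water-table depth.
threshold = np.percentile(cti, 90)
inundated_fraction = (cti > threshold).mean()
print(f"inundated fraction at this illustrative threshold: {inundated_fraction:.2f}")
```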

  16. Simulation data for an estimation of the maximum theoretical value and confidence interval for the correlation coefficient.

    PubMed

    Rocco, Paolo; Cilurzo, Francesco; Minghetti, Paola; Vistoli, Giulio; Pedretti, Alessandro

    2017-10-01

    The data presented in this article are related to the article titled "Molecular Dynamics as a tool for in silico screening of skin permeability" (Rocco et al., 2017) [1]. Knowledge of the confidence interval and maximum theoretical value of the correlation coefficient r can prove useful to estimate the reliability of developed predictive models, in particular when there is great variability in compiled experimental datasets. In this Data in Brief article, data from purposely designed numerical simulations are presented to show how much the maximum r value is worsened by increasing the data uncertainty. The corresponding confidence interval of r is determined by using the Fisher r → Z transform.
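    A sketch of the Fisher r → Z transform used above to attach a confidence interval to a correlation coefficient r estimated from n points; the example values at the end are arbitrary.

```python
import numpy as np
from scipy import stats

def correlation_confidence_interval(r, n, confidence=0.95):
    """Confidence interval for a Pearson correlation via the Fisher z transform."""
    z = np.arctanh(r)                      # Fisher transform, z = 0.5 * ln((1 + r) / (1 - r))
    se = 1.0 / np.sqrt(n - 3)              # approximate standard error of z
    z_crit = stats.norm.ppf(0.5 + confidence / 2.0)
    lo, hi = z - z_crit * se, z + z_crit * se
    return np.tanh(lo), np.tanh(hi)        # back-transform to the r scale

# Example: r = 0.80 estimated from 25 experimental points
print(correlation_confidence_interval(0.80, 25))   # roughly (0.59, 0.91)
```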

  17. Bias and robustness of uncertainty components estimates in transient climate projections

    NASA Astrophysics Data System (ADS)

    Hingray, Benoit; Blanchet, Juliette; Jean-Philippe, Vidal

    2016-04-01

    A critical issue in climate change studies is the estimation of uncertainties in projections along with the contribution of the different uncertainty sources, including scenario uncertainty, the different components of model uncertainty and internal variability. Quantifying the different uncertainty sources faces actually different problems. For instance and for the sake of simplicity, an estimate of model uncertainty is classically obtained from the empirical variance of the climate responses obtained for the different modeling chains. These estimates are however biased. Another difficulty arises from the limited number of members that are classically available for most modeling chains. In this case, the climate response of one given chain and the effect of its internal variability may be actually difficult if not impossible to separate. The estimate of scenario uncertainty, model uncertainty and internal variability components are thus likely to be not really robust. We explore the importance of the bias and the robustness of the estimates for two classical Analysis of Variance (ANOVA) approaches: a Single Time approach (STANOVA), based on the only data available for the considered projection lead time and a time series based approach (QEANOVA), which assumes quasi-ergodicity of climate outputs over the whole available climate simulation period (Hingray and Saïd, 2014). We explore both issues for a simple but classical configuration where uncertainties in projections are composed of two single sources: model uncertainty and internal climate variability. The bias in model uncertainty estimates is explored from theoretical expressions of unbiased estimators developed for both ANOVA approaches. The robustness of uncertainty estimates is explored for multiple synthetic ensembles of time series projections generated with MonteCarlo simulations. For both ANOVA approaches, when the empirical variance of climate responses is used to estimate model uncertainty, the bias is always positive. It can be especially high with STANOVA. In the most critical configurations, when the number of members available for each modeling chain is small (< 3) and when internal variability explains most of total uncertainty variance (75% or more), the overestimation is higher than 100% of the true model uncertainty variance. The bias can be considerably reduced with a time series ANOVA approach, owing to the multiple time steps accounted for. The longer the transient time period used for the analysis, the larger the reduction. When a quasi-ergodic ANOVA approach is applied to decadal data for the whole 1980-2100 period, the bias is reduced by a factor 2.5 to 20 depending on the projection lead time. In all cases, the bias is likely to be not negligible for a large number of climate impact studies resulting in a likely large overestimation of the contribution of model uncertainty to total variance. For both approaches, the robustness of all uncertainty estimates is higher when more members are available, when internal variability is smaller and/or the response-to-uncertainty ratio is higher. QEANOVA estimates are much more robust than STANOVA ones: QEANOVA simulated confidence intervals are roughly 3 to 5 times smaller than STANOVA ones. Excepted for STANOVA when less than 3 members is available, the robustness is rather high for total uncertainty and moderate for internal variability estimates. 
For model uncertainty or response-to-uncertainty ratio estimates, the robustness is conversely low for QEANOVA and very low for STANOVA. In the most critical configurations (small number of members, large internal variability), large over- or underestimation of uncertainty components is thus very likely. To propose relevant uncertainty analyses and avoid misleading interpretations, estimates of uncertainty components should therefore be bias-corrected and ideally come with estimates of their robustness. This work is part of the COMPLEX Project (European Collaborative Project FP7-ENV-2012 number: 308601; http://www.complex.ac.uk/). References: Hingray, B., Saïd, M., 2014. Partitioning internal variability and model uncertainty components in a multimodel multireplicate ensemble of climate projections. J. Climate. doi:10.1175/JCLI-D-13-00629.1. Hingray, B., Blanchet, J. (in revision). Unbiased estimators for uncertainty components in transient climate projections. J. Climate. Hingray, B., Blanchet, J., Vidal, J.P. (in revision). Robustness of uncertainty components estimates in climate projections. J. Climate.

  18. Estimation of multiple sound sources with data and model uncertainties using the EM and evidential EM algorithms

    NASA Astrophysics Data System (ADS)

    Wang, Xun; Quost, Benjamin; Chazot, Jean-Daniel; Antoni, Jérôme

    2016-01-01

    This paper considers the problem of identifying multiple sound sources from acoustical measurements obtained by an array of microphones. The problem is solved via maximum likelihood. In particular, an expectation-maximization (EM) approach is used to estimate the sound source locations and strengths, the pressure measured by a microphone being interpreted as a mixture of latent signals emitted by the sources. This work also considers two kinds of uncertainties pervading the sound propagation and measurement process: uncertain microphone locations and uncertain wavenumber. These uncertainties are transposed to the data in the belief functions framework. Then, the source locations and strengths can be estimated using a variant of the EM algorithm, known as the Evidential EM (E2M) algorithm. Eventually, both simulation and real experiments are shown to illustrate the advantage of using the EM in the case without uncertainty and the E2M in the case of uncertain measurement.

  19. Calculation of Confidence Intervals for the Maximum Magnitude of Earthquakes in Different Seismotectonic Zones of Iran

    NASA Astrophysics Data System (ADS)

    Salamat, Mona; Zare, Mehdi; Holschneider, Matthias; Zöller, Gert

    2017-03-01

    The problem of estimating the maximum possible earthquake magnitude m_max has attracted growing attention in recent years. Due to sparse data, the role of uncertainties becomes crucial. In this work, we determine the uncertainties related to the maximum magnitude in terms of confidence intervals. Using an earthquake catalog of Iran, m_max is estimated for different predefined levels of confidence in six seismotectonic zones. Assuming the doubly truncated Gutenberg-Richter distribution as a statistical model for earthquake magnitudes, confidence intervals for the maximum possible magnitude of earthquakes are calculated in each zone. While the lower limit of the confidence interval is the magnitude of the maximum observed event, the upper limit is calculated from the catalog and the statistical model. For this aim, we use the original catalog, to which no declustering method was applied, as well as a declustered version of the catalog. Based on the study by Holschneider et al. (Bull Seismol Soc Am 101(4):1649-1659, 2011), the confidence interval for m_max is frequently unbounded, especially if high levels of confidence are required. In this case, no information is gained from the data. Therefore, we elaborate for which settings bounded confidence intervals are obtained. In this work, Iran is divided into six seismotectonic zones, namely Alborz, Azerbaijan, Zagros, Makran, Kopet Dagh, and Central Iran. Although calculations of the confidence interval in the Central Iran and Zagros seismotectonic zones are relatively acceptable for meaningful levels of confidence, results in Kopet Dagh, Alborz, Azerbaijan and Makran are less promising. The results indicate that estimating m_max from an earthquake catalog alone for reasonable levels of confidence is almost impossible.
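    For reference, the doubly truncated Gutenberg-Richter magnitude distribution assumed above can be written as follows, with b the b value and m_min, m_max the lower and upper truncation magnitudes; confidence statements about m_max then follow from the sampling distribution of the largest observed magnitude under this model.

```latex
F_M(m) \;=\; \frac{1 - 10^{-b\,(m - m_{\min})}}{1 - 10^{-b\,(m_{\max} - m_{\min})}},
\qquad m_{\min} \le m \le m_{\max}
```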

  20. Probabilistic assessment of uncertain adaptive hybrid composites

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Singhal, Surendra N.; Chamis, Christos C.

    1994-01-01

    Adaptive composite structures using actuation materials, such as piezoelectric fibers, were assessed probabilistically utilizing intraply hybrid composite mechanics in conjunction with probabilistic composite structural analysis. Uncertainties associated with the actuation material as well as the uncertainties in the regular (traditional) composite material properties were quantified and considered in the assessment. Static and buckling analyses were performed for rectangular panels with various boundary conditions and different control arrangements. The probability density functions of the structural behavior, such as maximum displacement and critical buckling load, were computationally simulated. The results of the assessment indicate that improved design and reliability can be achieved with actuation material.

  1. Markov Chain Monte Carlo Used in Parameter Inference of Magnetic Resonance Spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hock, Kiel; Earle, Keith

    2016-02-06

    In this paper, we use Boltzmann statistics and the maximum likelihood distribution derived from Bayes’ Theorem to infer parameter values for a Pake Doublet Spectrum, a lineshape of historical significance and contemporary relevance for determining distances between interacting magnetic dipoles. A Metropolis Hastings Markov Chain Monte Carlo algorithm is implemented and designed to find the optimum parameter set and to estimate parameter uncertainties. In conclusion, the posterior distribution allows us to define a metric on parameter space that induces a geometry with negative curvature that affects the parameter uncertainty estimates, particularly for spectra with low signal to noise.
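    A generic Metropolis-Hastings sketch of the sampling scheme described above, applied to a toy two-parameter target rather than the actual Pake doublet likelihood: random-walk proposals, acceptance with probability min(1, ratio of posterior densities), and parameter uncertainties estimated from the retained chain. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

def log_posterior(theta):
    # Toy unnormalized log-posterior (a correlated Gaussian) standing in for the
    # Boltzmann/maximum-likelihood target of the paper.
    a, b = theta
    return -0.5 * (a**2 + b**2 + 1.5 * a * b)

n_steps, step_size = 20000, 0.8
chain = np.empty((n_steps, 2))
theta = np.array([0.0, 0.0])
lp = log_posterior(theta)
accepted = 0
for i in range(n_steps):
    proposal = theta + step_size * rng.normal(size=2)   # random-walk proposal
    lp_prop = log_posterior(proposal)
    if np.log(rng.uniform()) < lp_prop - lp:            # Metropolis acceptance rule
        theta, lp = proposal, lp_prop
        accepted += 1
    chain[i] = theta                                    # record current state either way

burned = chain[n_steps // 4:]                           # discard burn-in
print("acceptance rate:", accepted / n_steps)
print("posterior means:", burned.mean(axis=0))
print("posterior std devs (parameter uncertainties):", burned.std(axis=0))
```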

  2. Probabilistic analysis of bladed turbine disks and the effect of mistuning

    NASA Technical Reports Server (NTRS)

    Shah, A. R.; Nagpal, V. K.; Chamis, Christos C.

    1990-01-01

    Probabilistic assessment of the maximum blade response on a mistuned rotor disk is performed using the computer code NESSUS. The uncertainties in natural frequency, excitation frequency, amplitude of excitation and damping are included to obtain the cumulative distribution function (CDF) of blade responses. Advanced mean value first order analysis is used to compute CDF. The sensitivities of different random variables are identified. Effect of the number of blades on a rotor on mistuning is evaluated. It is shown that the uncertainties associated with the forcing function parameters have significant effect on the response distribution of the bladed rotor.

  3. Probabilistic analysis of bladed turbine disks and the effect of mistuning

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin; Nagpal, V. K.; Chamis, C. C.

    1990-01-01

    Probabilistic assessment of the maximum blade response on a mistuned rotor disk is performed using the computer code NESSUS. The uncertainties in natural frequency, excitation frequency, amplitude of excitation and damping have been included to obtain the cumulative distribution function (CDF) of blade responses. Advanced mean value first order analysis is used to compute CDF. The sensitivities of different random variables are identified. Effect of the number of blades on a rotor on mistuning is evaluated. It is shown that the uncertainties associated with the forcing function parameters have significant effect on the response distribution of the bladed rotor.

  4. Uncertainty and sensitivity analysis of control strategies using the benchmark simulation model No1 (BSM1).

    PubMed

    Flores-Alsina, Xavier; Rodriguez-Roda, Ignasi; Sin, Gürkan; Gernaey, Krist V

    2009-01-01

    The objective of this paper is to perform an uncertainty and sensitivity analysis of the predictions of the Benchmark Simulation Model (BSM) No. 1, when comparing four activated sludge control strategies. The Monte Carlo simulation technique is used to evaluate the uncertainty in the BSM1 predictions, considering the ASM1 bio-kinetic parameters and influent fractions as input uncertainties while the Effluent Quality Index (EQI) and the Operating Cost Index (OCI) are focused on as model outputs. The resulting Monte Carlo simulations are presented using descriptive statistics indicating the degree of uncertainty in the predicted EQI and OCI. Next, the Standard Regression Coefficients (SRC) method is used for sensitivity analysis to identify which input parameters influence the uncertainty in the EQI predictions the most. The results show that control strategies including an ammonium (S(NH)) controller reduce uncertainty in both overall pollution removal and effluent total Kjeldahl nitrogen. Also, control strategies with an external carbon source reduce the effluent nitrate (S(NO)) uncertainty, increasing both their economic cost and variability as a trade-off. Finally, the maximum specific autotrophic growth rate (μA) causes most of the variance in the effluent for all the evaluated control strategies. The influence of denitrification related parameters, e.g. ηg (anoxic growth rate correction factor) and ηh (anoxic hydrolysis rate correction factor), becomes less important when a S(NO) controller manipulating an external carbon source addition is implemented.
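    A minimal sketch of the Monte Carlo plus standardized regression coefficient (SRC) procedure on a toy model (not BSM1): sample uncertain inputs, run the model, regress the standardized output on the standardized inputs, and rank inputs by the magnitude of their SRCs. The parameter names, distributions, and toy response below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n_runs = 1000
mu_A = rng.normal(0.8, 0.08, n_runs)        # hypothetical stand-ins for uncertain
eta_g = rng.normal(0.8, 0.05, n_runs)       # bio-kinetic parameters / influent fractions
inert_frac = rng.normal(0.13, 0.02, n_runs)

X = np.column_stack([mu_A, eta_g, inert_frac])
y = 2.0 / mu_A + 0.5 * eta_g + 5.0 * inert_frac + rng.normal(0, 0.05, n_runs)  # toy "EQI"

# Standardize inputs and output, then fit a linear regression; the coefficients
# are the standardized regression coefficients (SRCs).
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)

for name, coef in zip(["mu_A", "eta_g", "inert_frac"], src):
    print(f"SRC[{name}] = {coef:+.2f}")
# For independent inputs, the sum of squared SRCs approximates the regression R^2;
# a value near 1 supports interpreting the SRCs as sensitivity measures.
print("R^2 (approx.):", float(np.sum(src**2)))
```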

  5. Estimation of sampling error uncertainties in observed surface air temperature change in China

    NASA Astrophysics Data System (ADS)

    Hua, Wei; Shen, Samuel S. P.; Weithmann, Alexander; Wang, Huijun

    2017-08-01

    This study examines the sampling error uncertainties in the monthly surface air temperature (SAT) change in China over recent decades, focusing on the uncertainties of gridded data, national averages, and linear trends. Results indicate that large sampling error variances appear in the station-sparse areas of northern and western China, with maximum values exceeding 2.0 K2, while small sampling error variances are found in the station-dense areas of southern and eastern China, with most grid values being less than 0.05 K2. In general, negative temperature anomalies existed in each month prior to the 1980s, and a warming began thereafter, which accelerated in the early and mid-1990s. An increasing trend in the SAT series was observed for each month of the year, with the largest temperature increase and highest uncertainty of 0.51 ± 0.29 K (10 year)-1 occurring in February and the weakest trend and smallest uncertainty of 0.13 ± 0.07 K (10 year)-1 in August. The sampling error uncertainties in the national average annual mean SAT series are not sufficiently large to alter the conclusion of persistent warming in China. In addition, the sampling error uncertainties in the SAT series differ clearly from those obtained with other uncertainty estimation methods, which is a plausible reason for the inconsistencies between our estimate and other studies during this period.

  6. Estimation of Surface Air Temperature Over Central and Eastern Eurasia from MODIS Land Surface Temperature

    NASA Technical Reports Server (NTRS)

    Shen, Suhung; Leptoukh, Gregory G.

    2011-01-01

    Surface air temperature (T(sub a)) is a critical variable in the energy and water cycle of the Earth-atmosphere system and is a key input element for hydrology and land surface models. This is a preliminary study to evaluate estimation of T(sub a) from satellite remotely sensed land surface temperature (T(sub s)) by using MODIS-Terra data over two Eurasian regions: northern China and the former USSR (fUSSR). High correlations are observed in both regions between station-measured T(sub a) and MODIS T(sub s). The relationships between the maximum T(sub a) and daytime T(sub s) depend significantly on land cover types, but the minimum T(sub a) and nighttime T(sub s) have little dependence on land cover type. The largest difference between maximum T(sub a) and daytime T(sub s) appears over the barren and sparsely vegetated areas during summer. Using a linear regression method, the daily maximum T(sub a) was estimated from 1 km resolution MODIS T(sub s) under clear-sky conditions with coefficients calculated based on land cover types, while the minimum T(sub a) was estimated without considering land cover types. The uncertainty, expressed as mean absolute error (MAE), of the estimated maximum T(sub a) varies from 2.4 C over closed shrublands to 3.2 C over grasslands, and the MAE of the estimated minimum T(sub a) is about 3.0 C.
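
    The estimation step amounts to fitting one linear regression per land cover class for the daily maximum T(sub a) and reporting the MAE as the uncertainty. A minimal sketch with synthetic values follows; the land cover classes, slopes, and intercepts below are placeholders, not the coefficients of the cited study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic station samples: (land cover, MODIS daytime Ts, station max Ta), units degC.
covers = np.array(["grassland", "barren", "grassland", "barren"] * 50)
ts_day = rng.uniform(10, 45, covers.size)
ta_max = np.where(covers == "barren", 0.75 * ts_day + 2.0, 0.90 * ts_day + 1.0) \
         + rng.normal(0, 2.5, covers.size)

coeffs = {}
for lc in np.unique(covers):
    m = covers == lc
    slope, intercept = np.polyfit(ts_day[m], ta_max[m], 1)   # per-land-cover regression
    pred = slope * ts_day[m] + intercept
    mae = np.mean(np.abs(pred - ta_max[m]))                  # uncertainty reported as MAE
    coeffs[lc] = (slope, intercept)
    print(f"{lc:10s}: Ta_max = {slope:.2f}*Ts + {intercept:.2f}  (MAE = {mae:.1f} C)")

# Apply to a new clear-sky MODIS Ts value for a grassland pixel.
s, b = coeffs["grassland"]
print("estimated Ta_max:", round(s * 35.0 + b, 1), "C")
```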

  7. Impact of interpatient variability on organ dose estimates according to MIRD schema: Uncertainty and variance-based sensitivity analysis.

    PubMed

    Zvereva, Alexandra; Kamp, Florian; Schlattl, Helmut; Zankl, Maria; Parodi, Katia

    2018-05-17

    Variance-based sensitivity analysis (SA) is described and applied to the radiation dosimetry model proposed by the Committee on Medical Internal Radiation Dose (MIRD) for organ-level absorbed dose calculations in nuclear medicine. The uncertainties in the dose coefficients thus calculated are also evaluated. A Monte Carlo approach was used to compute first-order and total-effect SA indices, which rank the input factors according to their influence on the uncertainty in the output organ doses. These methods were applied to the radiopharmaceutical (S)-4-(3-[18F]fluoropropyl)-L-glutamic acid (18F-FSPG) as an example. Since 18F-FSPG has 11 notable source regions, a 22-dimensional model was considered here, where 11 input factors are the time-integrated activity coefficients (TIACs) in the source regions and 11 input factors correspond to the sets of specific absorbed fractions (SAFs) employed in the dose calculation. The SA was restricted to the foregoing 22 input factors. The distributions of the input factors were built based on TIACs of five individuals to whom the radiopharmaceutical 18F-FSPG was administered and six anatomical models, representing two reference, two overweight, and two slim individuals. The self-absorption SAFs were mass-scaled to correspond to the reference organ masses. The estimated relative uncertainties were in the range 10%-30%, with the minimum and maximum for the absorbed dose coefficients of the urinary bladder wall and heart wall, respectively. The applied global variance-based SA enabled us to identify the input factors that have the highest influence on the uncertainty in the organ doses. With the applied mass-scaling of the self-absorption SAFs, these factors included the TIACs for absorbed dose coefficients in the source regions and the SAFs from blood as a source region for absorbed dose coefficients in highly vascularized target regions. For some combinations of proximal target and source regions, the corresponding cross-fire SAFs were found to have an impact. Global variance-based SA has been applied for the first time to the MIRD schema for internal dose calculation. Our findings suggest that uncertainties in computed organ doses can be substantially reduced by performing an accurate determination of TIACs in the source regions, accompanied by the estimation of individual source region masses along with the usage of an appropriate blood distribution in a patient's body and, in a few cases, the cross-fire SAFs from proximal source regions. © 2018 American Association of Physicists in Medicine.
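
    First-order and total-effect indices of the kind used above can be estimated with the standard Monte Carlo pick-and-freeze scheme. The sketch below uses a three-factor placeholder model rather than the 22-dimensional MIRD model; the input distributions and the toy dose function are assumptions, while the estimators themselves are the commonly used Saltelli/Jansen forms.

```python
import numpy as np

rng = np.random.default_rng(3)
k, N = 3, 20000                        # toy problem: 3 input factors (the study uses 22)

def dose_model(z):
    # Placeholder standing in for the MIRD organ-dose calculation:
    # a sum of TIAC*SAF-style products plus a weak interaction term.
    return 2.0 * z[:, 0] + 1.0 * z[:, 1] + 0.3 * z[:, 0] * z[:, 2]

# Two independent input sample matrices A and B (lognormal inputs as an assumption).
A = rng.lognormal(mean=0.0, sigma=0.3, size=(N, k))
B = rng.lognormal(mean=0.0, sigma=0.3, size=(N, k))
fA, fB = dose_model(A), dose_model(B)
var_y = np.var(np.concatenate([fA, fB]), ddof=1)

for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                # replace column i of A with column i of B
    fABi = dose_model(ABi)
    S1 = np.mean(fB * (fABi - fA)) / var_y          # first-order index (Saltelli 2010)
    ST = 0.5 * np.mean((fA - fABi) ** 2) / var_y    # total-effect index (Jansen)
    print(f"factor {i}:  S1 = {S1:.3f}   ST = {ST:.3f}")
```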

  8. TU-AB-202-03: Prediction of PET Transfer Uncertainty by DIR Error Estimating Software, AUTODIRECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, H; Chen, J; Phillips, J

    2016-06-15

    Purpose: Deformable image registration (DIR) is a powerful tool, but DIR errors can adversely affect its clinical applications. To estimate voxel-specific DIR uncertainty, a software tool, called AUTODIRECT (automated DIR evaluation of confidence tool), has been developed and validated. This work tests the ability of this software to predict uncertainty for the transfer of standard uptake values (SUV) from positron-emission tomography (PET) with DIR. Methods: Virtual phantoms are used for this study. Each phantom has a planning computed tomography (CT) image and a diagnostic PET-CT image set. A deformation was digitally applied to the diagnostic CT to create the planning CT image and establish a known deformation between the images. One lung and three rectum patient datasets were employed to create the virtual phantoms. Both of these sites have difficult deformation scenarios associated with them, which can affect DIR accuracy (lung tissue sliding and changes in rectal filling). The virtual phantoms were created to simulate these scenarios by introducing discontinuities in the deformation field at the lung and rectum borders. The DIR algorithm from the Plastimatch software was applied to these phantoms. The SUV mapping errors from the DIR were then compared with those predicted by AUTODIRECT. Results: The SUV error distributions closely followed the AUTODIRECT-predicted error distribution for the 4 test cases. The minimum and maximum PET SUVs were produced by AUTODIRECT at the 95% confidence interval before applying gradient-based SUV segmentation to each of these volumes. Notably, 93.5% of the target volume warped by the true deformation was included within the AUTODIRECT-predicted maximum SUV volume after the segmentation, while 78.9% of the target volume was within the target volume warped by Plastimatch. Conclusion: The AUTODIRECT framework is able to predict PET transfer uncertainty caused by DIR, which enables an understanding of the associated target volume uncertainty.

  9. The Sonic Altimeter for Aircraft

    NASA Technical Reports Server (NTRS)

    Draper, C S

    1937-01-01

    Discussed here are results already achieved with sonic altimeters in light of the theoretical possibilities of such instruments. From the information gained in this investigation, a procedure is outlined to determine whether or not a further development program is justified by the value of the sonic altimeter as an aircraft instrument. The information available in the literature is reviewed and condensed into a summary of sonic altimeter developments. Various methods of receiving the echo and timing the interval between the signal and the echo are considered. A theoretical discussion is given of sonic altimeter errors due to uncertainties in timing, variations in sound velocity, aircraft speed, location of the sending and receiving units, and inclinations of the flight path with respect to the ground surface. Plots are included which summarize the results in each case. An analysis is given of the effect of an inclined flight path on the frequency of the echo. A brief study of the acoustical phases of the sonic altimeter problem is carried through. The results of this analysis are used to predict approximately the maximum operating altitudes of a reasonably designed sonic altimeter under very good and very bad conditions. A final comparison is made between the estimated and experimental maximum operating altitudes which shows good agreement where quantitative information is available.

  10. Gully erosion in the Caatinga biome, Brazil: measurement and stochastic modelling

    NASA Astrophysics Data System (ADS)

    Lima Alencar, Pedro Henrique; de Araújo, José Carlos; Nonato Távora Costa, Raimundo

    2017-04-01

    In contrast with inter-rill erosion, which takes a long time to modify the terrain form, gully erosion can quickly and severely change the landscape. In the Brazilian semiarid region, a one-million-km2 area that coincides with the Caatinga biome, inter-rill erosion prevails due to the silty shallow soils. However, gully erosion does occur in the Caatinga, with increasing severity over time. This source of sediment impacts the existing dense network of small dams, generating significant deleterious effects, such as reduced water availability in a drought-prone region. This study focuses on the Madalena basin (124 km2, state of Ceará, Brazil), a land-reform settlement with 20 inhabitants per km2, whose main economic activities are agriculture (especially Zea mays), livestock and fishing. In the catchment area, where there are 12 dams (with storage capacities ranging from 6 × 10^4 to 2 × 10^7 m3), gully erosion has become an issue due to its increasing occurrence. Eight gully-erosion sites have been identified in the basin, but most of them have not yet reached great dimensions (depth and/or width), nor interacted with groundwater, and are therefore classified as ephemeral gullies. We selected the three most relevant sites and measured the topography of the eroded channels, as well as the neighboring terrain relief, using accurate total stations and an unmanned aerial vehicle. The data were processed with software such as DataGeosis (Office 7.5) and Surfer (11.0), providing information on gully erosion in terms of (μ ± σ): projection area (317±165 m2), eroded mass (61±36 Mg) and volume (42±25 m3), length (38±6 m), maximum depth (0.58±0.13 m) and maximum width (6.00±2.35 m). The measured data are then compared with those provided by the Foster and Lane model (1986). The model generated results with considerable scatter, possibly due to uncertainties in the field parameters, which are neglected in the deterministic approach of most physically based models. We propose that the gully-erosion modelling approach consider the uncertainties of its main parameters/variables (e.g., soil density, soil grain-size distribution and peak discharge) and generate a histogram of responses rather than a single deterministic value. The principle of maximum entropy should be used to derive the probability density functions describing the uncertainty of the parameters and variables.
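
    A sketch of the proposed histogram-of-responses idea is shown below. It propagates uncertain soil density, grain size, and peak discharge through a crude placeholder response function; the function and all distributions are invented for illustration and are not the Foster and Lane (1986) model or the basin's measured values.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10000

# Uncertain inputs (illustrative distributions, not measured values from the basin).
soil_density = rng.normal(1.45, 0.10, n)              # Mg/m3
d50 = rng.lognormal(np.log(0.02), 0.3, n)              # median grain size, mm
peak_q = rng.gamma(4.0, 0.5, n)                        # peak discharge, m3/s

# Placeholder response function standing in for a gully-erosion channel model:
# eroded volume grows with discharge, decreases with soil density and grain size.
eroded_volume = 30.0 * peak_q**0.8 / (soil_density * (1.0 + 10.0 * d50))

# Histogram of responses instead of a single deterministic value.
counts, edges = np.histogram(eroded_volume, bins=20)
print("mean eroded volume [m3]:", round(float(eroded_volume.mean()), 1))
print("5-95% range        [m3]:", np.percentile(eroded_volume, [5, 95]).round(1))
for c, lo, hi in zip(counts, edges[:-1], edges[1:]):
    print(f"{lo:6.1f}-{hi:6.1f} m3 | {'#' * int(50 * c / counts.max())}")
```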

  11. CHURCHILL COUNTY, NEVADA ARSENIC STUDY: WATER CONSUMPTION AND EXPOSURE BIOMARKERS

    EPA Science Inventory

    The US Environmental Protection Agency is required to reevaluate the Maximum Contaminant Level (MCL) for arsenic in 2006. To provide data for reducing uncertainties in assessing health risks associated with exposure to low levels (<200 µg/L) of arsenic, a large scale biomarker st...

  12. Impact of Parameter Uncertainty Assessment of Critical SWAT Output Simulations

    USDA-ARS?s Scientific Manuscript database

    Watershed models are increasingly being utilized to evaluate alternate management scenarios for improving water quality. The concern for using these tools in extensive programs such as the National Total Maximum Daily Load (TMDL) program is that the certainty of model results and efficacy of managem...

  13. Uncertainty estimation of Intensity-Duration-Frequency relationships: A regional analysis

    NASA Astrophysics Data System (ADS)

    Mélèse, Victor; Blanchet, Juliette; Molinié, Gilles

    2018-03-01

    We propose in this article a regional study of uncertainties in IDF curves derived from point-rainfall maxima. We develop two generalized extreme value models based on the simple scaling assumption, first in the frequentist framework and second in the Bayesian framework. Within the frequentist framework, uncertainties are obtained i) from the Gaussian density stemming from the asymptotic normality theorem for the maximum likelihood estimator and ii) with a bootstrap procedure. Within the Bayesian framework, uncertainties are obtained from the posterior densities. We confront these two frameworks on the same database covering a large region of 100,000 km2 in southern France with contrasting rainfall regimes, in order to draw conclusions that are not specific to the data. The two frameworks are applied to 405 hourly stations with data back to the 1980s, for accumulation durations in the range 3 h-120 h. We show that i) the Bayesian framework is more robust than the frequentist one to the starting point of the estimation procedure, ii) the posterior and the bootstrap densities are able to better adjust uncertainty estimation to the data than the Gaussian density, and iii) the bootstrap density gives unreasonable confidence intervals, in particular for return levels associated with large return periods. Our recommendation therefore goes towards the use of the Bayesian framework to compute uncertainty.
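
    For reference, the bootstrap branch of the frequentist framework can be sketched as follows: fit a GEV to annual maxima by maximum likelihood, then resample from the fitted distribution, refit, and take percentiles of the recomputed return level. The data are synthetic and the parameters illustrative; note that scipy's shape parameter c equals minus the usual GEV shape ξ.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(5)

# Synthetic annual rainfall maxima (mm) standing in for one station's series.
data = genextreme.rvs(c=-0.1, loc=60, scale=15, size=40, random_state=rng)

# Maximum likelihood fit (scipy's shape c = -xi in the usual GEV convention).
c_hat, loc_hat, scale_hat = genextreme.fit(data)

def return_level(params, T=100):
    c, loc, scale = params
    return genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)

print("100-year return level (MLE):", round(return_level((c_hat, loc_hat, scale_hat)), 1))

# Parametric bootstrap: resample from the fitted GEV, refit, recompute the return level.
boot = []
for _ in range(500):
    sample = genextreme.rvs(c_hat, loc=loc_hat, scale=scale_hat,
                            size=data.size, random_state=rng)
    boot.append(return_level(genextreme.fit(sample)))
lo, hi = np.percentile(boot, [5, 95])
print(f"90% bootstrap interval: [{lo:.1f}, {hi:.1f}] mm")
```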

  14. Uncertainty analysis of the nonideal competitive adsorption-donnan model: effects of dissolved organic matter variability on predicted metal speciation in soil solution.

    PubMed

    Groenenberg, Jan E; Koopmans, Gerwin F; Comans, Rob N J

    2010-02-15

    Ion binding models such as the nonideal competitive adsorption-Donnan model (NICA-Donnan) and model VI successfully describe laboratory data of proton and metal binding to purified humic substances (HS). In this study, model performance was tested in more complex natural systems. The speciation predicted with the NICA-Donnan model and the associated uncertainty were compared with independent measurements in soil solution extracts, including the free metal ion activity and the fulvic (FA) and humic acid (HA) fractions of dissolved organic matter (DOM). Potentially important sources of uncertainty are the DOM composition and the variation in binding properties of HS. HS fractions of DOM in soil solution extracts varied between 14 and 63% and consisted mainly of FA. Moreover, binding parameters optimized for individual FA samples show substantial variation. Monte Carlo simulations show that uncertainties in predicted metal speciation, for metals with a high affinity for FA (Cu, Pb), are largely due to the natural variation in binding properties (i.e., the affinity) of FA. Predictions for metals with a lower affinity (Cd) are more prone to uncertainties in the FA fraction of DOM and the maximum site density (i.e., the capacity) of the FA. Based on these findings, suggestions are provided to reduce uncertainties in model predictions.

  15. Evaluation of Dose Uncertainty to the Target Associated With Real-Time Tracking Intensity-Modulated Radiation Therapy Using the CyberKnife Synchrony System.

    PubMed

    Iwata, Hiromitsu; Inoue, Mitsuhiro; Shiomi, Hiroya; Murai, Taro; Tatewaki, Koshi; Ohta, Seiji; Okawa, Kohei; Yokota, Naoki; Shibamoto, Yuta

    2016-02-01

    We investigated the dose uncertainty caused by errors in real-time tracking intensity-modulated radiation therapy (IMRT) using the CyberKnife Synchrony Respiratory Tracking System (SRTS). Twenty lung tumors that had been treated with non-IMRT real-time tracking using CyberKnife SRTS were used for this study. After validating the tracking error in each case, we generated 40 IMRT plans using 8 different collimator sizes for the 20 patients. The collimator size was determined for each planning target volume (PTV); the smaller ones were one-half, and the larger ones three-quarters, of the PTV diameter. The planned dose was 45 Gy in 4 fractions prescribed at the 95% volume border of the PTV. Thereafter, the tracking error in each case was substituted into calculation software developed in-house and randomly added to the setting of each beam. The IMRT planning incorporating tracking errors was simulated 1000 times, and various dose data on the clinical target volume (CTV) were compared with the original data. The same simulation was carried out by changing the fraction number from 1 to 6 in each IMRT plan. Finally, a total of 240,000 plans were analyzed. With 4 fractions, the change in the CTV maximum and minimum doses was within 3.0% (median) for each collimator. The change in D99 and D95 was within 2.0%. With decreases in the fraction number, the CTV coverage rate and the minimum dose decreased and varied greatly. The accuracy of real-time tracking IMRT delivered in 4 fractions using CyberKnife SRTS was considered to be clinically acceptable. © The Author(s) 2014.

  16. HYDROSCAPE: A SCAlable and ParallelizablE Rainfall Runoff Model for Hydrological Applications

    NASA Astrophysics Data System (ADS)

    Piccolroaz, S.; Di Lazzaro, M.; Zarlenga, A.; Majone, B.; Bellin, A.; Fiori, A.

    2015-12-01

    In this work we present HYDROSCAPE, an innovative streamflow routing method based on the travel time approach and modeled through a fine-scale geomorphological description of hydrological flow paths. The model is designed to be easily coupled with weather forecast or climate models providing the hydrological forcing, while at the same time preserving the geomorphological dispersion of the river network, which is kept unchanged independently of the grid size of the rainfall input. This makes HYDROSCAPE particularly suitable for multi-scale applications, ranging from medium-size catchments up to the continental scale, and for investigating the effects of extreme rainfall events that require an accurate description of basin response timing. A key feature of the model is its computational efficiency, which allows performing a large number of simulations for sensitivity/uncertainty analyses in a Monte Carlo framework. Further, the model is highly parsimonious, involving the calibration of only three parameters: one defining the residence time of the hillslope response, one for channel velocity, and a multiplicative factor accounting for uncertainties in the identification of the potential maximum soil moisture retention in the SCS-CN method. HYDROSCAPE is designed with a simple and flexible modular structure, which makes it particularly amenable to massive parallelization, customization according to specific user needs and preferences (e.g., the rainfall-runoff model), and continuous development and improvement. Finally, the possibility to specify the desired computational time step and evaluate streamflow at any location in the domain makes HYDROSCAPE an attractive tool for many hydrological applications, and a valuable alternative to more complex and highly parametrized large-scale hydrological models. Together with model development and features, we present an application to the Upper Tiber River basin (Italy), providing a practical example of model performance and characteristics.
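
    The third calibrated parameter mentioned above acts on the SCS-CN potential maximum retention. A minimal sketch of the standard SCS-CN runoff equations with such a multiplicative correction is given below; the curve number, storm depth, and factor values are illustrative assumptions, not the HYDROSCAPE calibration.

```python
def scs_cn_runoff(p_mm: float, cn: float, s_factor: float = 1.0,
                  lambda_ia: float = 0.2) -> float:
    """Direct runoff depth (mm) from the SCS Curve Number method.

    s_factor is a multiplicative correction on the potential maximum retention S,
    analogous to the calibrated factor described for HYDROSCAPE (values here are
    illustrative, not the paper's calibration).
    """
    s = s_factor * (25400.0 / cn - 254.0)   # potential maximum retention, mm
    ia = lambda_ia * s                      # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Example: 80 mm storm over a basin with CN = 75, with and without the correction.
for f in (0.8, 1.0, 1.2):
    print(f"s_factor = {f:.1f}  ->  runoff = {scs_cn_runoff(80.0, 75.0, f):.1f} mm")
```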

  17. Probabilistic tsunami hazard analysis: Multiple sources and global applications

    USGS Publications Warehouse

    Grezio, Anita; Babeyko, Andrey; Baptista, Maria Ana; Behrens, Jörn; Costa, Antonio; Davies, Gareth; Geist, Eric L.; Glimsdal, Sylfest; González, Frank I.; Griffin, Jonathan; Harbitz, Carl B.; LeVeque, Randall J.; Lorito, Stefano; Løvholt, Finn; Omira, Rachid; Mueller, Christof; Paris, Raphaël; Parsons, Thomas E.; Polet, Jascha; Power, William; Selva, Jacopo; Sørensen, Mathilde B.; Thio, Hong Kie

    2017-01-01

    Applying probabilistic methods to infrequent but devastating natural events is intrinsically challenging. For tsunami analyses, a suite of geophysical assessments should in principle be evaluated because of the different causes generating tsunamis (earthquakes, landslides, volcanic activity, meteorological events, and asteroid impacts) with varying mean recurrence rates. Probabilistic Tsunami Hazard Analyses (PTHAs) are conducted in different areas of the world at global, regional, and local scales with the aim of understanding tsunami hazard to inform tsunami risk reduction activities. PTHAs enhance knowledge of the potential tsunamigenic threat by estimating the probability of exceeding specific levels of tsunami intensity metrics (e.g., run-up or maximum inundation heights) within a certain period of time (exposure time) at given locations (target sites); these estimates can be summarized in hazard maps or hazard curves. This discussion presents a broad overview of PTHA, including (i) sources and mechanisms of tsunami generation, emphasizing the variety and complexity of the tsunami sources and their generation mechanisms, (ii) developments in modeling the propagation and impact of tsunami waves, and (iii) statistical procedures for tsunami hazard estimates that include the associated epistemic and aleatoric uncertainties. Key elements in understanding the potential tsunami hazard are discussed, in light of the rapid development of PTHA methods during the last decade and the globally distributed applications, including the importance of considering multiple sources, their relative intensities, probabilities of occurrence, and uncertainties in an integrated and consistent probabilistic framework.
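
    The hazard-curve step can be illustrated with the usual Poisson assumption for exceedances: the probability of at least one event exceeding a given intensity within the exposure time is 1 - exp(-rate × time). The annual rates in the sketch below are hypothetical, not results from any PTHA.

```python
import math

def prob_exceedance(annual_rate: float, exposure_years: float) -> float:
    """P(at least one event exceeding a given intensity during the exposure time),
    assuming exceedances follow a stationary Poisson process."""
    return 1.0 - math.exp(-annual_rate * exposure_years)

# Hypothetical hazard curve: annual rates of exceeding run-up thresholds (m) at one site.
annual_rates = {1.0: 1e-2, 3.0: 2e-3, 5.0: 5e-4}

for runup, rate in annual_rates.items():
    p50 = prob_exceedance(rate, 50.0)
    print(f"run-up > {runup:.0f} m: annual rate {rate:.0e}, "
          f"P(exceedance in 50 yr) = {p50:.3f}")
```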

  18. Probabilistic Tsunami Hazard Analysis: Multiple Sources and Global Applications

    NASA Astrophysics Data System (ADS)

    Grezio, Anita; Babeyko, Andrey; Baptista, Maria Ana; Behrens, Jörn; Costa, Antonio; Davies, Gareth; Geist, Eric L.; Glimsdal, Sylfest; González, Frank I.; Griffin, Jonathan; Harbitz, Carl B.; LeVeque, Randall J.; Lorito, Stefano; Løvholt, Finn; Omira, Rachid; Mueller, Christof; Paris, Raphaël.; Parsons, Tom; Polet, Jascha; Power, William; Selva, Jacopo; Sørensen, Mathilde B.; Thio, Hong Kie

    2017-12-01

    Applying probabilistic methods to infrequent but devastating natural events is intrinsically challenging. For tsunami analyses, a suite of geophysical assessments should in principle be evaluated because of the different causes generating tsunamis (earthquakes, landslides, volcanic activity, meteorological events, and asteroid impacts) with varying mean recurrence rates. Probabilistic Tsunami Hazard Analyses (PTHAs) are conducted in different areas of the world at global, regional, and local scales with the aim of understanding tsunami hazard to inform tsunami risk reduction activities. PTHAs enhance knowledge of the potential tsunamigenic threat by estimating the probability of exceeding specific levels of tsunami intensity metrics (e.g., run-up or maximum inundation heights) within a certain period of time (exposure time) at given locations (target sites); these estimates can be summarized in hazard maps or hazard curves. This discussion presents a broad overview of PTHA, including (i) sources and mechanisms of tsunami generation, emphasizing the variety and complexity of the tsunami sources and their generation mechanisms, (ii) developments in modeling the propagation and impact of tsunami waves, and (iii) statistical procedures for tsunami hazard estimates that include the associated epistemic and aleatoric uncertainties. Key elements in understanding the potential tsunami hazard are discussed, in light of the rapid development of PTHA methods during the last decade and the globally distributed applications, including the importance of considering multiple sources, their relative intensities, probabilities of occurrence, and uncertainties in an integrated and consistent probabilistic framework.

  19. Observed increase in extreme daily rainfall in the French Mediterranean

    NASA Astrophysics Data System (ADS)

    Ribes, Aurélien; Thao, Soulivanh; Vautard, Robert; Dubuisson, Brigitte; Somot, Samuel; Colin, Jeanne; Planton, Serge; Soubeyroux, Jean-Michel

    2018-04-01

    We examine long-term trends in the historical record of extreme precipitation events occurring over the French Mediterranean area. Extreme events are considered in terms of their intensity, frequency, extent and precipitated volume. Changes in intensity are analysed via an original statistical approach where the annual maximum rainfall amounts observed at each measurement station are aggregated into a univariate time-series according to their dependence. The mean intensity increase is significant and estimated at +22% (+7 to +39% at the 90% confidence level) over the 1961-2015 period. Given the observed warming over the considered area, this increase is consistent with a rate of about one to three times that implied by the Clausius-Clapeyron relationship. Changes in frequency and other spatial features are investigated through a Generalised Linear Model. Changes in frequency for events exceeding high thresholds (about 200 mm in 1 day) are found to be significant, typically near a doubling of the frequency, but with large uncertainties in this change ratio. The area affected by severe events and the water volume precipitated during those events also exhibit significant trends, with an increase by a factor of about 4 for a 200 mm threshold, again with large uncertainties. All diagnoses consistently point toward an intensification of the most extreme events over the last decades. We argue that it is difficult to explain the diagnosed trends without invoking the human influence on climate.

  20. On Fitting a Multivariate Two-Part Latent Growth Model

    PubMed Central

    Xu, Shu; Blozis, Shelley A.; Vandewater, Elizabeth A.

    2017-01-01

    A 2-part latent growth model can be used to analyze semicontinuous data to simultaneously study change in the probability that an individual engages in a behavior, and if engaged, change in the behavior. This article uses a Monte Carlo (MC) integration algorithm to study the interrelationships between the growth factors of 2 variables measured longitudinally where each variable can follow a 2-part latent growth model. A SAS macro implementing Mplus is developed to estimate the model to take into account the sampling uncertainty of this simulation-based computational approach. A sample of time-use data is used to show how maximum likelihood estimates can be obtained using a rectangular numerical integration method and an MC integration method. PMID:29333054

  1. On a full Bayesian inference for force reconstruction problems

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2018-05-01

    In a previous paper, the authors introduced a flexible methodology for reconstructing mechanical sources in the frequency domain from prior local information on both their nature and location over a linear and time-invariant structure. The proposed approach was derived from Bayesian statistics, because of its ability to mathematically account for the experimenter's prior knowledge. However, since only the Maximum a Posteriori estimate was computed, the posterior uncertainty about the regularized solution given the measured vibration field, the mechanical model and the regularization parameter was not assessed. To address this question, this paper fully exploits the Bayesian framework to provide, from a Markov Chain Monte Carlo algorithm, credible intervals and other statistical measures (mean, median, mode) for all the parameters of the force reconstruction problem.

  2. Performance of Trajectory Models with Wind Uncertainty

    NASA Technical Reports Server (NTRS)

    Lee, Alan G.; Weygandt, Stephen S.; Schwartz, Barry; Murphy, James R.

    2009-01-01

    Typical aircraft trajectory predictors use wind forecasts but do not account for the forecast uncertainty. A method for generating estimates of wind prediction uncertainty is described and its effect on aircraft trajectory prediction uncertainty is investigated. The procedure for estimating the wind prediction uncertainty relies on a time-lagged ensemble of weather model forecasts from the hourly updated Rapid Update Cycle (RUC) weather prediction system. Forecast uncertainty is estimated using measures of the spread amongst various RUC time-lagged ensemble forecasts. This proof-of-concept study illustrates the estimated uncertainty and the actual wind errors, and documents the validity of the assumed relationship between ensemble spread and forecast accuracy. Aircraft trajectory predictions are made using RUC winds with provision for the estimated uncertainty. Results for a set of simulated flights indicate this simple approach effectively translates the wind uncertainty estimate into an aircraft trajectory uncertainty. A key strength of the method is the ability to relate uncertainty to specific weather phenomena (contained in the various ensemble members), allowing identification of regional variations in uncertainty.
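
    The core of the approach is to use the spread among time-lagged forecasts as the uncertainty estimate and to check that the spread tracks the actual error. The sketch below does this on a synthetic wind field; the grid, member count, and error structure are invented for illustration, not RUC output.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic u-wind forecasts (m/s) valid at the same time from five successive model
# cycles (a time-lagged ensemble) on a small grid; all values are illustrative only.
n_members, ny, nx = 5, 40, 40
truth = 10.0 + 3.0 * rng.standard_normal((ny, nx))
err_scale = 1.0 + 2.0 * rng.random((ny, nx))          # spatially varying forecast error
members = truth + rng.normal(0.0, 1.0, (n_members, ny, nx)) * err_scale

ens_mean = members.mean(axis=0)
ens_spread = members.std(axis=0, ddof=1)              # per-gridpoint uncertainty estimate

# Check the assumed spread-error relationship: spread should track the actual error.
actual_err = np.abs(ens_mean - truth)
corr = np.corrcoef(ens_spread.ravel(), actual_err.ravel())[0, 1]
print("domain-mean spread [m/s]   :", round(float(ens_spread.mean()), 2))
print("spread-vs-error correlation:", round(float(corr), 2))
```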

  3. Project Scheduling Based on Risk of Gas Transmission Pipe

    NASA Astrophysics Data System (ADS)

    Silvianita; Nurbaity, A.; Mulyadi, Y.; Suntoyo; Chamelia, D. M.

    2018-03-01

    The planning of a project has a time limit: the project must be completed before or exactly at a predetermined time. Thus, project planning requires scheduling management, which is useful for completing a project with maximum results while considering the constraints that will exist. Scheduling management is undertaken to deal with uncertainties and the negative impacts of time and cost in project completion. This paper discusses scheduling management for the Gresik-Semarang gas transmission pipeline project to determine which scheduling plan is most effective given its risk value. Scheduling management in this paper is assisted by Microsoft Project software to find the critical path in the existing project scheduling data. The critical path is the longest path through the schedule, and its length determines the shortest possible completion time. A critical path with a completion time of 152 days is found for the project schedule. Furthermore, risk is calculated using the House of Risk (HOR) method, and the critical path is found to account for 40.98 percent of all causes of the risk events that may occur.
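
    For readers unfamiliar with how a tool like Microsoft Project finds the critical path, the sketch below runs the standard Critical Path Method (forward and backward pass, zero-slack activities) on a small hypothetical activity network; the activity names and durations are made up and do not correspond to the Gresik-Semarang schedule.

```python
# Critical Path Method (CPM) on a small hypothetical activity network.
durations = {"A": 10, "B": 25, "C": 40, "D": 30, "E": 47}          # days (illustrative)
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"], "E": ["D"]}
order = ["A", "B", "C", "D", "E"]                                   # topological order

# Forward pass: earliest start/finish times.
es, ef = {}, {}
for a in order:
    es[a] = max((ef[p] for p in predecessors[a]), default=0)
    ef[a] = es[a] + durations[a]
project_duration = max(ef.values())

# Backward pass: latest start/finish times and slack.
ls, lf = {}, {}
for a in reversed(order):
    successors = [s for s in order if a in predecessors[s]]
    lf[a] = min((ls[s] for s in successors), default=project_duration)
    ls[a] = lf[a] - durations[a]

critical = [a for a in order if ls[a] == es[a]]                     # zero-slack activities
print("project duration:", project_duration, "days")
print("critical path   :", " -> ".join(critical))
```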

  4. Orbits of Selected Globular Clusters in the Galactic Bulge

    NASA Astrophysics Data System (ADS)

    Pérez-Villegas, A.; Rossi, L.; Ortolani, S.; Casotto, S.; Barbuy, B.; Bica, E.

    2018-05-01

    We present an orbit analysis for a sample of eight inner bulge globular clusters, together with one reference halo object. We used proper motion values derived from long time-base CCD data. Orbits are integrated in both an axisymmetric model and a model including the Galactic bar potential. The inclusion of the bar proved to be essential for the description of the dynamical behaviour of the clusters. We use a Monte Carlo scheme to construct the initial conditions for each cluster, taking into account the uncertainties in the kinematical data and distances. The sample clusters typically show maximum heights above the Galactic plane below 1.5 kpc, and develop rather eccentric orbits. Seven of the bulge sample clusters share the orbital properties of the bar/bulge, having perigalactic and apogalactic distances, and maximum vertical excursions from the Galactic plane, inside the bar region. NGC 6540 instead shows a completely different orbital behaviour, having the dynamical signature of the thick disc. Both prograde and prograde-retrograde orbits with respect to the direction of Galactic rotation were revealed, which might characterise chaotic behaviour.

  5. Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Cheong, R. Y.; Gabda, D.

    2017-09-01

    Analysis of flood trends is vital since flooding threatens human living in financial, environmental and security terms. Annual maximum river flow data for Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research showed that MLE provides unstable results, especially for small sample sizes. In this study, we used Bayesian Markov Chain Monte Carlo (MCMC) based on the Metropolis-Hastings algorithm to estimate the GEV parameters. Bayesian MCMC is a statistical inference method that estimates the parameters using the posterior distribution based on Bayes’ theorem. The Metropolis-Hastings algorithm is used to overcome the high-dimensional state space faced by direct Monte Carlo methods. This approach also accounts for more of the uncertainty in parameter estimation and therefore yields better predictions of maximum river flow in Sabah.
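
    A minimal sketch of the GEV log-posterior that such a Metropolis-Hastings sampler (like the one sketched after the first record in this section) would evaluate is given below; the priors, the log-scale parametrisation, and the synthetic flow data are illustrative assumptions, not the choices of the cited study.

```python
import numpy as np
from scipy.stats import genextreme, norm

def log_posterior(theta, data):
    """Log posterior for GEV(mu, sigma, xi) under weakly informative priors.

    theta = (mu, log_sigma, xi); sigma is sampled on the log scale so a
    random-walk proposal never produces a negative scale. Priors below are
    illustrative choices only.
    """
    mu, log_sigma, xi = theta
    sigma = np.exp(log_sigma)
    # scipy's shape parameter c equals -xi in the usual GEV convention.
    loglik = np.sum(genextreme.logpdf(data, c=-xi, loc=mu, scale=sigma))
    if not np.isfinite(loglik):
        return -np.inf
    logprior = (norm.logpdf(mu, 100, 100) +      # vague prior on location
                norm.logpdf(log_sigma, 3, 2) +   # vague prior on log-scale
                norm.logpdf(xi, 0, 0.3))         # shrinks the shape toward 0
    return loglik + logprior

# Example evaluation on synthetic annual maximum flows (m3/s).
rng = np.random.default_rng(7)
flows = genextreme.rvs(c=-0.1, loc=120, scale=30, size=35, random_state=rng)
print(log_posterior((120.0, np.log(30.0), 0.1), flows))
```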

  6. Quantifying the uncertainty in heritability.

    PubMed

    Furlotte, Nicholas A; Heckerman, David; Lippert, Christoph

    2014-05-01

    The use of mixed models to determine narrow-sense heritability and related quantities such as SNP heritability has received much recent attention. Less attention has been paid to the inherent variability in these estimates. One approach for quantifying variability in estimates of heritability is a frequentist approach, in which heritability is estimated using maximum likelihood and its variance is quantified through an asymptotic normal approximation. An alternative approach is to quantify the uncertainty in heritability through its Bayesian posterior distribution. In this paper, we develop the latter approach, make it computationally efficient and compare it to the frequentist approach. We show theoretically that, for a sufficiently large sample size and intermediate values of heritability, the two approaches provide similar results. Using the Atherosclerosis Risk in Communities cohort, we show empirically that the two approaches can give different results and that the variance/uncertainty can remain large.

  7. Sea-level projections representing the deeply uncertain contribution of the West Antarctic ice sheet.

    PubMed

    Bakker, Alexander M R; Wong, Tony E; Ruckert, Kelsey L; Keller, Klaus

    2017-06-20

    There is a growing awareness that uncertainties surrounding future sea-level projections may be much larger than typically perceived. Recently published projections appear widely divergent and highly sensitive to non-trivial model choices. Moreover, the West Antarctic ice sheet (WAIS) may be much less stable than previously believed, enabling a rapid disintegration. Here, we present a set of probabilistic sea-level projections that approximates the deeply uncertain WAIS contributions. The projections aim to inform robust decisions by clarifying the sensitivity to non-trivial or controversial assumptions. We show that the deeply uncertain WAIS contribution can dominate other uncertainties within decades. These deep uncertainties call for the development of robust adaptive strategies. These decision-making needs, in turn, require mission-oriented basic science, for example about potential signposts and the maximum rate of WAIS-induced sea-level changes.

  8. Fast automated analysis of strong gravitational lenses with convolutional neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hezaveh, Yashar D.; Levasseur, Laurence Perreault; Marshall, Philip J.

    Quantifying image distortions caused by strong gravitational lensing—the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures—and estimating the corresponding matter distribution of these structures (the ‘gravitational lens’) has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. We report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the ‘singular isothermal ellipsoid’ density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.

  9. Fast automated analysis of strong gravitational lenses with convolutional neural networks

    DOE PAGES

    Hezaveh, Yashar D.; Levasseur, Laurence Perreault; Marshall, Philip J.

    2017-08-30

    Quantifying image distortions caused by strong gravitational lensing—the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures—and estimating the corresponding matter distribution of these structures (the ‘gravitational lens’) has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. We report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the ‘singular isothermal ellipsoid’ density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.

  10. Inferences on mass composition and tests of hadronic interactions from 0.3 to 100 EeV using the water-Cherenkov detectors of the Pierre Auger Observatory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aab, A.; Abreu, P.; Aglietta, M.

    We present a new method for probing the hadronic interaction models at ultra-high energy and extracting details about mass composition. This is done using the time profiles of the signals recorded with the water-Cherenkov detectors of the Pierre Auger Observatory. The profiles arise from a mix of the muon and electromagnetic components of air showers. Using the risetimes of the recorded signals, we define a new parameter, which we use to compare our observations with predictions from simulations. We find, firstly, inconsistencies between our data and predictions over a greater energy range and with substantially more events than in previous studies. Secondly, by calibrating the new parameter with fluorescence measurements from observations made at the Auger Observatory, we can infer the depth of shower maximum for a sample of over 81,000 events extending from 0.3 EeV to over 100 EeV. Above 30 EeV, the sample is nearly fourteen times larger than currently available from fluorescence measurements and extends the covered energy range by half a decade. The energy dependence of the average depth of shower maximum is compared to simulations and interpreted in terms of the mean of the logarithmic mass. We find good agreement with previous work and extend the measurement of the mean depth of shower maximum to greater energies than before, significantly reducing the statistical uncertainty associated with the inferences about mass composition.

  11. Fast automated analysis of strong gravitational lenses with convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Hezaveh, Yashar D.; Levasseur, Laurence Perreault; Marshall, Philip J.

    2017-08-01

    Quantifying image distortions caused by strong gravitational lensing—the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures—and estimating the corresponding matter distribution of these structures (the ‘gravitational lens’) has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. Here we report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the ‘singular isothermal ellipsoid’ density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.

  12. Inferences on mass composition and tests of hadronic interactions from 0.3 to 100 EeV using the water-Cherenkov detectors of the Pierre Auger Observatory

    DOE PAGES

    Aab, A.; Abreu, P.; Aglietta, M.; ...

    2017-12-08

    We present a new method for probing the hadronic interaction models at ultra-high energy and extracting details about mass composition. This is done using the time profiles of the signals recorded with the water-Cherenkov detectors of the Pierre Auger Observatory. The profiles arise from a mix of the muon and electromagnetic components of air showers. Using the risetimes of the recorded signals, we define a new parameter, which we use to compare our observations with predictions from simulations. We find, firstly, inconsistencies between our data and predictions over a greater energy range and with substantially more events than in previous studies. Secondly, by calibrating the new parameter with fluorescence measurements from observations made at the Auger Observatory, we can infer the depth of shower maximum for a sample of over 81,000 events extending from 0.3 EeV to over 100 EeV. Above 30 EeV, the sample is nearly fourteen times larger than currently available from fluorescence measurements and extends the covered energy range by half a decade. The energy dependence of the average depth of shower maximum is compared to simulations and interpreted in terms of the mean of the logarithmic mass. We find good agreement with previous work and extend the measurement of the mean depth of shower maximum to greater energies than before, significantly reducing the statistical uncertainty associated with the inferences about mass composition.

  13. Assessing patient-centered communication in a family practice setting: how do we measure it, and whose opinion matters?

    PubMed

    Clayton, Margaret F; Latimer, Seth; Dunn, Todd W; Haas, Leonard

    2011-09-01

    This study evaluated variables thought to influence patients' perceptions of patient-centeredness. We also compared results from two coding schemes that purport to evaluate patient-centeredness, the Measure of Patient-Centered Communication (MPCC) and the 4 Habits Coding Scheme (4HCS). 174 videotaped family practice office visits and patient self-report measures were analyzed. Patient factors contributing to positive perceptions of patient-centeredness were successful negotiation of decision-making roles and lower post-visit uncertainty. MPCC coding found visits were on average 59% patient-centered (range 12-85%). 4HCS coding showed an average of 83 points (maximum possible 115). However, patients felt their visits were highly patient-centered (mean 3.7, range 1.9-4; maximum possible 4). There was a weak correlation between the coding schemes, but no association between coding results and patient variables (number of pre-visit concerns, attainment of desired decision-making role, post-visit uncertainty, patients' perception of patient-centeredness). Coder inter-rater reliability was lower than expected; convergent and divergent validity were not supported. The 4HCS and MPCC operationalize patient-centeredness differently, illustrating a lack of conceptual clarity. The patient's perspective is important. Family practice providers can facilitate a more positive patient perception of patient-centeredness by addressing patient concerns to help reduce patient uncertainty, and by negotiating decision-making roles. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  14. The last glaciation of Bear Peninsula, central Amundsen Sea Embayment of Antarctica: Constraints on timing and duration revealed by in situ cosmogenic 14C and 10Be dating

    NASA Astrophysics Data System (ADS)

    Johnson, Joanne S.; Smith, James A.; Schaefer, Joerg M.; Young, Nicolás E.; Goehring, Brent M.; Hillenbrand, Claus-Dieter; Lamp, Jennifer L.; Finkel, Robert C.; Gohl, Karsten

    2017-12-01

    Ice streams in the Pine Island-Thwaites region of West Antarctica currently dominate contributions to sea level rise from the Antarctic ice sheet. Predictions of future ice-mass loss from this area rely on physical models that are validated with geological constraints on past extent, thickness and timing of ice cover. However, terrestrial records of ice sheet history from the region remain sparse, resulting in significant model uncertainties. We report glacial-geological evidence for the duration and timing of the last glaciation of Hunt Bluff, in the central Amundsen Sea Embayment. A multi-nuclide approach was used, measuring cosmogenic 10Be and in situ 14C in bedrock surfaces and a perched erratic cobble. Bedrock 10Be ages (118-144 ka) reflect multiple periods of exposure and ice-cover, not continuous exposure since the last interglacial as had previously been hypothesized. In situ 14C dating suggests that the last glaciation of Hunt Bluff did not start until 21.1 ± 5.8 ka - probably during the Last Glacial Maximum - and finished by 9.6 ± 0.9 ka, at the same time as ice sheet retreat from the continental shelf was complete. Thickening of ice at Hunt Bluff most likely post-dated the maximum extent of grounded ice on the outer continental shelf. Flow re-organisation provides a possible explanation for this, with the date for onset of ice-cover at Hunt Bluff providing a minimum age for the timing of convergence of the Dotson and Getz tributaries to form a single palaeo-ice stream. This is the first time that timing of onset of ice cover has been constrained in the Amundsen Sea Embayment.

  15. Calibration of 3-D wind measurements on a single engine research aircraft

    NASA Astrophysics Data System (ADS)

    Mallaun, C.; Giez, A.; Baumann, R.

    2015-02-01

    An innovative calibration method for the wind speed measurement using a boom-mounted Rosemount model 858 AJ air velocity probe is introduced. The method is demonstrated for a sensor system installed on a medium-size research aircraft which is used for measurements in the atmospheric boundary layer. The method comprises a series of coordinated flight manoeuvres to directly estimate the aerodynamic influences on the probe and to calculate the measurement uncertainties. The introduction of a differential Global Positioning System (DGPS) combined with a high-accuracy Inertial Reference System (IRS) has brought major advances to airborne measurement techniques. The exact determination of geometrical height allows the use of the pressure signal as an independent parameter. Furthermore, the exact height information and the stepwise calibration process lead to maximum accuracy. The results show a measurement uncertainty for the aerodynamic influence of the dynamic and static pressures of 0.1 hPa. The applied parametrisation does not require any height dependencies or time shifts. After extensive flight tests a correction for the flow angles (attack and sideslip angles) was found, which is necessary for a successful wind calculation. A new method is demonstrated to correct for the aerodynamic influence on the sideslip angle. For the 3-D wind vector (with 100 Hz resolution) a novel error propagation scheme is tested, which determines the measurement uncertainties to be 0.3 m s-1 for the horizontal and 0.2 m s-1 for the vertical wind components.

  16. Calibration of 3-D wind measurements on a single-engine research aircraft

    NASA Astrophysics Data System (ADS)

    Mallaun, C.; Giez, A.; Baumann, R.

    2015-08-01

    An innovative calibration method for the wind speed measurement using a boom-mounted Rosemount model 858 AJ air velocity probe is introduced. The method is demonstrated for a sensor system installed on a medium-size research aircraft which is used for measurements in the atmospheric boundary layer. The method comprises a series of coordinated flight manoeuvres to directly estimate the aerodynamic influences on the probe and to calculate the measurement uncertainties. The introduction of a differential Global Positioning System (DGPS) combined with a high-accuracy inertial reference system (IRS) has brought major advances to airborne measurement techniques. The exact determination of geometrical height allows the use of the pressure signal as an independent parameter. Furthermore, the exact height information and the stepwise calibration process lead to maximum accuracy. The results show a measurement uncertainty for the aerodynamic influence of the dynamic and static pressures of 0.1 hPa. The applied parametrisation does not require any height dependencies or time shifts. After extensive flight tests a correction for the flow angles (attack and sideslip angles) was found, which is necessary for a successful wind calculation. A new method is demonstrated to correct for the aerodynamic influence on the sideslip angle. For the three-dimensional (3-D) wind vector (with 100 Hz resolution) a novel error propagation scheme is tested, which determines the measurement uncertainties to be 0.3 m s-1 for the horizontal and 0.2 m s-1 for the vertical wind components.

  17. Monte Carlo calibration of avalanches described as Coulomb fluid flows.

    PubMed

    Ancey, Christophe

    2005-07-15

    The idea that snow avalanches might behave as granular flows, and thus be described as Coulomb fluid flows, came up very early in the scientific study of avalanches, but it is not until recently that field evidence has been provided that demonstrates the reliability of this idea. This paper aims to specify the bulk frictional behaviour of snow avalanches by seeking a universal friction law. Since the bulk friction coefficient cannot be measured directly in the field, it must be calibrated by adjusting the model outputs to closely match the recorded data. Field data are readily available but are of poor quality and accuracy. We used Bayesian inference techniques to specify the model uncertainty relative to the data uncertainty and to robustly and efficiently solve the inverse problem. A sample of 173 events taken from seven paths in the French Alps was used. The first analysis showed that the friction coefficient behaved as a random variable with a smooth and bell-shaped empirical distribution function. Evidence was provided that the friction coefficient varied with the avalanche volume, but any attempt to fit a one-to-one relationship relating friction to volume produced residual errors that could be as large as three times the maximum uncertainty of the field data. A tentative universal friction law is proposed: the friction coefficient is a random variable, the distribution of which can be approximated by a normal distribution with a volume-dependent mean.
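
    The proposed law lends itself to a simple stochastic sketch: draw the bulk friction coefficient from a normal distribution whose mean depends on avalanche volume, then summarize the resulting variability. The slope, intercept, and standard deviation below are invented placeholders, not the values calibrated from the 173-event sample.

```python
import numpy as np

rng = np.random.default_rng(8)

def sample_friction(volume_m3: float, n: int = 10000) -> np.ndarray:
    """Draw bulk Coulomb friction coefficients for a given avalanche volume.

    Mirrors the form of the proposed law (normal distribution with a
    volume-dependent mean); the coefficients are illustrative placeholders.
    """
    mean_mu = 0.45 - 0.03 * np.log10(volume_m3)   # larger avalanches -> lower mean friction
    return rng.normal(mean_mu, 0.08, n)

for vol in (1e3, 1e4, 1e5):
    mu = sample_friction(vol)
    # Probability of an unusually mobile (low-friction, long-runout) event.
    print(f"V = {vol:8.0f} m3: mean mu = {mu.mean():.3f}, "
          f"P(mu < 0.25) = {(mu < 0.25).mean():.3f}")
```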

  18. Transient traceability analysis of land carbon storage dynamics: procedures and its application to two forest ecosystems

    NASA Astrophysics Data System (ADS)

    Jiang, L.; Shi, Z.; Xia, J.; Liang, J.; Lu, X.; Wang, Y.; Luo, Y.

    2017-12-01

    Uptake of anthropogenically emitted carbon (C) dioxide by terrestrial ecosystems is critical for determining future climate. However, Earth system models project large uncertainties in future C storage. To help identify sources of uncertainty in model predictions, this study develops a transient traceability framework to trace the components of C storage dynamics. Transient C storage (X) can be decomposed into two components, the C storage capacity (Xc) and the C storage potential (Xp). Xc is the maximum C amount that an ecosystem can potentially store, and Xp represents the internal capacity of an ecosystem to equilibrate C input and output for a network of pools. Xc is co-determined by net primary production (NPP) and residence time (𝜏N), with the latter being determined by allocation coefficients, transfer coefficients, an environmental scalar, and exit rates. Xp is the product of the redistribution matrix (𝜏ch) and net ecosystem exchange. We applied this framework to two contrasting ecosystems, Duke Forest and Harvard Forest, with an ecosystem model. The framework helps identify the mechanisms underlying the responses of carbon cycling in the two forests to climate change. The temporal trajectories of X are similar between the two ecosystems, but using this framework we found that two different mechanisms lead to this similar trajectory. The framework has the potential to reveal mechanisms behind transient C storage in response to various global change factors. It can also identify sources of uncertainty in predicted transient C storage across models and can therefore be useful for model intercomparison.

  19. Exploiting Measurement Uncertainty Estimation in Evaluation of GOES-R ABI Image Navigation Accuracy Using Image Registration Techniques

    NASA Technical Reports Server (NTRS)

    Haas, Evan; DeLuccia, Frank

    2016-01-01

    In evaluating GOES-R Advanced Baseline Imager (ABI) image navigation quality, upsampled sub-images of ABI images are translated against downsampled Landsat 8 images of localized, high contrast earth scenes to determine the translations in the East-West and North-South directions that provide maximum correlation. The native Landsat resolution is much finer than that of ABI, and Landsat navigation accuracy is much better than ABI required navigation accuracy and expected performance. Therefore, Landsat images are considered to provide ground truth for comparison with ABI images, and the translations of ABI sub-images that produce maximum correlation with Landsat localized images are interpreted as ABI navigation errors. The measured local navigation errors from registration of numerous sub-images with the Landsat images are averaged to provide a statistically reliable measurement of the overall navigation error of the ABI image. The dispersion of the local navigation errors is also of great interest, since ABI navigation requirements are specified as bounds on the 99.73rd percentile of the magnitudes of per pixel navigation errors. However, the measurement uncertainty inherent in the use of image registration techniques tends to broaden the dispersion in measured local navigation errors, masking the true navigation performance of the ABI system. We have devised a novel and simple method for estimating the magnitude of the measurement uncertainty in registration error for any pair of images of the same earth scene. We use these measurement uncertainty estimates to retain only the higher quality measurements of local navigation error for inclusion in the statistics. In so doing, we substantially reduce the dispersion in measured local navigation errors, thereby better approximating the true navigation performance of the ABI system.
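
    The snippet below sketches the generic registration idea: translate one image against a reference and keep the integer shift with maximum normalized correlation. It is an illustration only, with synthetic arrays; it is not the GOES-R ABI/Landsat processing chain and it ignores the upsampling and downsampling steps described above.

    ```python
    """Translation estimation by maximum normalized correlation (illustrative)."""
    import numpy as np

    def best_shift(reference, image, max_shift=5):
        """Return the (dy, dx) shift of `image` that maximizes correlation with `reference`."""
        best, best_corr = (0, 0), -np.inf
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                shifted = np.roll(np.roll(image, dy, axis=0), dx, axis=1)
                a = reference - reference.mean()
                b = shifted - shifted.mean()
                corr = (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())
                if corr > best_corr:
                    best_corr, best = corr, (dy, dx)
        return best, best_corr

    rng = np.random.default_rng(1)
    truth = rng.normal(size=(64, 64))
    moved = np.roll(np.roll(truth, 2, axis=0), -3, axis=1) + 0.1 * rng.normal(size=(64, 64))
    print(best_shift(truth, moved))   # expected shift close to (-2, 3)
    ```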

  20. Uncertainty quantification based on pillars of experiment, theory, and computation. Part I: Data analysis

    NASA Astrophysics Data System (ADS)

    Elishakoff, I.; Sarlin, N.

    2016-06-01

    In this paper we provide a general methodology for the analysis and design of systems involving uncertainties. Available experimental data are enclosed by geometric figures (triangle, rectangle, ellipse, parallelogram, super ellipse) of minimum area. These areas are then inflated by resorting to the Chebyshev inequality in order to take the forecasted data into account. The next step consists of evaluating the response of the system when uncertainties are confined to one of the above five suitably inflated geometric figures. This step involves a combined theoretical and computational analysis. We evaluate the maximum response of the system subjected to variation of uncertain parameters in each hypothesized region. The results of triangular, interval, ellipsoidal, parallelogram, and super ellipsoidal calculi are compared with a view to identifying the region that leads to the minimum of the maximum response. That response is identified as a result of the suggested predictive inference. The methodology thus synthesizes the probabilistic notion with each of the five calculi. Using the term "pillar" in the title was inspired by the News Release (2013) on according the Honda Prize to J. Tinsley Oden, stating, among others, that "Dr. Oden refers to computational science as the "third pillar" of scientific inquiry, standing beside theoretical and experimental science. Computational science serves as a new paradigm for acquiring knowledge and informing decisions important to humankind". Analysis of systems with uncertainties necessitates employment of all three pillars. The analysis is based on the assumption that the five shapes are each different conservative estimates of the true bounding region. The smallest of the maximal displacements in the x and y directions (for a 2D system) therefore provides the closest estimate of the true displacements based on the above assumption.
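
    The sketch below shows one plausible reading of the workflow for the rectangular case: enclose 2-D data in an axis-aligned box, inflate its half-widths about the sample mean by a Chebyshev-type factor k = 1/sqrt(1-p), and search the inflated region for the maximum of a response function. The inflation rule, the response function, and all numbers are assumptions made for illustration, not the paper's exact procedure.

    ```python
    """Chebyshev-inflated bounding rectangle and maximum response (illustrative)."""
    import numpy as np

    rng = np.random.default_rng(2)
    data = rng.normal([1.0, 2.0], [0.2, 0.5], size=(50, 2))   # hypothetical measurements

    p = 0.95                           # target coverage of forecasted data
    k = 1.0 / np.sqrt(1.0 - p)         # Chebyshev-type inflation factor (about 4.47)
    center = data.mean(axis=0)
    half_width = k * (data.max(axis=0) - data.min(axis=0)) / 2.0
    lo, hi = center - half_width, center + half_width

    def response(x, y):
        """Hypothetical system response to the two uncertain parameters."""
        return 3.0 * x + x * y - 0.5 * y ** 2

    # Brute-force grid search for the maximum response over the inflated rectangle.
    X, Y = np.meshgrid(np.linspace(lo[0], hi[0], 201), np.linspace(lo[1], hi[1], 201))
    print("maximum response over the inflated rectangle:", response(X, Y).max())
    ```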

  1. Efficacy of robust optimization plan with partial-arc VMAT for photon volumetric-modulated arc therapy: A phantom study.

    PubMed

    Miura, Hideharu; Ozawa, Shuichi; Nagata, Yasushi

    2017-09-01

    This study investigated position dependence in planning target volume (PTV)-based and robust optimization plans using full-arc and partial-arc volumetric modulated arc therapy (VMAT). The gantry angles at the periphery, intermediate, and center clinical target volume (CTV) positions were 181°-180° (full-arc VMAT) and 181°-360° (partial-arc VMAT). A PTV-based optimization plan was defined by a 5 mm margin expansion of the CTV to a PTV volume, on which the dose constraints were applied. The robust optimization plan directly optimized the dose to the CTV under a maximum setup uncertainty of 5 mm. The prescription dose was normalized to the CTV D99% (the minimum relative dose that covers 99% of the volume of the CTV) as the original plan. The isocenter was rigidly shifted at 1 mm intervals in the anterior-posterior (A-P), superior-inferior (S-I), and right-left (R-L) directions from the original position up to the maximum setup uncertainty of 5 mm in the original plan, yielding recalculated dose distributions. It was found that for the intermediate and center positions, the uncertainties in the D99% doses to the CTV for all directions did not significantly differ between the PTV-based and robust optimization plans (P > 0.05). For the periphery position, uncertainties in the D99% doses to the CTV in the R-L direction for the robust optimization plan were found to be lower than those in the PTV-based optimization plan (P < 0.05). Our study demonstrated that the efficacy of a robust optimization plan using partial-arc VMAT depends on the periphery CTV position. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  2. A Stochastic Method to Develop Nutrient TMDLs Using SWAT

    USDA-ARS?s Scientific Manuscript database

    The U.S. EPA’s Total Maximum Daily Load (TMDL) program has encountered hindrances in its implementation partly because of its strong dependence on mathematical models to set limitations on the release of impairing substances. The uncertainty associated with predictions of such models is often not fo...

  3. Combined-probability space and certainty or uncertainty relations for a finite-level quantum system

    NASA Astrophysics Data System (ADS)

    Sehrawat, Arun

    2017-08-01

    The Born rule provides a probability vector (distribution) with a quantum state for a measurement setting. For two settings, we have a pair of vectors from the same quantum state. Each pair forms a combined-probability vector that obeys certain quantum constraints, which are triangle inequalities in our case. Such a restricted set of combined vectors, called the combined-probability space, is presented here for a d -level quantum system (qudit). The combined space is a compact convex subset of a Euclidean space, and all its extreme points come from a family of parametric curves. Considering a suitable concave function on the combined space to estimate the uncertainty, we deliver an uncertainty relation by finding its global minimum on the curves for a qudit. If one chooses an appropriate concave (or convex) function, then there is no need to search for the absolute minimum (maximum) over the whole space; it will be on the parametric curves. So these curves are quite useful for establishing an uncertainty (or a certainty) relation for a general pair of settings. We also demonstrate that many known tight certainty or uncertainty relations for a qubit can be obtained with the triangle inequalities.

  4. Development of probabilistic emission inventories of air toxics for Jacksonville, Florida, USA.

    PubMed

    Zhao, Yuchao; Frey, H Christopher

    2004-11-01

    Probabilistic emission inventories were developed for 1,3-butadiene, mercury (Hg), arsenic (As), benzene, formaldehyde, and lead for Jacksonville, FL. To quantify inter-unit variability in empirical emission factor data, the Maximum Likelihood Estimation (MLE) method or the Method of Matching Moments was used to fit parametric distributions. For data sets that contain nondetected measurements, a method based upon MLE was used for parameter estimation. To quantify the uncertainty in urban air toxic emission factors, parametric bootstrap simulation and empirical bootstrap simulation were applied to uncensored and censored data, respectively. The probabilistic emission inventories were developed based on the product of the uncertainties in the emission factors and in the activity factors. The uncertainties in the urban air toxics emission inventories range from as small as -25 to +30% for Hg to as large as -83 to +243% for As. The key sources of uncertainty in the emission inventory for each toxic are identified based upon sensitivity analysis. Typically, uncertainty in the inventory of a given pollutant can be attributed primarily to a small number of source categories. Priorities for improving the inventories and for refining the probabilistic analysis are discussed.
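
    As a generic illustration of the bootstrap step described above, the sketch below fits a lognormal emission-factor distribution by maximum likelihood, applies a parametric bootstrap to the mean emission factor, and propagates it through an uncertain activity factor. The data, the distributional choice, and the activity uncertainty are assumptions, not the Jacksonville inventory values.

    ```python
    """Parametric bootstrap for an emission inventory term (hypothetical data)."""
    import numpy as np

    rng = np.random.default_rng(3)
    ef_data = rng.lognormal(mean=-2.0, sigma=0.8, size=15)   # hypothetical emission factors

    # Lognormal MLE: mean and (population) std of the log-transformed data.
    mu_hat, sigma_hat = np.log(ef_data).mean(), np.log(ef_data).std(ddof=0)

    n_boot = 5000
    boot_means = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.lognormal(mu_hat, sigma_hat, size=ef_data.size)
        boot_means[b] = resample.mean()                # re-estimated mean emission factor

    activity = rng.normal(1.0e4, 1.0e3, size=n_boot)   # uncertain activity factor (assumed)
    inventory = boot_means * activity                  # emissions = emission factor x activity

    lo, mid, hi = np.percentile(inventory, [2.5, 50, 97.5])
    print(f"inventory uncertainty: {(lo / mid - 1) * 100:+.0f}% to {(hi / mid - 1) * 100:+.0f}%")
    ```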

  5. Buy now, saved later? The critical impact of time-to-pandemic uncertainty on pandemic cost-effectiveness analyses.

    PubMed

    Drake, Tom; Chalabi, Zaid; Coker, Richard

    2015-02-01

    Investment in pandemic preparedness is a long-term gamble, with the return on investment coming at an unknown point in the future. Many countries have chosen to stockpile key resources, and the number of pandemic economic evaluations has risen sharply since 2009. We assess the importance of uncertainty in time-to-pandemic (and associated discounting) in pandemic economic evaluation, a factor frequently neglected in the literature to date. We use a probability tree model and Monte Carlo parameter sampling to consider the cost effectiveness of antiviral stockpiling in Cambodia under parameter uncertainty. Mean elasticity and mutual information (MI) are used to assess the importance of time-to-pandemic compared with other parameters. We also consider the sensitivity to the choice of sampling distribution used to model time-to-pandemic uncertainty. Time-to-pandemic and discount rate are the primary drivers of sensitivity and uncertainty in pandemic cost-effectiveness models. Base case cost effectiveness of antiviral stockpiling ranged between US$112 and US$3599 per DALY averted using historical pandemic intervals for time-to-pandemic. The mean elasticities for time-to-pandemic and discount rate were greater than those of all other parameters. Similarly, the MI scores for time-to-pandemic and discount rate were greater than those of the other parameters. Time-to-pandemic and discount rate were key drivers of uncertainty in cost-effectiveness results regardless of the time-to-pandemic sampling distribution choice. Time-to-pandemic assumptions can "substantially" affect cost-effectiveness results and, in our model, time-to-pandemic is a greater contributor to uncertainty in cost-effectiveness results than any other parameter. We strongly recommend that cost-effectiveness models include probabilistic analysis of time-to-pandemic uncertainty. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine © The Author 2013; all rights reserved.
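
    The toy Monte Carlo below illustrates why time-to-pandemic and the discount rate dominate such models: benefits realized t years in the future are discounted by (1 + r)^-t before entering the cost-effectiveness ratio. The cost, the DALY figure, the candidate intervals, and the discount-rate range are invented for the example and are not the study's inputs.

    ```python
    """Toy cost-per-DALY Monte Carlo with uncertain time-to-pandemic and discount rate."""
    import numpy as np

    rng = np.random.default_rng(4)
    n = 20000
    stockpile_cost = 5.0e6        # up-front stockpile cost, US$ (assumed)
    dalys_averted = 40000.0       # DALYs averted when the pandemic arrives (assumed)

    # Assumed candidate values: rough historical inter-pandemic intervals, in years.
    time_to_pandemic = rng.choice([39.0, 11.0, 41.0], size=n)
    discount_rate = rng.uniform(0.0, 0.06, size=n)     # assumed discount-rate uncertainty

    discounted_dalys = dalys_averted / (1.0 + discount_rate) ** time_to_pandemic
    cost_per_daly = stockpile_cost / discounted_dalys

    print("cost per DALY averted (2.5th/50th/97.5th percentiles):",
          np.percentile(cost_per_daly, [2.5, 50, 97.5]).round(0))
    ```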

  6. A new high-precision 40Ar/39Ar age for the Rochechouart impact structure: At least 5 Ma older than the Triassic-Jurassic boundary

    NASA Astrophysics Data System (ADS)

    Cohen, Benjamin E.; Mark, Darren F.; Lee, Martin R.; Simpson, Sarah L.

    2017-08-01

    The Rochechouart impact structure in south-central France, with a maximum diameter of 40-50 km, has previously been dated to within 1% uncertainty of the Triassic-Jurassic boundary, at which time 30% of global genera became extinct. To evaluate the temporal relationship between the impact and the Triassic-Jurassic boundary at high precision, we have re-examined the structure's age using multicollector ARGUS-V 40Ar/39Ar mass spectrometry. Results from four aliquots of impact melt are highly reproducible, and yield an age of 206.92 ± 0.20/0.32 Ma (2σ, full analytical/external uncertainties). Thus, the Rochechouart impact structure predates the Triassic-Jurassic boundary by 5.6 ± 0.4 Ma and so is not temporally linked to the mass extinction. Rochechouart has formerly been proposed to be part of a multiple impact event, but when compared with new ages from the other purported "paired" structures, the results provide no evidence for synchronous impacts in the Late Triassic. The widespread Central Atlantic Magmatic Province flood basalts remain the most likely cause of the Triassic-Jurassic mass extinction.

  7. Cost-constrained optimal sampling for system identification in pharmacokinetics applications with population priors and nuisance parameters.

    PubMed

    Sorzano, Carlos Oscars S; Pérez-De-La-Cruz Moreno, Maria Angeles; Burguet-Castell, Jordi; Montejo, Consuelo; Ros, Antonio Aguilar

    2015-06-01

    Pharmacokinetics (PK) applications can be seen as a special case of nonlinear, causal systems with memory. There are cases in which prior knowledge exists about the distribution of the system parameters in a population. However, for a specific patient in a clinical setting, we need to determine her system parameters so that the therapy can be personalized. This system identification is often performed by measuring drug concentrations in plasma. The objective of this work is to provide an irregular sampling strategy that minimizes the uncertainty about the system parameters with a fixed number of samples (cost constrained). We use Monte Carlo simulations to estimate the average Fisher information matrix associated with the PK problem, and then estimate the sampling points that minimize the maximum uncertainty associated with the system parameters (a minimax criterion). The minimization is performed using a genetic algorithm. We show that such a sampling scheme can be designed in a way that is adapted to a particular patient, that it can accommodate any dosing regimen, and that it allows flexible therapeutic strategies. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
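
    The sketch below illustrates the design idea with a deliberately simplified setup: a one-compartment oral-absorption model (assumed here for concreteness), a small enumerable set of candidate schedules instead of a genetic algorithm, the Fisher information matrix averaged over draws from a population prior, and the schedule chosen to minimize the largest parameter variance from the inverse information matrix.

    ```python
    """Minimax sampling-schedule selection via Monte Carlo-averaged Fisher information
    (illustrative; simplified model and exhaustive search instead of a genetic algorithm)."""
    import itertools
    import numpy as np

    rng = np.random.default_rng(5)

    def conc(t, ka, ke, V, dose=100.0):
        """One-compartment model with first-order absorption (assumed for the example)."""
        return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

    def fim(times, theta, sigma=0.1):
        """Fisher information for additive Gaussian error, via central differences."""
        eps = 1e-4
        J = np.zeros((times.size, theta.size))
        for j in range(theta.size):
            up, dn = np.array(theta), np.array(theta)
            up[j] += eps
            dn[j] -= eps
            J[:, j] = (conc(times, *up) - conc(times, *dn)) / (2.0 * eps)
        return J.T @ J / sigma**2

    candidate_times = np.array([0.25, 0.5, 1, 2, 4, 6, 8, 12, 24])     # hours
    schedules = list(itertools.combinations(candidate_times, 4))       # cost: 4 samples

    # Population prior on (ka, ke, V): lognormal around typical values (assumed).
    priors = np.exp(rng.normal(np.log([1.0, 0.2, 10.0]), 0.3, size=(200, 3)))

    best = None
    for sched in schedules:
        t = np.array(sched)
        avg_fim = np.mean([fim(t, th) for th in priors], axis=0)
        worst_var = np.diag(np.linalg.pinv(avg_fim)).max()      # minimax criterion
        if best is None or worst_var < best[0]:
            best = (worst_var, sched)
    print("best 4-point schedule (h):", best[1])
    ```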

  8. The Distance to M51

    NASA Astrophysics Data System (ADS)

    McQuinn, Kristen. B. W.; Skillman, Evan D.; Dolphin, Andrew E.; Berg, Danielle; Kennicutt, Robert

    2016-07-01

    Great investments of observing time have been dedicated to the study of nearby spiral galaxies with diverse goals ranging from understanding the star formation process to characterizing their dark matter distributions. Accurate distances are fundamental to interpreting observations of these galaxies, yet many of the best studied nearby galaxies have distances based on methods with relatively large uncertainties. We have started a program to derive accurate distances to these galaxies. Here we measure the distance to M51—the Whirlpool galaxy—from newly obtained Hubble Space Telescope optical imaging using the tip of the red giant branch method. We measure the distance to be 8.58 ± 0.10 Mpc (statistical), corresponding to a distance modulus of 29.67 ± 0.02 mag. Our distance is an improvement over previous results as we use a well-calibrated, stable distance indicator, precision photometry in an optimally selected field of view, and a Bayesian Maximum Likelihood technique that reduces measurement uncertainties. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the Data Archive at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
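
    As a quick consistency check on the figures quoted above, the distance modulus follows from the distance through mu = 5 log10(d / 10 pc).

    ```python
    """Distance to distance-modulus conversion for the quoted value."""
    import math

    d_mpc = 8.58
    d_pc = d_mpc * 1.0e6
    mu = 5.0 * math.log10(d_pc / 10.0)
    print(f"distance modulus for {d_mpc} Mpc: {mu:.2f} mag")   # about 29.67
    ```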

  9. Fragility Analysis of Concrete Gravity Dams

    NASA Astrophysics Data System (ADS)

    Tekie, Paulos B.; Ellingwood, Bruce R.

    2002-09-01

    Concrete gravity dams are an important part of the nation's infrastructure. Many dams have been in service for over 50 years, during which time important advances in the methodologies for evaluation of natural phenomena hazards have caused the design-basis events to be revised upwards, in some cases significantly. Many existing dams fail to meet these revised safety criteria, and structural rehabilitation to meet newly revised criteria may be costly and difficult. A probabilistic safety analysis (PSA) provides a rational safety assessment and decision-making tool for managing the various sources of uncertainty that may impact dam performance. Fragility analysis, which depicts the uncertainty in the safety margin above specified hazard levels, is a fundamental tool in a PSA. This study presents a methodology for developing fragilities of concrete gravity dams to assess their performance against hydrologic and seismic hazards. Models of varying degrees of complexity and sophistication were considered and compared. The methodology is illustrated using the Bluestone Dam on the New River in West Virginia, which was designed in the late 1930s. The hydrologic fragilities showed that the Bluestone Dam is unlikely to become unstable at the revised probable maximum flood (PMF), but it is likely that there will be significant cracking at the heel of the dam. On the other hand, the seismic fragility analysis indicated that sliding is likely if the dam were to be subjected to a maximum credible earthquake (MCE). Moreover, there will likely be tensile cracking at the neck of the dam at this level of seismic excitation. Probabilities of relatively severe limit states appear to be only marginally affected by extremely rare events (e.g. the PMF and MCE). Moreover, the risks posed by extreme floods and earthquakes were not balanced for the Bluestone Dam, with the seismic hazard posing a relatively higher risk.

  10. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    USGS Publications Warehouse

    Curtis, Gary P.; Lu, Dan; Ye, Ming

    2015-01-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the synthetic study and future real-world modeling are discussed.
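
    The sketch below shows the model-averaging arithmetic commonly used with MLBMA: posterior model weights from information-criterion differences, then a weighted prediction mean and a total variance that adds the between-model spread to the within-model variance. The criterion values and predictions are hypothetical, and the specific criterion (KIC versus others) is an assumption.

    ```python
    """Model-averaging weights and predictive moments from criterion differences
    (hypothetical numbers; illustrates the averaging step, not the site models)."""
    import numpy as np

    kic = np.array([412.0, 415.5, 420.1])        # hypothetical information-criterion values
    pred_mean = np.array([1.8, 2.3, 2.9])        # each model's predicted concentration
    pred_var = np.array([0.10, 0.15, 0.25])      # each model's predictive variance

    delta = kic - kic.min()
    weights = np.exp(-0.5 * delta)
    weights /= weights.sum()                     # posterior model probabilities

    bma_mean = np.sum(weights * pred_mean)
    bma_var = np.sum(weights * (pred_var + (pred_mean - bma_mean) ** 2))
    print(weights.round(3), round(bma_mean, 2), round(bma_var, 3))
    ```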

  11. Validation of Globsnow-2 Snow Water Equivalent Over Eastern Canada

    NASA Technical Reports Server (NTRS)

    Larue, Fanny; Royer, Alain; De Seve, Danielle; Langlois, Alexandre; Roy, Alexandre R.; Brucker, Ludovic

    2017-01-01

    In Québec, Eastern Canada, snowmelt runoff contributes more than 30% of the annual energy reserve for hydroelectricity production, and uncertainties in annual maximum snow water equivalent (SWE) over the region are one of the main constraints for improved hydrological forecasting. Current satellite-based methods for mapping SWE over Québec's main hydropower basins do not meet Hydro-Québec operational requirements for SWE accuracies with less than 15% error. This paper assesses the accuracy of the GlobSnow-2 (GS-2) SWE product, which combines microwave satellite data and in situ measurements, for hydrological applications in Québec. GS-2 SWE values for a 30-year period (1980 to 2009) were compared with space- and time-matched values from a comprehensive dataset of in situ SWE measurements (a total of 38,990 observations in Eastern Canada). The root mean square error (RMSE) of the GS-2 SWE product is 94.1 ± 20.3 mm, corresponding to an overall relative percentage error (RPE) of 35.9%. The main sources of uncertainty are wet and deep snow conditions (when SWE is higher than 150 mm), and forest cover type. However, compared to a typical stand-alone brightness temperature channel difference algorithm, the assimilation of surface information in the GS-2 algorithm clearly improves SWE accuracy by reducing the RPE by about 30%. Comparison of trends in annual mean and maximum SWE between surface observations and GS-2 over 1980-2009 showed agreement for increasing trends over southern Québec, but less agreement on the sign and magnitude of trends over northern Québec. Extended at a continental scale, the GS-2 SWE trends highlight a strong regional variability.
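
    For reference, the sketch below shows one way the validation metrics quoted above can be computed from matched pairs of in situ and satellite SWE; the arrays are hypothetical and the exact RPE definition used in the paper may differ.

    ```python
    """RMSE and relative percentage error from matched SWE pairs (hypothetical values)."""
    import numpy as np

    swe_insitu = np.array([120.0, 210.0, 95.0, 300.0, 180.0])   # mm, assumed
    swe_gs2 = np.array([150.0, 160.0, 120.0, 220.0, 200.0])     # mm, assumed

    error = swe_gs2 - swe_insitu
    rmse = np.sqrt(np.mean(error ** 2))
    rpe = 100.0 * rmse / swe_insitu.mean()    # relative error w.r.t. the mean observed SWE
    print(f"RMSE = {rmse:.1f} mm, relative percentage error = {rpe:.1f}%")
    ```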

  12. Soil- and crop-dependent variation in correlation lag between precipitation and agricultural drought indices as predicted by the SWAP model

    NASA Astrophysics Data System (ADS)

    Wright, Azin; Cloke, Hannah; Verhoef, Anne

    2017-04-01

    Droughts have a devastating impact on agriculture and the economy. The risk of more frequent and more severe droughts is increasing due to global warming and certain anthropogenic activities. At the same time, the global population continues to rise and the need for sustainable food production is becoming more and more pressing. In light of this, drought prediction can be of great value in the context of early warning, preparedness and mitigation of drought impacts. Prediction of meteorological drought is associated with uncertainties around precipitation variability. As meteorological drought propagates, it can transform into agricultural drought. Determination of the maximum correlation lag between precipitation and agricultural drought indices can be useful for prediction of agricultural drought. However, the influence of soil and crop type on the lag needs to be considered, which we explored using a 1-D Soil-Vegetation-Atmosphere-Transfer model (SWAP, http://www.swap.alterra.nl/) with the following configurations, all forced with ERA-Interim weather data (1979 to 2014): i) different crop types in the UK; and ii) three generic soil types (clay, loam and sand). A Sobol sensitivity analysis was carried out (perturbing the SWAP model van Genuchten soil hydraulic parameters) to study the effect of soil type uncertainty on the water balance variables. Based on the sensitivity analysis results, a few variations of each soil type were selected. Agricultural drought indices including the Soil Moisture Deficit Index (SMDI) and the Evapotranspiration Deficit Index (ETDI) were calculated. The maximum correlation lag between precipitation and these drought indices was calculated, and analysed in the context of crop and soil model parameters. The findings of this research can be useful to UK farming, by guiding government bodies such as the Environment Agency when issuing drought warnings and implementing drought measures.
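
    The lag analysis itself can be sketched as below: compute the correlation between precipitation and a drought index such as SMDI at increasing lags and keep the lag of maximum correlation. The synthetic series and the built-in two-month lag are placeholders for the SWAP-derived indices.

    ```python
    """Lag of maximum correlation between precipitation and a drought index (synthetic data)."""
    import numpy as np

    rng = np.random.default_rng(6)
    n = 420                                    # 35 years of monthly values
    precip = rng.gamma(2.0, 30.0, size=n)
    # Synthetic drought index lagging precipitation by roughly two months.
    smdi = np.roll(np.convolve(precip - precip.mean(), np.ones(3) / 3, mode="same"), 2)
    smdi += rng.normal(0.0, 5.0, size=n)

    def lagged_corr(x, y, lag):
        """Correlation between x(t) and y(t + lag), lag >= 0."""
        if lag > 0:
            return np.corrcoef(x[:-lag], y[lag:])[0, 1]
        return np.corrcoef(x, y)[0, 1]

    corrs = [lagged_corr(precip, smdi, k) for k in range(13)]
    print("lag of maximum correlation (months):", int(np.argmax(corrs)))
    ```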

  13. Maximum magnitude in the Lower Rhine Graben

    NASA Astrophysics Data System (ADS)

    Vanneste, Kris; Merino, Miguel; Stein, Seth; Vleminckx, Bart; Brooks, Eddie; Camelbeeck, Thierry

    2014-05-01

    Estimating Mmax, the assumed magnitude of the largest future earthquakes expected on a fault or in an area, involves large uncertainties. No theoretical basis exists to infer Mmax because even where we know the long-term rate of motion across a plate boundary fault, or the deformation rate across an intraplate zone, neither predicts how strain will be released. As a result, quite different estimates can be made based on the assumptions used. All one can say with certainty is that Mmax is at least as large as the largest earthquake in the available record. However, because catalogs are often short relative to the average recurrence time of large earthquakes, larger earthquakes than anticipated often occur. Estimating Mmax is especially challenging within plates, where deformation rates are poorly constrained, large earthquakes are rarer and variable in space and time, and often occur on previously unrecognized faults. We explore this issue for the Lower Rhine Graben seismic zone, where the largest known earthquake, the 1756 Düren earthquake, has magnitude 5.7 and should occur on average about every 400 years. However, paleoseismic studies suggest that earthquakes with magnitudes up to 6.7 occurred during the Late Pleistocene and Holocene. What to assume for Mmax is crucial for critical facilities like nuclear power plants that should be designed to withstand the maximum shaking in 10,000 years. Using the observed earthquake frequency-magnitude data, we generate synthetic earthquake histories and sample them over shorter intervals corresponding to the real catalog's completeness. The maximum magnitudes appearing most often in the simulations tend to be those of earthquakes with a mean recurrence time equal to the catalog length. Because catalogs are often short relative to the average recurrence time of large earthquakes, we expect larger earthquakes than observed to date to occur. As a next step, we will compute hazard maps for different return periods based on the synthetic catalogs, in order to determine the influence of underestimating Mmax.
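
    The simulation idea can be sketched as below: draw synthetic catalogs from a Gutenberg-Richter (exponential) magnitude distribution with a Poisson rate, and record the largest magnitude seen in windows the length of the observed catalog. The b-value, rate, and window length are placeholders, not the Lower Rhine Graben values.

    ```python
    """Largest magnitude expected in catalog-length windows of synthetic seismicity
    (hypothetical Gutenberg-Richter parameters)."""
    import numpy as np

    rng = np.random.default_rng(7)
    b_value = 0.9          # assumed Gutenberg-Richter b-value
    rate_m3 = 2.0          # assumed annual rate of events with M >= 3.0
    m_min = 3.0
    catalog_years = 300    # assumed completeness window of the real catalog
    n_windows = 10000

    beta = b_value * np.log(10.0)
    largest = np.full(n_windows, np.nan)
    for i in range(n_windows):
        n_events = rng.poisson(rate_m3 * catalog_years)
        if n_events:
            largest[i] = m_min + rng.exponential(1.0 / beta, size=n_events).max()

    print("median / 95th-percentile largest magnitude per window:",
          np.nanpercentile(largest, [50, 95]).round(2))
    ```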

  14. Cosmogenic Surface-Exposure Dating of Boulders on Last-Glacial and Late-Glacial Moraines, Lago Buenos Aires, Argentina: Interpretive Strategies and Paleoclimate Implications

    NASA Astrophysics Data System (ADS)

    Douglass, D. C.; Singer, B. S.; Kaplan, M. R.; Mickelson, D. M.; Caffee, M.

    2005-12-01

    The most substantial and least quantifiable source of uncertainty in cosmogenic surface-exposure datasets is the variable exposure histories of boulders from the same landform. The development of precise and accurate chronologies requires distinguishing boulders that best reflect the age of the landform from those which are outliers. We use the Mean Square of Weighted Deviates statistic and cumulative frequency plots to identify groups of samples that have statistically similar ages based on the number of samples and the uncertainty associated with the analyses. This group of samples most likely represents the best estimate of the landform age. We use these tools to interpret 49 surface-exposure ages from six last-glacial and late-glacial moraines at Lago Buenos Aires, Argentina (LBA; 71.0°W, 46.5°S). Seven of the forty-nine samples are identified as anomalously young, and are interpreted to have been exhumed after moraine deposition. The remaining samples indicate that glacial advances or still-stands of the ice margin occurred at 22.7±0.9, 21.4±1.9, 19.8±1.1, 17.0±0.8, 15.7±0.6, and 14.4±0.9 ka (±2σ). This maximum ice extent is roughly synchronous with maximum global ice volume, and several of the re-advances are contemporaneous with Heinrich events and other Northern Hemisphere cold periods. The late-glacial readvance at ca. 14.4 ka is contemporaneous with the Antarctic Cold Reversal (ACR), and precedes the Younger Dryas Chronozone (YD). No evidence for a Younger Dryas glacial advance has been found in the Lago Buenos Aires basin. This precise glacial chronology indicates there were significant and important differences in climate across southern South America. The timing of maximum ice extent and onset of deglaciation at LBA occur ~4000 years later than in the Chilean Lake District (CLD; 41°S). Fossil pollen from the CLD area indicates cooler conditions between ca. 14.2 and 11.2 ka, and increased silt in a nearby lake core provides indirect evidence for glacial advances at this time. The onset of this late-glacial cool period precedes the YD, but post-dates the ACR. The LBA glacial record is in better accord with the Strait of Magellan (SM; 52°S) than with the CLD. There, ice reached its maximum around 25 ka, and a significant late-glacial re-advance occurred between ca. 15 and 11.5 ka. Both LBA and the SM have climate records similar to Antarctica, whereas the climate records from the CLD are combinations of Antarctic and Northern Hemisphere signals.

  15. Computation and visualization of uncertainty in surgical navigation.

    PubMed

    Simpson, Amber L; Ma, Burton; Vasarhelyi, Edward M; Borschneck, Dan P; Ellis, Randy E; James Stewart, A

    2014-09-01

    Surgical displays do not show uncertainty information with respect to the position and orientation of instruments. Data is presented as though it were perfect; surgeons unaware of this uncertainty could make critical navigational mistakes. The propagation of uncertainty to the tip of a surgical instrument is described and a novel uncertainty visualization method is proposed. An extensive study with surgeons has examined the effect of uncertainty visualization on surgical performance with pedicle screw insertion, a procedure highly sensitive to uncertain data. It is shown that surgical performance (time to insert screw, degree of breach of pedicle, and rotation error) is not impeded by the additional cognitive burden imposed by uncertainty visualization. Uncertainty can be computed in real time and visualized without adversely affecting surgical performance, and the best method of uncertainty visualization may depend upon the type of navigation display. Copyright © 2013 John Wiley & Sons, Ltd.

  16. SALSA3D: A Tomographic Model of Compressional Wave Slowness in the Earth’s Mantle for Improved Travel-Time Prediction and Travel-Time Prediction Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballard, Sanford; Hipp, James R.; Begnaud, Michael L.

    The task of monitoring the Earth for nuclear explosions relies heavily on seismic data to detect, locate, and characterize suspected nuclear tests. In this study, motivated by the need to locate suspected explosions as accurately and precisely as possible, we developed a tomographic model of the compressional wave slowness in the Earth’s mantle with primary focus on the accuracy and precision of travel-time predictions for P and Pn ray paths through the model. Path-dependent travel-time prediction uncertainties are obtained by computing the full 3D model covariance matrix and then integrating slowness variance and covariance along ray paths from source to receiver. Path-dependent travel-time prediction uncertainties reflect the amount of seismic data that was used in tomography, with very low values for paths represented by abundant data in the tomographic data set and very high values for paths through portions of the model that were poorly sampled by the tomography data set. The pattern of travel-time prediction uncertainty is a direct result of the off-diagonal terms of the model covariance matrix and underscores the importance of incorporating the full model covariance matrix in the determination of travel-time prediction uncertainty. In addition, the computed pattern of uncertainty differs significantly from that of 1D distance-dependent travel-time uncertainties computed using traditional methods, which are only appropriate for use with travel times computed through 1D velocity models.
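
    Written out, the path integration described above takes the schematic form below, where s is slowness, Γ is the source-to-receiver ray path, and Cov is the slowness covariance from the 3D model covariance matrix (notation introduced here only for illustration).

    ```latex
    % Travel time along a ray path and its prediction variance:
    T \;=\; \int_{\Gamma} s(\mathbf{x})\, d\ell ,
    \qquad
    \sigma_T^{2} \;=\; \int_{\Gamma}\!\int_{\Gamma}
       \operatorname{Cov}\!\bigl[s(\mathbf{x}),\, s(\mathbf{x}')\bigr]\, d\ell\, d\ell'
    ```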

  17. SALSA3D: A Tomographic Model of Compressional Wave Slowness in the Earth’s Mantle for Improved Travel-Time Prediction and Travel-Time Prediction Uncertainty

    DOE PAGES

    Ballard, Sanford; Hipp, James R.; Begnaud, Michael L.; ...

    2016-10-11

    The task of monitoring the Earth for nuclear explosions relies heavily on seismic data to detect, locate, and characterize suspected nuclear tests. In this study, motivated by the need to locate suspected explosions as accurately and precisely as possible, we developed a tomographic model of the compressional wave slowness in the Earth’s mantle with primary focus on the accuracy and precision of travel-time predictions for P and Pn ray paths through the model. Path-dependent travel-time prediction uncertainties are obtained by computing the full 3D model covariance matrix and then integrating slowness variance and covariance along ray paths from source to receiver. Path-dependent travel-time prediction uncertainties reflect the amount of seismic data that was used in tomography, with very low values for paths represented by abundant data in the tomographic data set and very high values for paths through portions of the model that were poorly sampled by the tomography data set. The pattern of travel-time prediction uncertainty is a direct result of the off-diagonal terms of the model covariance matrix and underscores the importance of incorporating the full model covariance matrix in the determination of travel-time prediction uncertainty. In addition, the computed pattern of uncertainty differs significantly from that of 1D distance-dependent travel-time uncertainties computed using traditional methods, which are only appropriate for use with travel times computed through 1D velocity models.

  18. A statistical method for estimating rates of soil development and ages of geologic deposits: A design for soil-chronosequence studies

    USGS Publications Warehouse

    Switzer, P.; Harden, J.W.; Mark, R.K.

    1988-01-01

    A statistical method for estimating rates of soil development in a given region based on calibration from a series of dated soils is used to estimate ages of soils in the same region that are not dated directly. The method is designed specifically to account for sampling procedures and uncertainties that are inherent in soil studies. Soil variation and measurement error, uncertainties in calibration dates and their relation to the age of the soil, and the limited number of dated soils are all considered. Maximum likelihood (ML) is employed to estimate a parametric linear calibration curve, relating soil development to time or age on suitably transformed scales. Soil variation on a geomorphic surface of a certain age is characterized by replicate sampling of soils on each surface; such variation is assumed to have a Gaussian distribution. The age of a geomorphic surface is described by older and younger bounds. This technique allows age uncertainty to be characterized by either a Gaussian distribution or by a triangular distribution using minimum, best-estimate, and maximum ages. The calibration curve is taken to be linear after suitable (in certain cases logarithmic) transformations, if required, of the soil parameter and age variables. Soil variability, measurement error, and departures from linearity are described in a combined fashion using Gaussian distributions with variances particular to each sampled geomorphic surface and the number of sample replicates. Uncertainty in age of a geomorphic surface used for calibration is described using three parameters by one of two methods. In the first method, upper and lower ages are specified together with a coverage probability; this specification is converted to a Gaussian distribution with the appropriate mean and variance. In the second method, "absolute" older and younger ages are specified together with a most probable age; this specification is converted to an asymmetric triangular distribution with mode at the most probable age. The statistical variability of the ML-estimated calibration curve is assessed by a Monte Carlo method in which simulated data sets are repeatedly drawn from the distributional specification; calibration parameters are reestimated for each such simulation in order to assess their statistical variability. Several examples are used for illustration. The age of undated soils in a related setting may be estimated from the soil data using the fitted calibration curve. A second simulation to assess age estimate variability is described and applied to the examples. © 1988 International Association for Mathematical Geology.
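
    The second age-specification method lends itself to a small sketch: convert the younger bound, most-probable age, and older bound into an asymmetric triangular distribution and sample it in the Monte Carlo assessment of the calibration. The ages and soil-index values below are invented for illustration.

    ```python
    """Sampling a triangular age distribution for a Monte Carlo calibration check
    (hypothetical ages and soil-development values)."""
    import numpy as np

    rng = np.random.default_rng(8)

    younger, best, older = 8.0, 10.0, 15.0    # ka, assumed bounds for one surface
    age_draws = rng.triangular(younger, best, older, size=10000)

    # Hypothetical soil-development index with Gaussian within-surface variability.
    soil_index = rng.normal(3.2, 0.4, size=10000)

    # Each (age, soil index) pair would feed one re-estimation of the calibration
    # curve when assessing its statistical variability.
    print("mean and 90% range of simulated ages (ka):",
          age_draws.mean().round(2), np.percentile(age_draws, [5, 95]).round(2))
    ```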

  19. Uncertainty in temperature response of current consumption-based emissions estimates

    NASA Astrophysics Data System (ADS)

    Karstensen, J.; Peters, G. P.; Andrew, R. M.

    2014-09-01

    Several studies have connected emissions of greenhouse gases to economic and trade data to quantify the causal chain from consumption to emissions and climate change. These studies usually combine data and models originating from different sources, making it difficult to estimate uncertainties in the end results. We estimate uncertainties in economic data, multi-pollutant emission statistics and metric parameters, and use Monte Carlo analysis to quantify contributions to uncertainty and to determine how uncertainty propagates to estimates of global temperature change from regional and sectoral territorial- and consumption-based emissions for the year 2007. We find that the uncertainties are sensitive to the emission allocations, mix of pollutants included, the metric and its time horizon, and the level of aggregation of the results. Uncertainties in the final results are largely dominated by the climate sensitivity and the parameters associated with the warming effects of CO2. The economic data have a relatively small impact on uncertainty at the global and national level, while much higher uncertainties are found at the sectoral level. Our results suggest that consumption-based national emissions are not significantly more uncertain than the corresponding production based emissions, since the largest uncertainties are due to metric and emissions which affect both perspectives equally. The two perspectives exhibit different sectoral uncertainties, due to changes of pollutant compositions. We find global sectoral consumption uncertainties in the range of ±9-±27% using the global temperature potential with a 50 year time horizon, with metric uncertainties dominating. National level uncertainties are similar in both perspectives due to the dominance of CO2 over other pollutants. The consumption emissions of the top 10 emitting regions have a broad uncertainty range of ±9-±25%, with metric and emissions uncertainties contributing similarly. The Absolute global temperature potential with a 50 year time horizon has much higher uncertainties, with considerable uncertainty overlap for regions and sectors, indicating that the ranking of countries is uncertain.

  20. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: Dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, Sparse tensorization methods[2] utilizing node-nested hierarchies, Sampling methods[4] for high-dimensional random variable spaces.

  1. Consideration of vertical uncertainty in elevation-based sea-level rise assessments: Mobile Bay, Alabama case study

    USGS Publications Warehouse

    Gesch, Dean B.

    2013-01-01

    The accuracy with which coastal topography has been mapped directly affects the reliability and usefulness of elevation-based sea-level rise vulnerability assessments. Recent research has shown that the qualities of the elevation data must be well understood to properly model potential impacts. The cumulative vertical uncertainty has contributions from elevation data error, water level data uncertainties, and vertical datum and transformation uncertainties. The concepts of minimum sea-level rise increment and minimum planning timeline, important parameters for an elevation-based sea-level rise assessment, are used in recognition of the inherent vertical uncertainty of the underlying data. These concepts were applied to conduct a sea-level rise vulnerability assessment of the Mobile Bay, Alabama, region based on high-quality lidar-derived elevation data. The results that detail the area and associated resources (land cover, population, and infrastructure) vulnerable to a 1.18-m sea-level rise by the year 2100 are reported as a range of values (at the 95% confidence level) to account for the vertical uncertainty in the base data. Examination of the tabulated statistics about land cover, population, and infrastructure in the minimum and maximum vulnerable areas shows that these resources are not uniformly distributed throughout the overall vulnerable zone. The methods demonstrated in the Mobile Bay analysis provide an example of how to consider and properly account for vertical uncertainty in elevation-based sea-level rise vulnerability assessments, and the advantages of doing so.
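
    One common way to form such a cumulative vertical uncertainty is to combine independent error sources in quadrature and scale to the 95% confidence level; the sketch below uses that convention with invented component values, which may differ from the paper's exact formulation.

    ```python
    """Cumulative vertical uncertainty and a 95%-confidence sea-level-rise increment
    (hypothetical component uncertainties)."""
    import math

    lidar_rmse = 0.10            # m, elevation data error (assumed)
    datum_uncert = 0.05          # m, vertical datum / transformation uncertainty (assumed)
    water_level_uncert = 0.04    # m, water-level / tidal datum uncertainty (assumed)

    cumulative = math.sqrt(lidar_rmse**2 + datum_uncert**2 + water_level_uncert**2)
    increment_95 = 1.96 * cumulative      # linear error at the 95% confidence level
    print(f"cumulative vertical uncertainty: {cumulative:.3f} m; "
          f"95%-confidence increment: {increment_95:.2f} m")
    ```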

  2. Interval-based reconstruction for uncertainty quantification in PET

    NASA Astrophysics Data System (ADS)

    Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis

    2018-02-01

    A new directed interval-based tomographic reconstruction algorithm, called non-additive interval based expectation maximization (NIBEM), is presented. It uses non-additive modeling of the forward operator that provides intervals instead of single-valued projections. The detailed approach is an extension of the maximum likelihood expectation maximization (ML-EM) algorithm to intervals. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values. After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval valued reconstruction.
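
    For reference, the classical single-valued update that NIBEM generalizes to intervals is the standard ML-EM iteration, written below with a_ij the system matrix, y_i the measured counts, and λ_j the activity in voxel j.

    ```latex
    % Classical ML-EM update (the interval-based NIBEM algorithm extends this form):
    \lambda_j^{(k+1)} \;=\;
      \frac{\lambda_j^{(k)}}{\sum_i a_{ij}}
      \sum_i a_{ij}\,
      \frac{y_i}{\sum_{j'} a_{ij'}\, \lambda_{j'}^{(k)}}
    ```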

  3. Uncertainty relation for non-Hamiltonian quantum systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tarasov, Vasily E.

    2013-01-15

    General forms of uncertainty relations for quantum observables of non-Hamiltonian quantum systems are considered, and special cases of these uncertainty relations are discussed. The uncertainty relations for non-Hamiltonian quantum systems are considered in the Schroedinger-Robertson form, since it allows us to take into account the Lie-Jordan algebra of quantum observables. The time dependence of quantum observables and the properties of this dependence are discussed in the context of these uncertainty relations. We take into account that the time evolution of observables of a non-Hamiltonian quantum system is not an endomorphism with respect to the Lie, Jordan, and associative multiplications.
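
    For reference, the Schroedinger-Robertson form referred to above reads, for observables A and B in its standard (Hamiltonian-system) version:

    ```latex
    % Schroedinger-Robertson uncertainty relation:
    \sigma_A^{2}\,\sigma_B^{2} \;\ge\;
      \Bigl|\tfrac{1}{2}\langle \{A,B\}\rangle - \langle A\rangle\langle B\rangle\Bigr|^{2}
      \;+\;
      \Bigl|\tfrac{1}{2i}\langle [A,B]\rangle\Bigr|^{2}
    ```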

  4. Parameter uncertainty and nonstationarity in regional extreme rainfall frequency analysis in Qu River Basin, East China

    NASA Astrophysics Data System (ADS)

    Zhu, Q.; Xu, Y. P.; Gu, H.

    2014-12-01

    Traditionally, regional frequency analysis methods were developed for stationary environmental conditions. Nevertheless, recent studies have identified significant changes in hydrological records, leading to the 'death' of stationarity. Moreover, uncertainty in hydrological frequency analysis is persistent. This study aims to investigate the impact of one of the most important sources of uncertainty, parameter uncertainty, together with nonstationarity, on design rainfall depth in Qu River Basin, East China. A spatial bootstrap is first proposed to analyze the uncertainty of design rainfall depth estimated by regional frequency analysis based on L-moments and estimated at the at-site scale. Meanwhile, a method combining generalized additive models with a 30-year moving window is employed to analyze the non-stationarity in the extreme rainfall regime. The results show that the uncertainties of design rainfall depth with a 100-year return period under stationary conditions estimated by the regional spatial bootstrap can reach 15.07% and 12.22% with GEV and PE3, respectively. At the at-site scale, the uncertainties can reach 17.18% and 15.44% with GEV and PE3, respectively. Under non-stationary conditions, the uncertainties of maximum rainfall depth (corresponding to design rainfall depth) with 0.01 annual exceedance probability (corresponding to a 100-year return period) are 23.09% and 13.83% with GEV and PE3, respectively. Comparing the 90% confidence intervals, the uncertainty of design rainfall depth resulting from parameter uncertainty is less than that from non-stationary frequency analysis with GEV, but slightly larger with PE3. This study indicates that the spatial bootstrap can be successfully applied to analyze the uncertainty of design rainfall depth on both regional and at-site scales. The non-stationary analysis shows that the differences between non-stationary quantiles and their stationary equivalents are important for decision makers in water resources management and risk management.
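
    A much-simplified, at-site parametric bootstrap of the 100-year quantile is sketched below to illustrate the kind of uncertainty being quantified; the study itself uses a spatial bootstrap with L-moment-based regional frequency analysis, and the synthetic annual maxima here are placeholders.

    ```python
    """At-site parametric bootstrap of a 100-year GEV rainfall quantile (synthetic data)."""
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(9)
    annual_max = genextreme.rvs(-0.1, loc=80, scale=25, size=50, random_state=rng)

    params = genextreme.fit(annual_max)          # (shape, loc, scale) by maximum likelihood
    n_boot = 1000
    q100 = np.empty(n_boot)
    for b in range(n_boot):
        resample = genextreme.rvs(*params, size=annual_max.size, random_state=rng)
        c, loc, scale = genextreme.fit(resample)
        q100[b] = genextreme.isf(0.01, c, loc=loc, scale=scale)   # 100-year quantile

    lo, hi = np.percentile(q100, [5, 95])
    print(f"90% interval for the 100-year rainfall: {lo:.0f}-{hi:.0f} mm")
    ```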

  5. Orientation Uncertainty of Structures Measured in Cored Boreholes: Methodology and Case Study of Swedish Crystalline Rock

    NASA Astrophysics Data System (ADS)

    Stigsson, Martin

    2016-11-01

    Many engineering applications in fractured crystalline rocks use, as their foundation, measured orientations of structures such as rock contacts and fractures, and of lineated objects such as foliation and rock stress, mapped in boreholes. Although these measurements are afflicted with uncertainties, very few attempts to quantify their magnitudes and their effects on the inferred orientations have been reported. Relying only on the specification of tool imprecision may considerably underestimate the actual uncertainty space. The present work identifies nine sources of uncertainty, develops models for inferring their magnitudes, and points out possible implications for the inference of orientation models and thereby effects on downstream models. The uncertainty analysis in this work builds on a unique data set from site investigations performed by the Swedish Nuclear Fuel and Waste Management Co. (SKB). During these investigations, more than 70 boreholes with a maximum depth of 1 km were drilled in crystalline rock, with a cumulative length of more than 34 km including almost 200,000 single fracture intercepts. The work presented here hence relies on fracture orientations; however, the techniques to infer the magnitude of orientation uncertainty may be applied to all types of structures and lineated objects in boreholes. The uncertainties are not solely detrimental, but can be valuable, provided that the reason for their presence is properly understood and the magnitudes correctly inferred. The main findings of this work are as follows: (1) knowledge of the orientation uncertainty is crucial in order to be able to infer a correct orientation model and the parameters coupled to the fracture sets; (2) it is important to perform multiple measurements to be able to infer the actual uncertainty instead of relying on the theoretical uncertainty provided by the manufacturers; (3) it is important to use the most appropriate tool for the prevailing circumstances; and (4) the single most important factor in decreasing the uncertainty space is to avoid drilling steeper than about -80°.

  6. Evaluation of trace analyte identification in complex matrices by low-resolution gas chromatography--Mass spectrometry through signal simulation.

    PubMed

    Bettencourt da Silva, Ricardo J N

    2016-04-01

    The identification of trace levels of compounds in complex matrices by conventional low-resolution gas chromatography hyphenated with mass spectrometry is based on the comparison of retention times and abundance ratios of characteristic mass spectrum fragments between analyte peaks from calibrators and sample peaks. Statistically sound criteria for the comparison of these parameters were developed based on the normal distribution of retention times and the simulation of possibly non-normal distributions of correlated abundance ratios. The confidence level used to set the statistical maximum and minimum limits of the parameters defines the true positive rate of identifications. The false positive rate of identification was estimated from worst-case signal noise models. The estimated true and false positive identification rates from one retention time and two correlated ratios of three fragment abundances were combined using simple Bayes' statistics to estimate the probability that the compound identification is correct, designated the examination uncertainty. Models of the variation of examination uncertainty with analyte quantity allowed the estimation of the Limit of Examination as the lowest quantity that produced "Extremely strong" evidence of the compound's presence. User-friendly MS-Excel files are made available to allow easy application of the developed approach in routine and research laboratories. The developed approach was successfully applied to the identification of chlorpyrifos-methyl and malathion in QuEChERS method extracts of vegetables with high water content, for which the estimated Limits of Examination are 0.14 mg kg⁻¹ and 0.23 mg kg⁻¹, respectively. Copyright © 2015 Elsevier B.V. All rights reserved.
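
    The Bayesian combination step can be sketched as below: per-criterion true and false positive rates are combined, assuming independence, into a posterior probability that the analyte is present. The rates and the prior are invented for illustration and the paper's exact formulation may differ.

    ```python
    """Combining per-criterion true/false positive rates into a posterior probability
    (hypothetical rates and prior)."""
    import numpy as np

    # One retention-time criterion and two abundance-ratio criteria (assumed rates).
    tpr = np.array([0.99, 0.95, 0.93])    # P(criterion met | analyte present)
    fpr = np.array([0.02, 0.05, 0.07])    # P(criterion met | analyte absent)
    prior = 0.5                           # prior probability of presence (assumed)

    like_present = np.prod(tpr)
    like_absent = np.prod(fpr)
    posterior = like_present * prior / (like_present * prior + like_absent * (1.0 - prior))
    print(f"posterior probability of correct identification: {posterior:.5f}")
    ```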

  7. Consequences of Secondary Calibrations on Divergence Time Estimates.

    PubMed

    Schenk, John J

    2016-01-01

    Secondary calibrations (calibrations based on the results of previous molecular dating studies) are commonly applied in divergence time analyses in groups that lack fossil data; however, the consequences of applying secondary calibrations in a relaxed-clock approach are not fully understood. I tested whether applying the posterior estimate from a primary study as a prior distribution in a secondary study results in consistent age and uncertainty estimates. I compared age estimates from simulations with 100 randomly replicated secondary trees. On average, the 95% credible intervals of node ages for secondary estimates were significantly younger and narrower than primary estimates. The primary and secondary age estimates were significantly different in 97% of the replicates after Bonferroni corrections. Greater error in magnitude was associated with deeper than shallower nodes, but the opposite was found when standardized by median node age, and a significant positive relationship was determined between the number of tips/age of secondary trees and the total amount of error. When two secondary calibrated nodes were analyzed, estimates remained significantly different, and although the minimum and median estimates were associated with less error, maximum age estimates and credible interval widths had greater error. The shape of the prior also influenced error, in which applying a normal, rather than uniform, prior distribution resulted in greater error. Secondary calibrations, in summary, lead to a false impression of precision and the distribution of age estimates shift away from those that would be inferred by the primary analysis. These results suggest that secondary calibrations should not be applied as the only source of calibration in divergence time analyses that test time-dependent hypotheses until the additional error associated with secondary calibrations is more properly modeled to take into account increased uncertainty in age estimates.

  8. Intelligent and robust optimization frameworks for smart grids

    NASA Astrophysics Data System (ADS)

    Dhansri, Naren Reddy

    A smart grid implies a cyberspace real-time distributed power control system to optimally deliver electricity based on varying consumer characteristics. Although smart grids solve many contemporary problems, they give rise to new control and optimization problems with the growing role of renewable energy sources such as wind or solar energy. Under the highly dynamic nature of distributed power generation and the varying consumer demand and cost requirements, the total power output of the grid should be controlled such that the load demand is met by giving a higher priority to renewable energy sources. Hence, the power generated from renewable energy sources should be optimized while minimizing the generation from non-renewable energy sources. This research develops a demand-based automatic generation control and optimization framework for real-time smart grid operations by integrating conventional and renewable energy sources under varying consumer demand and cost requirements. Focusing on the renewable energy sources, the intelligent and robust control frameworks optimize the power generation by tracking the consumer demand in a closed-loop control framework, yielding superior economic and ecological benefits, circumventing nonlinear model complexities, and handling uncertainties for superior real-time operation. The proposed intelligent system framework optimizes the smart grid power generation for maximum economic and ecological benefit under an uncertain renewable wind energy source. The numerical results demonstrate that the proposed framework is a viable approach to integrating various energy sources for real-time smart grid implementations. The robust optimization framework results demonstrate the effectiveness of the robust controllers under bounded power plant model uncertainties and exogenous wind input excitation while maximizing economic and ecological performance objectives. The proposed framework therefore offers a new worst-case deterministic optimization algorithm for smart grid automatic generation control.

  9. Chapter two: Phenomenology of tsunamis II: scaling, event statistics, and inter-event triggering

    USGS Publications Warehouse

    Geist, Eric L.

    2012-01-01

    Observations related to tsunami catalogs are reviewed and described in a phenomenological framework. An examination of scaling relationships between earthquake size (as expressed by scalar seismic moment and mean slip) and tsunami size (as expressed by mean and maximum local run-up and maximum far-field amplitude) indicates that scaling is significant at the 95% confidence level, although there is uncertainty in how well earthquake size can predict tsunami size (R2 ~ 0.4-0.6). In examining tsunami event statistics, current methods used to estimate the size distribution of earthquakes and landslides and the inter-event time distribution of earthquakes are first reviewed. These methods are adapted to estimate the size and inter-event distribution of tsunamis at a particular recording station. Using a modified Pareto size distribution, the best-fit power-law exponents of tsunamis recorded at nine Pacific tide-gauge stations exhibit marked variation, in contrast to the approximately constant power-law exponent for inter-plate thrust earthquakes. With regard to the inter-event time distribution, significant temporal clustering of tsunami sources is demonstrated. For tsunami sources occurring in close proximity to other sources in both space and time, a physical triggering mechanism, such as static stress transfer, is a likely cause for the anomalous clustering. Mechanisms of earthquake-to-earthquake and earthquake-to-landslide triggering are reviewed. Finally, a modification of statistical branching models developed for earthquake triggering is introduced to describe triggering among tsunami sources.

  10. Experimental and numerical investigations of air plasmas induced by multi-MeV pulsed X-ray from low to atmospheric pressures

    NASA Astrophysics Data System (ADS)

    Maulois, Mélissa; Ribière, Maxime; Eichwald, Olivier; Yousfi, Mohammed; Pouzalgues, Romain; Garrigues, Alain; Delbos, Christophe; Azaïs, Bruno

    2016-09-01

    This research work is devoted to the experimental and theoretical analysis of air plasmas induced by multi-MeV pulsed X-rays for a large pressure range of humid air background gas varying from 20 mbar to atmospheric pressure. The time evolution of the electron density of the air plasma is determined by electromagnetic wave absorption measurements. The measurements have uncertainties of about ±30%, taking into account the precision of the dose measurement and also the shot-to-shot fluctuations of the generator. The experimental electron density is obtained by comparing the measurements of the transmitted microwave signals to the calculated ones. The calculations need the knowledge of the time evolution of the electron mean energy, which is determined by a chemical kinetic model based on a reaction scheme involving 39 species interacting through 265 reactions. During the X-ray pulse, a good agreement is obtained between the time evolution of the electron density obtained from absorption measurements and calculations based on the kinetic model. The relative deviation on the maximum electron density and the corresponding plasma frequency is always lower than 10%. The maximum electron density varies from 4 × 10^11 to 3.5 × 10^13 cm^-3 between 30 mbar and atmospheric pressure, while the peak of the electron mean energy decreases from 5.64 eV to 4.27 eV in the same pressure range.

  11. Incorporating uncertainty in watershed management decision-making: A mercury TMDL case study

    USGS Publications Warehouse

    Labiosa, W.; Leckie, J.; Shachter, R.; Freyberg, D.; Rytuba, J.; ,

    2005-01-01

    Water quality impairment due to high mercury fish tissue concentrations and high mercury aqueous concentrations is a widespread problem in several sub-watersheds that are major sources of mercury to the San Francisco Bay. Several mercury Total Maximum Daily Load regulations are currently being developed to address this problem. Decisions about control strategies are being made despite very large uncertainties about current mercury loading behavior, relationships between total mercury loading and methyl mercury formation, and relationships between potential controls and mercury fish tissue levels. To deal with the issues of very large uncertainties, data limitations, knowledge gaps, and very limited State agency resources, this work proposes a decision analytical alternative for mercury TMDL decision support. The proposed probabilistic decision model is Bayesian in nature and is fully compatible with a "learning while doing" adaptive management approach. Strategy evaluation, sensitivity analysis, and information collection prioritization are examples of analyses that can be performed using this approach.

  12. A single-loop optimization method for reliability analysis with second order uncertainty

    NASA Astrophysics Data System (ADS)

    Xie, Shaojun; Pan, Baisong; Du, Xiaoping

    2015-08-01

    Reliability analysis may involve random variables and interval variables. In addition, some of the random variables may have interval distribution parameters owing to limited information. This kind of uncertainty is called second order uncertainty. This article develops an efficient reliability method for problems involving the three aforementioned types of uncertain input variables. The analysis produces the maximum and minimum reliability and is computationally demanding because two loops are needed: a reliability analysis loop with respect to random variables and an interval analysis loop for extreme responses with respect to interval variables. The first order reliability method and nonlinear optimization are used for the two loops, respectively. For computational efficiency, the two loops are combined into a single loop by treating the Karush-Kuhn-Tucker (KKT) optimal conditions of the interval analysis as constraints. Three examples are presented to demonstrate the proposed method.
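
    The single-loop KKT formulation itself is not reproduced here, but the first-order reliability step it builds on (locating the most probable failure point in standard normal space) can be sketched in Python with a generic constrained optimizer; the limit-state function below is a hypothetical example rather than one of the article's test problems.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        def limit_state(u):
            """Hypothetical performance function in standard normal space; failure when g(u) <= 0."""
            return 3.0 - u[0] ** 2 - 0.5 * u[1]

        # FORM: the reliability index beta is the distance from the origin to the limit-state surface.
        result = minimize(lambda u: u @ u,                       # squared distance to the origin
                          x0=np.array([0.5, 0.5]),
                          constraints={"type": "eq", "fun": limit_state},
                          method="SLSQP")
        beta = np.sqrt(result.fun)
        print("beta = %.3f, failure probability ~ %.4f" % (beta, norm.cdf(-beta)))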

  13. Quantifying the uncertainty in heritability

    PubMed Central

    Furlotte, Nicholas A; Heckerman, David; Lippert, Christoph

    2014-01-01

    The use of mixed models to determine narrow-sense heritability and related quantities such as SNP heritability has received much recent attention. Less attention has been paid to the inherent variability in these estimates. One approach for quantifying variability in estimates of heritability is a frequentist approach, in which heritability is estimated using maximum likelihood and its variance is quantified through an asymptotic normal approximation. An alternative approach is to quantify the uncertainty in heritability through its Bayesian posterior distribution. In this paper, we develop the latter approach, make it computationally efficient and compare it to the frequentist approach. We show theoretically that, for a sufficiently large sample size and intermediate values of heritability, the two approaches provide similar results. Using the Atherosclerosis Risk in Communities cohort, we show empirically that the two approaches can give different results and that the variance/uncertainty can remain large. PMID:24670270

  14. Uncertainty in aerosol hygroscopicity resulting from semi-volatile organic compounds

    NASA Astrophysics Data System (ADS)

    Goulden, Olivia; Crooks, Matthew; Connolly, Paul

    2018-01-01

    We present a novel method of exploring the effect of uncertainties in aerosol properties on cloud droplet number using existing cloud droplet activation parameterisations. Aerosol properties of a single involatile particle mode are randomly sampled within an uncertainty range and resulting maximum supersaturations and critical diameters calculated using the cloud droplet activation scheme. Hygroscopicity parameters are subsequently derived and the values of the mean and uncertainty are found to be comparable to experimental observations. A recently proposed cloud droplet activation scheme that includes the effects of co-condensation of semi-volatile organic compounds (SVOCs) onto a single lognormal mode of involatile particles is also considered. In addition to the uncertainties associated with the involatile particles, concentrations, volatility distributions and chemical composition of the SVOCs are randomly sampled and hygroscopicity parameters are derived using the cloud droplet activation scheme. The inclusion of SVOCs is found to have a significant effect on the hygroscopicity and contributes a large uncertainty. For non-volatile particles that are effective cloud condensation nuclei, the co-condensation of SVOCs reduces their actual hygroscopicity by approximately 25 %. A new concept of an effective hygroscopicity parameter is introduced that can computationally efficiently simulate the effect of SVOCs on cloud droplet number concentration without direct modelling of the organic compounds. These effective hygroscopicities can be as much as a factor of 2 higher than those of the non-volatile particles onto which the volatile organic compounds condense.
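
    The activation parameterisations themselves are not reproduced here, but the standard κ-Köhler relation that links a hygroscopicity parameter to the critical supersaturation of a single particle can be sketched in Python as below; the dry diameter, temperature, and κ values are illustrative assumptions.

        import numpy as np

        def critical_supersaturation(kappa, dry_diameter, temperature=293.15):
            """Approximate critical supersaturation (%) from kappa-Koehler theory."""
            sigma_w = 0.072      # surface tension of water, N m^-1
            M_w = 0.018          # molar mass of water, kg mol^-1
            rho_w = 1000.0       # density of water, kg m^-3
            R = 8.314            # gas constant, J mol^-1 K^-1
            A = 4.0 * sigma_w * M_w / (R * temperature * rho_w)          # Kelvin term, m
            sc = np.sqrt(4.0 * A**3 / (27.0 * kappa * dry_diameter**3))  # fractional supersaturation
            return 100.0 * sc

        for kappa in (0.1, 0.3, 0.6):        # illustrative hygroscopicities
            print(kappa, critical_supersaturation(kappa, dry_diameter=100e-9))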

  15. Model averaging techniques for quantifying conceptual model uncertainty.

    PubMed

    Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg

    2010-01-01

    In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose, falling into two broad categories: Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992), and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
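
    For the criterion-based techniques, the averaging step reduces to converting information-criterion scores into posterior-like model weights. A minimal Python sketch of that conversion, with made-up scores for three rival conceptualizations, is given below; the same function applies to AIC, BIC, or KIC values.

        import numpy as np

        def ic_weights(ic_values):
            """Turn information-criterion values (AIC, BIC, KIC, ...) into model-averaging weights."""
            ic = np.asarray(ic_values, dtype=float)
            delta = ic - ic.min()                  # differences from the best-scoring model
            w = np.exp(-0.5 * delta)
            return w / w.sum()

        aic = [212.4, 214.9, 219.3]                # hypothetical scores for three conceptualizations
        print(ic_weights(aic))                     # weights used to average the model predictions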

  16. SU-E-T-118: Dose Verification for Accuboost Applicators Using TLD, Ion Chamber and Gafchromic Film Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chisela, W; Yao, R; Dorbu, G

    Purpose: To verify dose delivered with HDR Accuboost applicators using TLD, ion chamber and Gafchromic film measurements and to examine applicator leakage. Methods: A microSelectron HDR unit was used to deliver a dose of 50cGy to the mid-plane of a 62mm thick solid water phantom using dwell times from Monte Carlo pre-calculated nomograms for 60mm Round, 70mm Round and 60mm Skin-Dose Optimized (SDO) applicators, respectively. GafChromic EBT3+ film was embedded in the phantom midplane horizontally to measure dose distribution. Absolute dose was also measured with TLDs and an ADCL-calibrated parallel-plate ion chamber placed in the film plane at field center for each applicator. The film was calibrated using a 6MV x-ray beam. TLDs were calibrated with a Cs-137 source at the UW-Madison calibration laboratory. Radiation leakage through the tungsten alloy shell was measured with a film wrapped around the outside surface of a 60mm Round applicator. Results: Measured maximum doses at field center are consistently lower than predicted, by 5.8% for TLD, 8.8% for ion chamber, and 2.6% for EBT3+ film on average, with measurement uncertainties of 2.2%, 0.3%, and 2.9% for TLD, chamber, and film, respectively. The total standard uncertainties for ion chamber and Gafchromic film measurements are 4.9% and 4.6% respectively[1]. The area defined by the applicator aperture was covered by 80% of the maximum dose for 62mm compression thickness. When 100cGy is delivered to the mid-plane with a 60mm Round applicator, the surface dose ranges from 60cGy to a maximum of 145cGy, which occurs at the source entrance to the applicator. Conclusion: Measured doses by all three techniques are consistently lower than predicted in our measurements. For a compression thickness of 62 mm, the field size defined by the applicator is only covered by 80% of the prescribed dose. Radiation leakage of up to 145cGy was found at the source entrance of the applicators.

  17. SU-F-T-186: A Treatment Planning Study of Normal Tissue Sparing with Robustness Optimized IMPT, 4Pi IMRT, and VMAT for Head and Neck Cases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, J; Li, X; Ding, X

    Purpose: We performed a retrospective dosimetric comparison study between the robustness optimized Intensity Modulated Proton Therapy (RO-IMPT), volumetric-modulated arc therapy (VMAT), and the non-coplanar 4π intensity modulated radiation therapy (IMRT). These methods represent the most advanced radiation treatment methods clinically available. We compare their dosimetric performance for head and neck cancer treatments with special focus on the OAR sparing near the tumor volumes. Methods: A total of 11 head and neck cases, which include 10 recurrent cases and one bilateral case, were selected for the study. Different dose levels were prescribed to the tumor target depending on disease and location. Three treatment plans were created on commercial TPS systems for a novel noncoplanar 4π method (20 beams), VMAT, and the RO-IMPT technique (maximum 4 fields). The maximum patient positioning error was set to 3 mm and the maximum proton range uncertainty was set to 3% for the robustness optimization. Line dose profiles were investigated for OARs close to tumor volumes. Results: All three techniques achieved 98% coverage of the CTV target and most photon plans had hot spots of less than 110%. The RO-IMPT plans show superior tumor dose homogeneity to the 4π and VMAT plans. Although RO-IMPT has greater R50 dose spillage to the surrounding normal tissue than 4π and VMAT, the RO-IMPT plans demonstrate better or comparable OAR (parotid, mandible, carotid, oral cavity, pharynx, etc.) sparing for structures closely abutting tumor targets. Conclusion: The RO-IMPT's ability of OAR sparing is benchmarked against the C-arm linac based non-coplanar 4π technique and the standard VMAT method. RO-IMPT consistently shows better or comparable OAR sparing even for tissue structures closely abutting the treatment target volume. RO-IMPT further reduces treatment uncertainty associated with proton therapy and delivers robust treatment plans to both unilateral and bilateral head and neck cancer patients with desirable treatment time.

  18. Probabilistic tsunami hazard assessment at Seaside, Oregon, for near-and far-field seismic sources

    USGS Publications Warehouse

    Gonzalez, F.I.; Geist, E.L.; Jaffe, B.; Kanoglu, U.; Mofjeld, H.; Synolakis, C.E.; Titov, V.V.; Areas, D.; Bellomo, D.; Carlton, D.; Horning, T.; Johnson, J.; Newman, J.; Parsons, T.; Peters, R.; Peterson, C.; Priest, G.; Venturato, A.; Weber, J.; Wong, F.; Yalciner, A.

    2009-01-01

    The first probabilistic tsunami flooding maps have been developed. The methodology, called probabilistic tsunami hazard assessment (PTHA), integrates tsunami inundation modeling with methods of probabilistic seismic hazard assessment (PSHA). Application of the methodology to Seaside, Oregon, has yielded estimates of the spatial distribution of 100- and 500-year maximum tsunami amplitudes, i.e., amplitudes with 1% and 0.2% annual probability of exceedance. The 100-year tsunami is generated most frequently by far-field sources in the Alaska-Aleutian Subduction Zone and is characterized by maximum amplitudes that do not exceed 4 m, with an inland extent of less than 500 m. In contrast, the 500-year tsunami is dominated by local sources in the Cascadia Subduction Zone and is characterized by maximum amplitudes in excess of 10 m and an inland extent of more than 1 km. The primary sources of uncertainty in these results include those associated with interevent time estimates, modeling of background sea level, and accounting for temporal changes in bathymetry and topography. Nonetheless, PTHA represents an important contribution to tsunami hazard assessment techniques; viewed in the broader context of risk analysis, PTHA provides a method for quantifying estimates of the likelihood and severity of the tsunami hazard, which can then be combined with vulnerability and exposure to yield estimates of tsunami risk. Copyright 2009 by the American Geophysical Union.

  19. Weighing conservation objectives: maximum expected coverage versus endangered species protection

    Treesearch

    Jeffrey L. Arthur; Jeffrey D. Camm; Robert G. Haight; Claire A. Montgomery; Stephen Polasky

    2004-01-01

    Decision makers involved in land acquisition and protection often have multiple conservation objectives and are uncertain about the occurrence of species or other features in candidate sites. Models informing decisions on selection of sites for reserves need to provide information about cost-efficient trade-offs between objectives and account for incidence uncertainty...

  20. Uncertainty in temperature response of current consumption-based emissions estimates

    NASA Astrophysics Data System (ADS)

    Karstensen, J.; Peters, G. P.; Andrew, R. M.

    2015-05-01

    Several studies have connected emissions of greenhouse gases to economic and trade data to quantify the causal chain from consumption to emissions and climate change. These studies usually combine data and models originating from different sources, making it difficult to estimate uncertainties along the entire causal chain. We estimate uncertainties in economic data, multi-pollutant emission statistics, and metric parameters, and use Monte Carlo analysis to quantify contributions to uncertainty and to determine how uncertainty propagates to estimates of global temperature change from regional and sectoral territorial- and consumption-based emissions for the year 2007. We find that the uncertainties are sensitive to the emission allocations, mix of pollutants included, the metric and its time horizon, and the level of aggregation of the results. Uncertainties in the final results are largely dominated by the climate sensitivity and the parameters associated with the warming effects of CO2. Based on our assumptions, which exclude correlations in the economic data, the uncertainty in the economic data appears to have a relatively small impact on uncertainty at the national level in comparison to emissions and metric uncertainty. Much higher uncertainties are found at the sectoral level. Our results suggest that consumption-based national emissions are not significantly more uncertain than the corresponding production-based emissions since the largest uncertainties are due to metric and emissions which affect both perspectives equally. The two perspectives exhibit different sectoral uncertainties, due to changes of pollutant compositions. We find global sectoral consumption uncertainties in the range of ±10 to ±27 % using the Global Temperature Potential with a 50-year time horizon, with metric uncertainties dominating. National-level uncertainties are similar in both perspectives due to the dominance of CO2 over other pollutants. The consumption emissions of the top 10 emitting regions have a broad uncertainty range of ±9 to ±25 %, with metric and emission uncertainties contributing similarly. The absolute global temperature potential (AGTP) with a 50-year time horizon has much higher uncertainties, with considerable uncertainty overlap for regions and sectors, indicating that the ranking of countries is uncertain.
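
    The full economic/emission/metric chain of the study is not reproduced here, but the Monte Carlo step (sampling uncertain emissions and metric factors and reading a percentile range off the resulting temperature response) can be sketched in Python with made-up numbers; the pollutant list, emission magnitudes, and metric factors are placeholders.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 100_000

        # hypothetical CO2 and CH4 emissions of one region (Gt CO2, Mt CH4) with 1-sigma errors
        co2 = rng.normal(1.0, 0.05, n)
        ch4 = rng.normal(20.0, 3.0, n)

        # hypothetical absolute GTP-like metric factors (K per unit emission) with their own uncertainty
        gtp_co2 = rng.lognormal(mean=np.log(5e-4), sigma=0.4, size=n)
        gtp_ch4 = rng.lognormal(mean=np.log(1e-5), sigma=0.5, size=n)

        delta_T = co2 * gtp_co2 + ch4 * gtp_ch4
        lo, med, hi = np.percentile(delta_T, [5, 50, 95])
        print("temperature response: %.2e K (5-95%%: %.2e to %.2e)" % (med, lo, hi))
        print("relative spread: +/- %.0f %%" % (100 * (hi - lo) / (2 * med)))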

  1. Impact of Martian atmosphere parameter uncertainties on entry vehicles aerodynamic for hypersonic rarefied conditions

    NASA Astrophysics Data System (ADS)

    Fei, Huang; Xu-hong, Jin; Jun-ming, Lv; Xiao-li, Cheng

    2016-11-01

    An attempt has been made to analyze impact of Martian atmosphere parameter uncertainties on entry vehicle aerodynamics for hypersonic rarefied conditions with a DSMC code. The code has been validated by comparing Viking vehicle flight data with present computational results. Then, by simulating flows around the Mars Science Laboratory, the impact of errors of free stream parameter uncertainties on aerodynamics is investigated. The validation results show that the present numerical approach can show good agreement with the Viking flight data. The physical and chemical properties of CO2 has strong impact on aerodynamics of Mars entry vehicles, so it is necessary to make proper corrections to the data obtained with air model in hypersonic rarefied conditions, which is consistent with the conclusions drawn in continuum regime. Uncertainties of free stream density and velocity weakly influence aerodynamics and pitching moment. However, aerodynamics appears to be little influenced by free stream temperature, the maximum error of what is below 0.5%. Center of pressure position is not sensitive to free stream parameters.

  2. The Price of Uncertainty in Security Games

    NASA Astrophysics Data System (ADS)

    Grossklags, Jens; Johnson, Benjamin; Christin, Nicolas

    In the realm of information security, lack of information about other users' incentives in a network can lead to inefficient security choices and reductions in individuals' payoffs. We propose, contrast and compare three metrics for measuring the price of uncertainty due to the departure from the payoff-optimal security outcomes under complete information. Per the analogy with other efficiency metrics, such as the price of anarchy, we define the price of uncertainty as the maximum discrepancy in expected payoff in a complete information environment versus the payoff in an incomplete information environment. We consider difference, payoff-ratio, and cost-ratio metrics as canonical nontrivial measurements of the price of uncertainty. We conduct an algebraic, numerical, and graphical analysis of these metrics applied to different well-studied security scenarios proposed in prior work (i.e., best shot, weakest-link, and total effort). In these scenarios, we study how a fully rational expert agent could utilize the metrics to decide whether to gather information about the economic incentives of multiple nearsighted and naïve agents. We find substantial differences between the various metrics and evaluate their appropriateness for security choices in networked systems.
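
    A minimal Python sketch of the three candidate metrics is given below; the payoff values are invented, the cost-ratio here is defined against an assumed initial endowment, and the exact expressions used in the paper for the best-shot, weakest-link, and total-effort games may differ in detail.

        def price_of_uncertainty(payoff_complete, payoff_incomplete, endowment):
            """Three candidate metrics comparing expected payoffs with and without complete information.

            The cost-ratio measures losses relative to an assumed initial endowment; this is an
            illustrative definition, not necessarily the one used in the paper.
            """
            difference = payoff_complete - payoff_incomplete
            payoff_ratio = payoff_complete / payoff_incomplete
            cost_ratio = (endowment - payoff_incomplete) / (endowment - payoff_complete)
            return difference, payoff_ratio, cost_ratio

        # hypothetical expected payoffs for a single security scenario and parameter setting
        print(price_of_uncertainty(payoff_complete=8.5, payoff_incomplete=6.0, endowment=10.0))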

  3. Analyzing climate change impacts on water resources under uncertainty using an integrated simulation-optimization approach

    NASA Astrophysics Data System (ADS)

    Zhuang, X. W.; Li, Y. P.; Nie, S.; Fan, Y. R.; Huang, G. H.

    2018-01-01

    An integrated simulation-optimization (ISO) approach is developed for assessing climate change impacts on water resources. In the ISO, uncertainties presented as both interval numbers and probability distributions can be reflected. Moreover, ISO permits in-depth analyses of various policy scenarios that are associated with different levels of economic consequences when the promised water-allocation targets are violated. A snowmelt-precipitation-driven watershed (Kaidu watershed) in northwest China is selected as the study case for demonstrating the applicability of the proposed method. Results of meteorological projections disclose increasing trends in both temperature (e.g., minimum and maximum values) and precipitation. Results also reveal that (i) the system uncertainties would significantly affect the water resources allocation pattern (including target and shortage); (ii) water shortage would increase from 2016 to 2070; and (iii) the more the inflow amount decreases, the higher the estimated water shortage rates are. The ISO method is useful for evaluating climate change impacts within a watershed system with complicated uncertainties and helps identify appropriate water resources management strategies hedging against drought.

  4. Benefit-cost estimation for alternative drinking water maximum contaminant levels

    NASA Astrophysics Data System (ADS)

    Gurian, Patrick L.; Small, Mitchell J.; Lockwood, John R.; Schervish, Mark J.

    2001-08-01

    A simulation model for estimating compliance behavior and resulting costs at U.S. Community Water Suppliers is developed and applied to the evaluation of a more stringent maximum contaminant level (MCL) for arsenic. Probability distributions of source water arsenic concentrations are simulated using a statistical model conditioned on system location (state) and source water type (surface water or groundwater). This model is fit to two recent national surveys of source waters, then applied with the model explanatory variables for the population of U.S. Community Water Suppliers. Existing treatment types and arsenic removal efficiencies are also simulated. Utilities with finished water arsenic concentrations above the proposed MCL are assumed to select the least cost option compatible with their existing treatment from among 21 available compliance strategies and processes for meeting the standard. Estimated costs and arsenic exposure reductions at individual suppliers are aggregated to estimate the national compliance cost, arsenic exposure reduction, and resulting bladder cancer risk reduction. Uncertainties in the estimates are characterized based on uncertainties in the occurrence model parameters, existing treatment types, treatment removal efficiencies, costs, and the bladder cancer dose-response function for arsenic.

  5. A database of biological and geomorphological sea-level markers from the Last Glacial Maximum to present

    PubMed Central

    Hibbert, F.D.; Williams, F.H.; Fallon, S.J.; Rohling, E.J.

    2018-01-01

    The last deglacial was an interval of rapid climate and sea-level change, including the collapse of large continental ice sheets. This database collates carefully assessed sea-level data from peer-reviewed sources for the interval 0 to 25 thousand years ago (ka), from the Last Glacial Maximum to the present interglacial. In addition to facilitating site-specific reconstructions of past sea levels, the database provides a suite of data beyond the range of modern/instrumental variability that may help hone future sea-level projections. The database is global in scope, internally consistent, and contains U-series and radiocarbon dated indicators from both biological and geomorphological archives. We focus on far-field data (i.e., away from the sites of the former continental ice sheets), but some key intermediate (i.e., from the Caribbean) data are also included. All primary fields (i.e., sample location, elevation, age and context) possess quantified uncertainties, which—in conjunction with available metadata—allows the reconstructed sea levels to be interpreted within both their uncertainties and geological context. PMID:29809175

  6. Gridded Uncertainty Maps of Fossil Fuel Carbon Dioxide Emissions: A New Data Product

    NASA Astrophysics Data System (ADS)

    Andres, R. J.; Boden, T.

    2014-12-01

    With the publication of a new assessment of the uncertainty associated with the mass of fossil fuel carbon dioxide (FFCO2) emissions (2014, Tellus B, 66, 23616, doi:10.3402/tellusb.v66.23616), it is now possible to extend that work with a gridded map of fossil fuel emission uncertainties. The new data product was created to be paired with the long-used, Carbon Dioxide Information Analysis Center (CDIAC), emission year 1751-present, one degree latitude by one degree longitude (1x1) mass of emissions data product (http://cdiac.ornl.gov/epubs/ndp/ndp058/ndp058_v2013.html). Now, for the first time, data users will have FFCO2 emission information that represents both mass and uncertainty, each of which varies in both time and space. The new data product was constructed by examining the individual uncertainties in each of the input data sets to the gridded mass maps and then combining these individual uncertainties into an overall uncertainty for the mass maps. The input data sets include a table of the mass of FFCO2 emissions by country and year, the one degree geographic map of emissions which includes changing borders on an annual time scale and ties the mass of emissions to location, and the one degree population proxy used to distribute the mass of emissions within each country. As the three input data sets are independent of each other, their combination for the overall uncertainty is accomplished by a simple square root of the sum of the squares procedure. The resulting uncertainty data product is gridded at 1x1 and exactly overlays the 1x1 mass emission maps. The default temporal resolution is annual, but a companion product is also available at monthly time scales. The monthly uncertainty product uses the same input data sets, but the mass uncertainty is scaled as described in the monthly mass product description paper (2011, Tellus B, 63:309-327, doi: 10.1111/j.1600-0889.2011.00530.x). The gridded uncertainty maps cover emission year 1950 to 2010. The start year is determined by the mass uncertainty study which began its analysis in 1950. The end year reflects the latest emission year in the current CDIAC data set; as new years are added to the CDIAC data set, the uncertainty map time series can also be extended.
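
    The combination rule described above (independent input uncertainties combined by the square root of the sum of squares) is simple enough to sketch directly in Python; the three fractional uncertainties below are placeholders for a single grid cell, not the CDIAC values.

        import numpy as np

        def combine_independent(*fractional_uncertainties):
            """Combine independent fractional (1-sigma) uncertainties in quadrature."""
            u = np.asarray(fractional_uncertainties)
            return np.sqrt(np.sum(u**2))

        # hypothetical per-cell contributions: national mass total, geographic assignment, population proxy
        print(combine_independent(0.05, 0.08, 0.10))   # combined fractional uncertainty for one grid cell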

  7. Flood Frequency Curves - Use of information on the likelihood of extreme floods

    NASA Astrophysics Data System (ADS)

    Faber, B.

    2011-12-01

    Investment in the infrastructure that reduces flood risk for flood-prone communities must incorporate information on the magnitude and frequency of flooding in that area. Traditionally, that information has been a probability distribution of annual maximum streamflows developed from the historical gaged record at a stream site. Practice in the United States fits a Log-Pearson Type III distribution to the annual maximum flows of an unimpaired streamflow record, using the method of moments to estimate distribution parameters. The procedure makes the assumptions that annual peak streamflow events are (1) independent, (2) identically distributed, and (3) form a representative sample of the overall probability distribution. Each of these assumptions can be challenged. We rarely have enough data to form a representative sample, and therefore must compute and display the uncertainty in the estimated flood distribution. But, is there a wet/dry cycle that makes precipitation less than independent between successive years? Are the peak flows caused by different types of events from different statistical populations? How do changes in the watershed or climate over time (non-stationarity) affect the probability distribution of floods? Potential approaches to avoid these assumptions vary from estimating trend and shift and removing them from early data (and so forming a homogeneous data set), to methods that estimate statistical parameters that vary with time. A further issue in estimating a probability distribution of flood magnitude (the flood frequency curve) is whether a purely statistical approach can accurately capture the range and frequency of floods that are of interest. A meteorologically-based analysis produces "probable maximum precipitation" (PMP) and subsequently a "probable maximum flood" (PMF) that attempts to describe an upper bound on flood magnitude in a particular watershed. This analysis can help constrain the upper tail of the probability distribution, well beyond the range of gaged data or even historical or paleo-flood data, which can be very important in risk analyses performed for flood risk management and dam and levee safety studies.
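
    The standard U.S. procedure referred to above (a method-of-moments log-Pearson Type III fit to annual maxima) can be sketched in Python as follows; the flow series is synthetic, and the sketch omits regional-skew weighting and the other refinements of the full guidelines.

        import numpy as np
        from scipy import stats

        def lp3_quantile(annual_peaks, annual_exceedance_prob):
            """Method-of-moments log-Pearson Type III quantile for a given annual exceedance probability."""
            y = np.log10(np.asarray(annual_peaks))
            mean, std = y.mean(), y.std(ddof=1)
            skew = stats.skew(y, bias=False)
            # Pearson III frequency factor for the non-exceedance probability 1 - p
            k = stats.pearson3.ppf(1.0 - annual_exceedance_prob, skew)
            return 10.0 ** (mean + k * std)

        rng = np.random.default_rng(3)
        peaks = rng.lognormal(mean=6.0, sigma=0.5, size=60)     # synthetic annual maximum flows
        print("100-year flood estimate: %.0f" % lp3_quantile(peaks, 0.01))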

  8. Model-driven development of covariances for spatiotemporal environmental health assessment.

    PubMed

    Kolovos, Alexander; Angulo, José Miguel; Modis, Konstantinos; Papantonopoulos, George; Wang, Jin-Feng; Christakos, George

    2013-01-01

    Known conceptual and technical limitations of mainstream environmental health data analysis have directed research to new avenues. The goal is to deal more efficiently with the inherent uncertainty and composite space-time heterogeneity of key attributes, account for multi-sourced knowledge bases (health models, survey data, empirical relationships etc.), and generate more accurate predictions across space-time. Based on a versatile, knowledge synthesis methodological framework, we introduce new space-time covariance functions built by integrating epidemic propagation models and we apply them in the analysis of existing flu datasets. Within the knowledge synthesis framework, the Bayesian maximum entropy theory is our method of choice for the spatiotemporal prediction of the ratio of new infectives (RNI) for a case study of flu in France. The space-time analysis is based on observations during a period of 15 weeks in 1998-1999. We present general features of the proposed covariance functions, and use these functions to explore the composite space-time RNI dependency. We then implement the findings to generate sufficiently detailed and informative maps of the RNI patterns across space and time. The predicted distributions of RNI suggest substantive relationships in accordance with the typical physiographic and climatologic features of the country.

  9. Mapping of uncertainty relations between continuous and discrete time

    NASA Astrophysics Data System (ADS)

    Chiuchiú, Davide; Pigolotti, Simone

    2018-03-01

    Lower bounds on fluctuations of thermodynamic currents depend on the nature of time, discrete or continuous. To understand the physical reason, we compare current fluctuations in discrete-time Markov chains and continuous-time master equations. We prove that current fluctuations in the master equations are always more likely, due to random timings of transitions. This comparison leads to a mapping of the moments of a current between discrete and continuous time. We exploit this mapping to obtain uncertainty bounds. Our results reduce the quests for uncertainty bounds in discrete and continuous time to a single problem.

  10. Mapping of uncertainty relations between continuous and discrete time.

    PubMed

    Chiuchiù, Davide; Pigolotti, Simone

    2018-03-01

    Lower bounds on fluctuations of thermodynamic currents depend on the nature of time, discrete or continuous. To understand the physical reason, we compare current fluctuations in discrete-time Markov chains and continuous-time master equations. We prove that current fluctuations in the master equations are always more likely, due to random timings of transitions. This comparison leads to a mapping of the moments of a current between discrete and continuous time. We exploit this mapping to obtain uncertainty bounds. Our results reduce the quests for uncertainty bounds in discrete and continuous time to a single problem.

  11. Statistical downscaling of mean temperature, maximum temperature, and minimum temperature on the Loess Plateau, China

    NASA Astrophysics Data System (ADS)

    Jiang, L.

    2017-12-01

    Climate change is considered to be one of the greatest environmental threats. Global climate models (GCMs) are the primary tool used for studying climate change. However, GCMs are limited because of their coarse spatial resolution and inability to resolve important sub-grid scale features such as terrain and clouds. Statistical downscaling methods can be used to downscale large-scale variables to the local scale. In this study, we assess the applicability of the Statistical Downscaling Model (SDSM) in downscaling the outputs from the Beijing Normal University Earth System Model (BNU-ESM). The study focuses on the Loess Plateau, China, and the variables for downscaling include daily mean temperature (TMEAN), maximum temperature (TMAX) and minimum temperature (TMIN). The results show that SDSM performs well for these three climatic variables on the Loess Plateau. After downscaling, the root mean square errors for TMEAN, TMAX, and TMIN for BNU-ESM were reduced by 70.9%, 75.1%, and 67.2%, respectively. All the rates of change in TMEAN, TMAX and TMIN during the 21st century decreased after SDSM downscaling. We also show that SDSM can effectively reduce uncertainty, compared with the raw model outputs. TMEAN uncertainty was reduced by 27.1%, 26.8%, and 16.3% for the future scenarios of RCP 2.6, RCP 4.5 and RCP 8.5, respectively. The corresponding reductions in uncertainty were 23.6%, 30.7%, and 18.7% for TMAX; 37.6%, 31.8%, and 23.2% for TMIN.

  12. Determination of seasonals using wavelets in terms of noise parameters changeability

    NASA Astrophysics Data System (ADS)

    Klos, Anna; Bogusz, Janusz; Figurski, Mariusz

    2015-04-01

    Reliable velocities of GNSS-derived observations are of high importance nowadays. How we determine and subtract the seasonal signals can introduce autocorrelation into the time series and affect the uncertainties of linear parameters. Periodic changes in GNSS time series are commonly assumed to be the sum of annual and semi-annual terms with amplitudes and phases constant in time, and Least-Squares Estimation (LSE) is generally used to model these sine waves. However, not only the time variability of the seasonal signals but also their higher harmonics should be considered. In this research, we focused on more than 230 globally distributed IGS stations that were processed at the Military University of Technology EPN Local Analysis Centre (MUT LAC) with the Bernese 5.0 software. The network was divided into 7 sub-networks with a few overlapping stations and processed separately with the newest models. Here, we propose a wavelet-based determination and removal of the trend and of the whole seasonal spectrum between the Chandler and quarter-annual periods from the North, East and Up components, and compare it with LSE-determined values. We used the symmetric, orthogonal Meyer wavelet and assumed nine levels of decomposition. Details 6 to 9 were analyzed as periodic components with frequencies between 0.3 and 2.5 cpy. The characteristic oscillations for each frequency band were pointed out. Details lower than 6, summed together with the detrended approximation, were considered as residua. The power spectral densities (PSDs) of the original and decomposed data were stacked for the North, East and Up components of each sub-network to show what power was removed at each decomposition level. Moreover, the noise that each frequency band follows (in terms of spectral indices of power-law dependencies) was estimated using a spectral method and compared across all processed sub-networks. It seems that the lowest frequencies, up to 0.7 cpy, are characterized by lower spectral indices, while higher frequencies are close to white noise. Because the decomposition levels overlap each other, the choice of frequency window becomes a main point in spectral index estimation. Our results were compared with those obtained by Maximum Likelihood Estimation (MLE), and possible differences as well as their impact on velocity uncertainties were pointed out. The results show that the spectral indices estimated in the time and frequency domains differ by at most 0.15. Moreover, we compared the power removed by the wavelet decomposition levels with that subtracted by LSE, assuming the same periodicities. In comparison to LSE, the wavelet-based approach leaves residua that are closer to white noise, with lower power-law amplitudes, which directly reduces velocity uncertainties. The last approximation was analyzed as a non-linear long-term trend and compared with the LSE-determined linear one. These two trends differ by up to 0.3 mm/yr in the most extreme case, which makes wavelet decomposition useful for velocity determination.
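
    A Python sketch of the decomposition approach described above is given below, using the discrete Meyer wavelet from PyWavelets on a synthetic daily position series; which detail levels are kept as the seasonal band is only indicative of the method and does not reproduce the authors' processing.

        import numpy as np
        import pywt

        rng = np.random.default_rng(7)
        t = np.arange(7300)                                     # ~20 years of daily positions (synthetic)
        series = (2.0 * np.sin(2 * np.pi * t / 365.25)          # annual term, mm
                  + 0.8 * np.sin(4 * np.pi * t / 365.25)        # semi-annual term
                  + rng.normal(0.0, 1.5, t.size))               # noise

        # Discrete Meyer wavelet, nine decomposition levels (PyWavelets may warn about
        # boundary effects for a series this short; the approach itself is unchanged).
        coeffs = pywt.wavedec(series, "dmey", level=9)          # [A9, D9, D8, ..., D1]

        seasonal_coeffs = [np.zeros_like(c) for c in coeffs]
        for i in range(1, 5):                                   # indices 1..4 hold detail levels 9..6
            seasonal_coeffs[i] = coeffs[i]
        trend_coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]

        n = series.size
        trend = pywt.waverec(trend_coeffs, "dmey")[:n]          # non-linear long-term signal
        seasonal = pywt.waverec(seasonal_coeffs, "dmey")[:n]    # roughly the 0.3-2.5 cpy band
        residual = series - trend - seasonal                    # left over for noise analysis
        print(np.std(residual))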

  13. Robust Bayesian Experimental Design for Conceptual Model Discrimination

    NASA Astrophysics Data System (ADS)

    Pham, H. V.; Tsai, F. T. C.

    2015-12-01

    A robust Bayesian optimal experimental design under uncertainty is presented to provide firm information for model discrimination, given the least number of pumping wells and observation wells. Firm information is the maximum information about a system that can be guaranteed from an experimental design. The design is based on the Box-Hill expected entropy decrease (EED) before and after the experiment design and the Bayesian model averaging (BMA) framework. A max-min programming problem is introduced to choose the robust design that maximizes the minimal Box-Hill EED, subject to the constraint that the highest expected posterior model probability satisfies a desired probability threshold. The EED is calculated by Gauss-Hermite quadrature. The BMA method is used to predict future observations and to quantify future observation uncertainty arising from conceptual and parametric uncertainties in calculating EED. A Monte Carlo approach is adopted to quantify the uncertainty in the posterior model probabilities. The optimal experimental design is tested on a synthetic 5-layer anisotropic confined aquifer. Nine conceptual groundwater models are constructed due to uncertain geological architecture and boundary conditions. High-performance computing is used to enumerate all possible design solutions in order to identify the most plausible groundwater model. Results highlight the impacts of scedasticity in future observation data as well as uncertainty sources on potential pumping and observation locations.

  14. Data-free and data-driven spectral perturbations for RANS UQ

    NASA Astrophysics Data System (ADS)

    Edeling, Wouter; Mishra, Aashwin; Iaccarino, Gianluca

    2017-11-01

    Despite recent developments in high-fidelity turbulent flow simulations, RANS modeling is still vastly used by industry, due to its inherent low cost. Since accuracy is a concern in RANS modeling, model-form UQ is an essential tool for assessing the impacts of this uncertainty on quantities of interest. Applying the spectral decomposition to the modeled Reynolds-Stress Tensor (RST) allows for the introduction of decoupled perturbations into the baseline intensity (kinetic energy), shape (eigenvalues), and orientation (eigenvectors). This constitutes a natural methodology to evaluate the model-form uncertainty associated with different aspects of RST modeling. In a predictive setting, one frequently encounters an absence of any relevant reference data. To make data-free predictions with quantified uncertainty we employ physical bounds to define maximum spectral perturbations a priori. When propagated, these perturbations yield intervals of engineering utility. High-fidelity data opens up the possibility of inferring a distribution of uncertainty, by means of various data-driven machine-learning techniques. We will demonstrate our framework on a number of flow problems where RANS models are prone to failure. This research was partially supported by the Defense Advanced Research Projects Agency under the Enabling Quantification of Uncertainty in Physical Systems (EQUiPS) project (technical monitor: Dr Fariba Fahroo), and the DOE PSAAP-II program.
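
    A minimal Python sketch of the eigenspace perturbation idea (decompose the modeled Reynolds stress into kinetic energy, eigenvalues, and eigenvectors, then shift the eigenvalues toward a limiting state) is given below; the stress tensor, the perturbation magnitude, and the choice of the one-component limiting state are illustrative assumptions, not the cited framework's implementation.

        import numpy as np

        def perturb_reynolds_stress(R, delta, corner=(2.0 / 3.0, -1.0 / 3.0, -1.0 / 3.0)):
            """Shift the anisotropy eigenvalues of a modeled Reynolds stress toward a limiting state.

            R      : 3x3 modeled Reynolds-stress tensor
            delta  : perturbation magnitude in [0, 1] (0 = baseline, 1 = the limiting state)
            corner : anisotropy eigenvalues of the target state (default: one-component turbulence)
            """
            k = 0.5 * np.trace(R)                                 # turbulent kinetic energy
            a = R / (2.0 * k) - np.eye(3) / 3.0                   # anisotropy tensor
            eigval, eigvec = np.linalg.eigh(a)
            eigval, eigvec = eigval[::-1], eigvec[:, ::-1]        # sort eigenvalues in descending order
            eigval_new = (1.0 - delta) * eigval + delta * np.asarray(corner)
            a_new = eigvec @ np.diag(eigval_new) @ eigvec.T
            return 2.0 * k * (a_new + np.eye(3) / 3.0)            # rebuild the perturbed stress

        R_baseline = np.array([[0.8, 0.1, 0.0],
                               [0.1, 0.5, 0.0],
                               [0.0, 0.0, 0.3]])                  # illustrative stress tensor (m^2 s^-2)
        print(perturb_reynolds_stress(R_baseline, delta=0.5))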

  15. Uncertainty Analysis in 3D Equilibrium Reconstruction

    DOE PAGES

    Cianciosa, Mark R.; Hanson, James D.; Maurer, David A.

    2018-02-21

    Reconstruction is an inverse process where a parameter space is searched to locate a set of parameters with the highest probability of describing experimental observations. Due to systematic errors and uncertainty in experimental measurements, this optimal set of parameters will contain some associated uncertainty. This uncertainty in the optimal parameters leads to uncertainty in models derived using those parameters. V3FIT is a three-dimensional (3D) equilibrium reconstruction code that propagates uncertainty from the input signals, to the reconstructed parameters, and to the final model. In this paper, we describe the methods used to propagate uncertainty in V3FIT. Using the results of whole shot 3D equilibrium reconstruction of the Compact Toroidal Hybrid, this propagated uncertainty is validated against the random variation in the resulting parameters. Two different model parameterizations demonstrate how the uncertainty propagation can indicate the quality of a reconstruction. As a proxy for random sampling, the whole shot reconstruction results in a time interval that will be used to validate the propagated uncertainty from a single time slice.

  16. Uncertainty Analysis in 3D Equilibrium Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cianciosa, Mark R.; Hanson, James D.; Maurer, David A.

    Reconstruction is an inverse process where a parameter space is searched to locate a set of parameters with the highest probability of describing experimental observations. Due to systematic errors and uncertainty in experimental measurements, this optimal set of parameters will contain some associated uncertainty. This uncertainty in the optimal parameters leads to uncertainty in models derived using those parameters. V3FIT is a three-dimensional (3D) equilibrium reconstruction code that propagates uncertainty from the input signals, to the reconstructed parameters, and to the final model. In this paper, we describe the methods used to propagate uncertainty in V3FIT. Using the results of whole shot 3D equilibrium reconstruction of the Compact Toroidal Hybrid, this propagated uncertainty is validated against the random variation in the resulting parameters. Two different model parameterizations demonstrate how the uncertainty propagation can indicate the quality of a reconstruction. As a proxy for random sampling, the whole shot reconstruction results in a time interval that will be used to validate the propagated uncertainty from a single time slice.

  17. Exact results for the finite time thermodynamic uncertainty relation

    NASA Astrophysics Data System (ADS)

    Manikandan, Sreekanth K.; Krishnamurthy, Supriya

    2018-03-01

    We obtain exact results for the recently discovered finite-time thermodynamic uncertainty relation, for the dissipated work W_d, in a stochastically driven system with non-Gaussian work statistics, both in the steady state and transient regimes, by obtaining exact expressions for any moment of W_d at arbitrary times. The uncertainty function (the Fano factor of W_d) is bounded from below by 2k_BT as expected, for all times τ, in both steady state and transient regimes. The lower bound is reached at τ=0 as well as when certain system parameters vanish (corresponding to an equilibrium state). Surprisingly, we find that the uncertainty function also reaches a constant value at large τ for all the cases we have looked at. For a system starting and remaining in steady state, the uncertainty function increases monotonically, as a function of τ as well as other system parameters, implying that the large-τ value is also an upper bound. For the same system in the transient regime, however, we find that the uncertainty function can have a local minimum at an accessible time τ_m, for a range of parameter values. The large-τ value for the uncertainty function is hence not a bound in this case. The non-monotonicity suggests, rather counter-intuitively, that there might be an optimal time for the working of microscopic machines, as well as an optimal configuration in the phase space of parameter values. Our solutions show that the ratios of higher moments of the dissipated work are also bounded from below by 2k_BT. For another model, also solvable by our methods, which never reaches a steady state, the uncertainty function is, in some cases, bounded from below by a value less than 2k_BT.
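
    As a numerical companion to these results, the finite-time uncertainty function can be estimated for the textbook case of a Brownian particle that starts in equilibrium in a harmonic trap and is then dragged at constant velocity. This Python sketch is a generic illustration of the bound Var(W_d)/<W_d> >= 2k_BT rather than the specific driven model solved in the paper; for this Gaussian transient example the uncertainty function is expected to sit at the bound, so the estimate should land near 2k_BT.

        import numpy as np

        rng = np.random.default_rng(11)
        kBT, gamma, k, v = 1.0, 1.0, 2.0, 1.0            # units chosen so that k_B T = 1
        dt, nsteps, ntraj = 1e-3, 2000, 20000            # trajectory length tau = nsteps * dt

        # particle initially equilibrated in a resting trap; the trap then moves at speed v
        x = rng.normal(0.0, np.sqrt(kBT / k), ntraj)
        work = np.zeros(ntraj)
        t = 0.0
        for _ in range(nsteps):
            work += -k * v * (x - v * t) * dt            # dW = dH/dt * dt for H = k (x - v t)^2 / 2
            noise = rng.normal(0.0, np.sqrt(2.0 * kBT * dt / gamma), ntraj)
            x += -(k / gamma) * (x - v * t) * dt + noise # overdamped Langevin (Euler-Maruyama) step
            t += dt

        fano = work.var() / work.mean()                  # uncertainty function for W_d (Delta F = 0)
        print("Fano factor = %.3f   (TUR bound 2 k_B T = %.1f)" % (fano, 2.0 * kBT))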

  18. Late Quaternary paleoclimate of western Alaska inferred from fossil chironomids and its relation to vegetation histories

    USGS Publications Warehouse

    Kurek, Joshua; Cwynar, Les C.; Ager, Thomas A.; Abbott, Mark B.; Edwards, Mary E.

    2009-01-01

    Fossil Chironomidae assemblages (with a few Chaoboridae and Ceratopogonidae) from Zagoskin and Burial Lakes in western Alaska provide quantitative reconstructions of mean July air temperatures for periods of the late-middle Wisconsin (~39,000-34,000 cal yr B.P.) to the present. Inferred temperatures are compared with previously analyzed pollen data from each site summarized here by indirect ordination. Paleotemperature trends reveal substantial differences in the timing of climatic warming following the late Wisconsin at each site, although chronological uncertainty exists. Zagoskin Lake shows early warming beginning at about 21,000 cal yr B.P., whereas warming at Burial Lake begins ~4000 years later. Summer climates during the last glacial maximum (LGM) were on average ~3.5C° below the modern temperatures at each site. Major shifts in vegetation occurred from ~19,000 to 10,000 cal yr B.P. at Zagoskin Lake and from ~17,000 to 10,000 cal yr B.P. at Burial Lake. Vegetation shifts followed climatic warming, when temperatures neared modern values. Both sites provide evidence of an early postglacial thermal maximum at ~12,300 cal yr B.P. These chironomid records, combined with other insect-based climatic reconstructions from Beringia, indicate that during the LGM: (1) greater continentality likely influenced regions adjacent to the Bering Land Bridge and (2) summer climates were, at times, not dominated by severe cold.

  19. Optimal design criteria - prediction vs. parameter estimation

    NASA Astrophysics Data System (ADS)

    Waldl, Helmut

    2014-05-01

    G-optimality is a popular design criterion for optimal prediction: it tries to minimize the kriging variance over the whole design region. A G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction it is self-evident to use the kriging variance as a measure of uncertainty for the estimates. However, computing the kriging variance, and even more so the empirical kriging variance, is very costly, and finding the maximum kriging variance in high-dimensional regions can be so time-demanding that in practice we cannot really find the G-optimal design with the computer equipment available today. We cannot always avoid this problem by using space-filling designs because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation. A D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield basically different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on this Pareto frontier yields almost as good results as searching in the whole design region, while the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.
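
    A minimal Python sketch of the G-criterion itself (evaluate the kriging variance of candidate designs over a prediction grid and keep the design whose maximum is smallest) is shown below for a one-dimensional region with an assumed exponential covariance; it uses random search and ignores the empirical-variance and Pareto-frontier aspects discussed above.

        import numpy as np

        def kriging_variance(design, grid, sill=1.0, range_=0.3):
            """Simple-kriging variance on a grid for a given design, exponential covariance assumed."""
            cov = lambda h: sill * np.exp(-np.abs(h) / range_)
            K = cov(design[:, None] - design[None, :])            # design-design covariances
            k = cov(grid[:, None] - design[None, :])              # grid-design covariances
            return sill - np.einsum("ij,ij->i", k @ np.linalg.inv(K), k)

        rng = np.random.default_rng(5)
        grid = np.linspace(0.0, 1.0, 201)                         # prediction region
        best_design, best_maxvar = None, np.inf
        for _ in range(2000):                                     # random search over 6-point designs
            design = np.sort(rng.uniform(0.0, 1.0, 6))
            maxvar = kriging_variance(design, grid).max()
            if maxvar < best_maxvar:
                best_design, best_maxvar = design, maxvar
        print(best_design.round(3), round(best_maxvar, 4))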

  20. Using Predictive Uncertainty Analysis to Assess Hydrologic Model Performance for a Watershed in Oregon

    NASA Astrophysics Data System (ADS)

    Brannan, K. M.; Somor, A.

    2016-12-01

    A variety of statistics are used to assess watershed model performance but these statistics do not directly answer the question: what is the uncertainty of my prediction? Understanding predictive uncertainty is important when using a watershed model to develop a Total Maximum Daily Load (TMDL). TMDLs are a key component of the US Clean Water Act and specify the amount of a pollutant that can enter a waterbody when the waterbody meets water quality criteria. TMDL developers use watershed models to estimate pollutant loads from nonpoint sources of pollution. We are developing a TMDL for bacteria impairments in a watershed in the Coastal Range of Oregon. We set up an HSPF model of the watershed and used the calibration software PEST to estimate HSPF hydrologic parameters and then perform predictive uncertainty analysis of stream flow. We used Monte Carlo simulation to run the model with 1,000 different parameter sets and assess predictive uncertainty. In order to reduce the chance of specious parameter sets, we accounted for the relationships among parameter values by using mathematically-based regularization techniques and an estimate of the parameter covariance when generating random parameter sets. We used a novel approach to select flow data for predictive uncertainty analysis. We set aside flow data that occurred on days when bacteria samples were collected. We did not use these flows in the estimation of the model parameters. We calculated a percent uncertainty for each flow observation based on 1,000 model runs. We also used several methods to visualize results with an emphasis on making the data accessible to both technical and general audiences. We will use the predictive uncertainty estimates in the next phase of our work, simulating bacteria fate and transport in the watershed.
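
    The HSPF/PEST workflow cannot be reproduced from the abstract, but its final step (drawing parameter sets from an estimated covariance, running the model for each, and reporting percentile bands) can be sketched in Python with a toy stand-in model; every name and number below is a placeholder.

        import numpy as np

        rng = np.random.default_rng(2016)

        def toy_flow_model(params, rain):
            """Placeholder watershed model: two parameters map rainfall to flow."""
            runoff_coeff, baseflow = params
            return runoff_coeff * rain + baseflow

        # calibrated parameter estimates and their covariance (e.g., as reported by a calibration tool)
        theta_hat = np.array([0.35, 1.2])
        cov_theta = np.array([[0.0025, -0.0010],
                              [-0.0010, 0.0400]])

        rain = rng.gamma(2.0, 5.0, size=120)                       # synthetic daily rainfall, mm
        param_sets = rng.multivariate_normal(theta_hat, cov_theta, size=1000)
        flows = np.array([toy_flow_model(p, rain) for p in param_sets])

        lo, hi = np.percentile(flows, [5, 95], axis=0)             # predictive band per day
        pct_uncertainty = 100.0 * (hi - lo) / np.median(flows, axis=0)
        print("median width of the 5-95%% band: %.1f %% of flow" % np.median(pct_uncertainty))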

  1. Efficiency of analytical and sampling-based uncertainty propagation in intensity-modulated proton therapy

    NASA Astrophysics Data System (ADS)

    Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.

    2017-07-01

    The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviation (expectation value) of dose shows average global γ (3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization was proven to be possible at constant time complexity, while run-times of sampling-based computations are linear in the number of fractions. Using sum sampling within APM, uncertainty propagation can only be accelerated at the cost of reduced accuracy in variance calculations. For probabilistic plan optimization, we were able to approximate the necessary pre-computations within seconds, yielding treatment plans of similar quality as gained from exact uncertainty propagation. APM is suited to enhance the trade-off between speed and accuracy in uncertainty propagation and probabilistic treatment plan optimization, especially in the context of fractionation. This brings fully-fledged APM computations within reach of clinical application.

  2. Efficiency of analytical and sampling-based uncertainty propagation in intensity-modulated proton therapy.

    PubMed

    Wahl, N; Hennig, P; Wieser, H P; Bangert, M

    2017-06-26

    The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviation (expectation value) of dose shows average global γ (3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization was proven to be possible at constant time complexity, while run-times of sampling-based computations are linear in the number of fractions. Using sum sampling within APM, uncertainty propagation can only be accelerated at the cost of reduced accuracy in variance calculations. For probabilistic plan optimization, we were able to approximate the necessary pre-computations within seconds, yielding treatment plans of similar quality as gained from exact uncertainty propagation. APM is suited to enhance the trade-off between speed and accuracy in uncertainty propagation and probabilistic treatment plan optimization, especially in the context of fractionation. This brings fully-fledged APM computations within reach of clinical application.

  3. Assessment of the uncertainty in future projection for summer climate extremes over the East Asia

    NASA Astrophysics Data System (ADS)

    Park, Changyong; Min, Seung-Ki; Cha, Dong-Hyun

    2017-04-01

    Future projections of climate extremes at regional and local scales are essential information for adapting to climate change. However, such projections carry large uncertainties arising from internal and external processes, which reduce projection confidence. Using CMIP5 (Coupled Model Intercomparison Project Phase 5) multi-model simulations, we assess uncertainties in future projections of East Asian temperature and precipitation extremes, focusing on summer. Projected changes in summer mean and extreme East Asian temperature and precipitation grow larger with time, and the uncertainty cascades show wider scenario differences and inter-model ranges as time progresses. A positive mean-extreme relation is found in the projections for both temperature and precipitation. The dominant uncertainty factors for temperature and precipitation also change over time. For mean and extreme temperature, the contributions of internal variability and model uncertainty decline after the mid-21st century, while the role of scenario uncertainty grows rapidly. For mean precipitation, internal variability is more important than scenario uncertainty. Unlike mean precipitation, extreme precipitation shows scenario uncertainty becoming the dominant factor by the 2090s. Model uncertainty remains an important factor for both mean and extreme precipitation until the late 21st century. The spatial patterns of the uncertainty factors for mean and extreme projections generally follow the temporal changes in the fraction of total variance attributable to each factor across many grid cells of East Asia. ACKNOWLEDGEMENTS: The research was supported by the Korea Meteorological Administration Research and Development program under grant KMIPA 2015-2083 and the National Research Foundation of Korea Grant funded by the Ministry of Science, ICT and Future Planning of Korea (NRF-2016M3C4A7952637), whose support and assistance made this study possible.
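
    The fraction-of-total-variance partitioning mentioned above can be sketched as follows; this is a crude, generic decomposition (not the authors' exact procedure), and the synthetic ensemble shape, trend, and noise levels are assumptions. The model-spread term here still contains some internal noise.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic ensemble of projected anomalies: (scenario, model, year)
    n_scen, n_model, n_year = 3, 10, 90
    years = np.arange(n_year)
    trend = np.linspace(0.0, 3.0, n_year) * np.array([0.6, 1.0, 1.4])[:, None, None]
    proj = (trend
            + rng.normal(0.0, 0.3, (n_scen, n_model, 1))           # model-to-model spread
            + rng.normal(0.0, 0.2, (n_scen, n_model, n_year)))      # internal variability

    # Scenario uncertainty: spread of the multi-model means across scenarios.
    scenario_var = proj.mean(axis=1).var(axis=0)
    # Model uncertainty: spread across models, averaged over scenarios (crude).
    model_var = proj.var(axis=1).mean(axis=0)
    # Internal variability: residuals about a smooth polynomial fit, pooled.
    resid = [proj[s, m] - np.polyval(np.polyfit(years, proj[s, m], 4), years)
             for s in range(n_scen) for m in range(n_model)]
    internal_var = np.var(np.concatenate(resid))

    total = scenario_var + model_var + internal_var
    frac = np.vstack([scenario_var, model_var, np.full(n_year, internal_var)]) / total

    print("fractions of total variance (scenario, model, internal):")
    print("first decade :", frac[:, :10].mean(axis=1).round(2))
    print("last decade  :", frac[:, -10:].mean(axis=1).round(2))
    ```

    With these assumed numbers, scenario spread dominates late in the period while internal variability and model spread dominate early, mirroring the qualitative behaviour described for temperature.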

  4. Global Nitrous Oxide Emissions from Agricultural Soils: Magnitude and Uncertainties Associated with Input Data and Model Parameters

    NASA Astrophysics Data System (ADS)

    Xu, R.; Tian, H.; Pan, S.; Yang, J.; Lu, C.; Zhang, B.

    2016-12-01

    Human activities have caused significant perturbations of the nitrogen (N) cycle, resulting in an approximately 21% increase of atmospheric N2O concentration since the pre-industrial era. This large increase is mainly caused by intensive agricultural activities including the application of nitrogen fertilizer and the expansion of leguminous crops. Substantial efforts have been made to quantify the global and regional N2O emission from agricultural soils in the last several decades using a wide variety of approaches, such as ground-based observations, atmospheric inversions, and process-based models. However, large uncertainties exist in those estimates as well as in the methods themselves. In this study, we used a coupled biogeochemical model (DLEM) to estimate the magnitude and the spatial and temporal patterns of N2O emissions from global croplands in the past five decades (1961-2012). To estimate uncertainties associated with input data and model parameters, we have implemented a number of simulation experiments with DLEM, accounting for key parameter values that affect calculation of N2O fluxes (i.e., maximum nitrification and denitrification rates, N fixation rate, and the adsorption coefficient for soil ammonium and nitrate), different sets of input data including climate, land management practices (i.e., nitrogen fertilizer types, application rates and timings, with/without irrigation), N deposition, and land use and land cover change. This work provides a robust estimate of global N2O emissions from agricultural soils and identifies key gaps and limitations in the existing model and data that need to be investigated in the future.

  5. An adaptive modeling and simulation environment for combined-cycle data reconciliation and degradation estimation

    NASA Astrophysics Data System (ADS)

    Lin, Tsungpo

    Performance engineers face major challenges in modeling and simulating after-market power systems due to system degradation and measurement errors. Currently, the majority of the power generation industry utilizes deterministic data matching to calibrate the model and cascade system degradation, which causes significant calibration uncertainty and adds risk to providing performance guarantees. In this research work, a maximum-likelihood based simultaneous data reconciliation and model calibration (SDRMC) is used for power system modeling and simulation. By replacing the current deterministic data matching with SDRMC one can reduce the calibration uncertainty and mitigate the error propagation to the performance simulation. A modeling and simulation environment for a complex power system with certain degradation has been developed. In this environment multiple data sets are imported when carrying out simultaneous data reconciliation and model calibration. Calibration uncertainties are estimated through error analyses and populated to performance simulation by using the principle of error propagation. System degradation is then quantified by performance comparison between the calibrated model and its expected new & clean status. To mitigate smearing effects caused by gross errors, gross error detection (GED) is carried out in two stages. The first stage is a screening stage, in which serious gross errors are eliminated in advance. The GED techniques used in the screening stage are based on multivariate data analysis (MDA), including multivariate data visualization and principal component analysis (PCA). Subtle gross errors are treated at the second stage, in which serial bias compensation or a robust M-estimator is engaged. To achieve a better efficiency in the combined scheme of the least squares based data reconciliation and the GED technique based on hypothesis testing, the Levenberg-Marquardt (LM) algorithm is utilized as the optimizer. To reduce the computation time and stabilize the problem solving for a complex power system such as a combined cycle power plant, meta-modeling using the response surface equation (RSE) and system/process decomposition are incorporated with the simultaneous scheme of SDRMC. The goal of this research work is to reduce the calibration uncertainties and, thus, the risks of providing performance guarantees arising from uncertainties in performance simulation.
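
    A minimal sketch of the simultaneous reconciliation-and-calibration idea is shown below. The toy steady-state "power system" model, measurement values, and standard deviations are assumptions chosen only to show how reconciled states and a model parameter can be estimated together by weighted least squares; the actual SDRMC formulation, GED stages, and RSE meta-modeling are not reproduced.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Toy steady-state model: W = eta * m_dot * dh  (power = efficiency * flow * enthalpy drop).
    # Measurements of m_dot, dh, W at two operating points; eta is the unknown
    # model parameter to calibrate while reconciling the measurements.
    meas = np.array([
        # m_dot [kg/s], dh [kJ/kg], W [kW]
        [100.0, 500.0, 42500.0],
        [ 80.0, 480.0, 32300.0],
    ])
    sigma = np.array([1.0, 5.0, 500.0])        # assumed measurement standard deviations

    def residuals(x):
        eta = x[0]
        states = x[1:].reshape(2, 2)           # reconciled (m_dot, dh) per operating point
        r = []
        for k in range(2):
            m_dot, dh = states[k]
            w_model = eta * m_dot * dh
            # Weighted mismatch between reconciled/modeled values and measurements
            r += [(m_dot - meas[k, 0]) / sigma[0],
                  (dh - meas[k, 1]) / sigma[1],
                  (w_model - meas[k, 2]) / sigma[2]]
        return np.array(r)

    x0 = np.concatenate([[0.9], meas[:, :2].ravel()])
    sol = least_squares(residuals, x0)
    print("calibrated efficiency:", round(sol.x[0], 4))
    print("reconciled states:\n", sol.x[1:].reshape(2, 2).round(2))
    ```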

  6. Advanced Stochastic Collocation Methods for Polynomial Chaos in RAVEN

    NASA Astrophysics Data System (ADS)

    Talbot, Paul W.

    As experiment complexity in fields such as nuclear engineering continually increases, so does the demand for robust computational methods to simulate them. In many simulations, input design parameters and intrinsic experiment properties are sources of uncertainty. Often small perturbations in uncertain parameters have significant impact on the experiment outcome. For instance, in nuclear fuel performance, small changes in fuel thermal conductivity can greatly affect maximum stress on the surrounding cladding. The difficulty of quantifying the impact of input uncertainty in such systems has grown with the complexity of numerical models. Traditionally, uncertainty quantification has been approached using random sampling methods like Monte Carlo. For some models, the input parametric space and corresponding response output space are sufficiently explored with few low-cost calculations. For other models, it is computationally costly to obtain a good understanding of the output space. To combat the expense of random sampling, this research explores the possibilities of using advanced methods in Stochastic Collocation for generalized Polynomial Chaos (SCgPC) as an alternative to traditional uncertainty quantification techniques such as Monte Carlo (MC) and Latin Hypercube Sampling (LHS) methods for applications in nuclear engineering. We consider traditional SCgPC construction strategies as well as truncated polynomial spaces using Total Degree and Hyperbolic Cross constructions. We also consider applying anisotropy (unequal treatment of different dimensions) to the polynomial space, and offer methods whereby optimal levels of anisotropy can be approximated. We contribute development to existing adaptive polynomial construction strategies. Finally, we consider High-Dimensional Model Reduction (HDMR) expansions, using SCgPC representations for the subspace terms, and contribute new adaptive methods to construct them. We apply these methods on a series of models of increasing complexity. We use analytic models of various levels of complexity, then demonstrate performance on two engineering-scale problems: a single-physics nuclear reactor neutronics problem, and a multiphysics fuel cell problem coupling fuels performance and neutronics. Lastly, we demonstrate sensitivity analysis for a time-dependent fuels performance problem. We demonstrate the application of all the algorithms in RAVEN, a production-level uncertainty quantification framework.
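
    For readers unfamiliar with stochastic collocation for polynomial chaos, the following one-dimensional sketch shows the basic mechanism: the response of a placeholder model to a standard-normal input is projected onto probabilists' Hermite polynomials with Gauss-Hermite quadrature, and the expansion coefficients give the mean and variance directly. The model function, order, and quadrature size are arbitrary assumptions; the adaptive, anisotropic, and HDMR variants described above are not shown.

    ```python
    import numpy as np
    from numpy.polynomial import hermite_e as He
    from math import factorial, sqrt, pi

    def model(x):
        # Placeholder "simulation code" response to an uncertain input x ~ N(0, 1)
        return np.exp(0.3 * x) + 0.1 * x**2

    order, n_quad = 6, 12
    nodes, weights = He.hermegauss(n_quad)          # quadrature for weight exp(-x^2/2)
    norm = sqrt(2.0 * pi)                           # integral of that weight

    # Collocation: project the response onto He_k and normalize by E[He_k^2] = k!
    coeffs = []
    for k in range(order + 1):
        He_k = He.hermeval(nodes, [0] * k + [1])
        c_k = np.sum(weights * model(nodes) * He_k) / (norm * factorial(k))
        coeffs.append(c_k)

    mean_pc = coeffs[0]
    var_pc = sum(factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))

    # Monte Carlo reference for comparison
    x = np.random.default_rng(2).normal(size=200_000)
    print(f"gPC : mean={mean_pc:.4f}  var={var_pc:.4f}")
    print(f"MC  : mean={model(x).mean():.4f}  var={model(x).var():.4f}")
    ```

    The point of the comparison is that a handful of deterministic model runs (the quadrature nodes) can replace many random samples when the response is smooth in the uncertain input.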

  7. Daily mean temperature estimate at the US SUFRAD stations as an average of the maximum and minimum temperatures

    DOE PAGES

    Chylek, Petr; Augustine, John A.; Klett, James D.; ...

    2017-09-30

    At thousands of stations worldwide, the mean daily surface air temperature is estimated as a mean of the daily maximum (Tmax) and minimum (Tmin) temperatures. In this paper, we use the NOAA Surface Radiation Budget Network (SURFRAD) of seven US stations with surface air temperature recorded each minute to assess the accuracy of the mean daily temperature estimate as an average of the daily maximum and minimum temperatures and to investigate how the accuracy of the estimate increases with an increasing number of daily temperature observations. We find the average difference between the estimate based on an average of the maximum and minimum temperatures and the average of 1440 1-min daily observations to be -0.05 ± 1.56 °C, based on analyses of a sample of 238 days of temperature observations. Considering determination of the daily mean temperature based on 3, 4, 6, 12, or 24 daily temperature observations, we find that 2, 4, or 6 daily observations do not significantly reduce the uncertainty of the daily mean temperature. A statistically significant bias reduction (95% confidence level) occurs only with 12 or 24 daily observations. The daily mean temperature determination based on 24 hourly observations reduces the sample daily temperature uncertainty to -0.01 ± 0.20 °C. Finally, estimating the parameters of the population of all SURFRAD observations, the 95% confidence interval based on 24 hourly measurements is from -0.025 to 0.004 °C, compared to a confidence interval from -0.15 to 0.05 °C based on the mean of Tmax and Tmin.
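
    The comparison described above can be illustrated with a short sketch; since the SURFRAD minute data are not included here, a synthetic one-day, 1-minute temperature record (diurnal cycle plus random drift) stands in for the observations, and the numbers it produces are illustrative only.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic 1-min surface temperature record for one day:
    # smooth diurnal cycle plus a slow random weather drift.
    minutes = np.arange(1440)
    temp = (15.0
            + 8.0 * np.sin(2.0 * np.pi * (minutes - 540) / 1440)
            + np.cumsum(rng.normal(0.0, 0.02, 1440)))

    true_mean = temp.mean()                       # "truth": average of all 1440 observations
    tmax_tmin = 0.5 * (temp.max() + temp.min())   # conventional (Tmax + Tmin)/2 estimate

    def subsampled_mean(n_obs):
        """Daily mean from n_obs equally spaced observations."""
        idx = np.linspace(0, 1439, n_obs, dtype=int)
        return temp[idx].mean()

    print(f"mean of 1440 obs     : {true_mean:6.2f} °C")
    print(f"(Tmax + Tmin)/2      : {tmax_tmin:6.2f} °C  (bias {tmax_tmin - true_mean:+.2f})")
    for n in (3, 4, 6, 12, 24):
        m = subsampled_mean(n)
        print(f"mean of {n:2d} daily obs : {m:6.2f} °C  (bias {m - true_mean:+.2f})")
    ```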

  8. Uncertainties in observations and climate projections for the North East India

    NASA Astrophysics Data System (ADS)

    Soraisam, Bidyabati; Karumuri, Ashok; D. S., Pai

    2018-01-01

    Northeast India has undergone many climate- and vegetation-related changes in the last few decades due to increased human activities. However, a lack of observations makes it difficult to ascertain the climate change. The study involves the mean, seasonal cycle, trend and extreme-month analysis for summer-monsoon and winter seasons of observed climate data from the Indian Meteorological Department (1° × 1°) and Aphrodite & CRU-reanalysis (both 0.5° × 0.5°), and five regional-climate-model simulations (LMDZ, MPI, GFDL, CNRM and ACCESS) from AR5/CORDEX-South-Asia (0.5° × 0.5°). Long-term (1970-2005) observed minimum and maximum monthly temperature and precipitation, and the corresponding CORDEX-South-Asia data for the historical period (1970-2005) and future projections under RCP4.5 (2011-2060), have been analyzed for long-term trends. A large spread is found across the models in the spatial distributions of various mean maximum/minimum climate statistics, though the models capture a qualitatively similar trend in the corresponding area-averaged seasonal cycles. Our observational analysis broadly suggests that there is no significant trend in rainfall. Significant trends are observed in the area-averaged minimum temperature during winter. All the CORDEX-South-Asia simulations for the future project an insignificant decreasing trend in seasonal precipitation but an increasing trend in both seasonal maximum and minimum temperature over northeast India. The frequencies of extreme monthly maximum and minimum temperatures are projected to increase. It is not clear from the future projections how the extreme rainfall months during JJAS may change. The results show that uncertainty exists in the CORDEX-South-Asia model projections over the region in spite of the relatively high resolution.

  9. Daily mean temperature estimate at the US SUFRAD stations as an average of the maximum and minimum temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chylek, Petr; Augustine, John A.; Klett, James D.

    At thousands of stations worldwide, the mean daily surface air temperature is estimated as a mean of the daily maximum (Tmax) and minimum (Tmin) temperatures. In this paper, we use the NOAA Surface Radiation Budget Network (SURFRAD) of seven US stations with surface air temperature recorded each minute to assess the accuracy of the mean daily temperature estimate as an average of the daily maximum and minimum temperatures and to investigate how the accuracy of the estimate increases with an increasing number of daily temperature observations. We find the average difference between the estimate based on an average of the maximum and minimum temperatures and the average of 1440 1-min daily observations to be -0.05 ± 1.56 °C, based on analyses of a sample of 238 days of temperature observations. Considering determination of the daily mean temperature based on 3, 4, 6, 12, or 24 daily temperature observations, we find that 2, 4, or 6 daily observations do not significantly reduce the uncertainty of the daily mean temperature. A statistically significant bias reduction (95% confidence level) occurs only with 12 or 24 daily observations. The daily mean temperature determination based on 24 hourly observations reduces the sample daily temperature uncertainty to -0.01 ± 0.20 °C. Finally, estimating the parameters of the population of all SURFRAD observations, the 95% confidence interval based on 24 hourly measurements is from -0.025 to 0.004 °C, compared to a confidence interval from -0.15 to 0.05 °C based on the mean of Tmax and Tmin.

  10. The Sensitivity of Earth's Climate History To Changes In The Rates of Biological And Geological Evolution

    NASA Astrophysics Data System (ADS)

    Waltham, D.

    2014-12-01

    The faint young Sun paradox (early Earth had surface liquid water despite solar luminosity 70% of the modern value) implies that our planet's albedo has increased through time and/or greenhouse warming has fallen. The obvious explanation is that negative feedback processes stabilized temperatures. However, the limited temperature data available does not exhibit the expected residual temperature rise and, at least for the Phanerozoic, estimates of climate sensitivity exceed the Planck sensitivity (the zero net-feedback value). The alternate explanation is that biological and geological evolution have tended to cool Earth through time hence countering solar-driven warming. The coincidence that Earth-evolution has roughly cancelled Solar-evolution can then be explained as an emergent property of a complex system (the Gaia hypothesis) or the result of the unavoidable observational bias that Earth's climate history must be compatible with our existence (the anthropic principle). Here, I use a simple climate model to investigate the sensitivity of Earth's climate to changes in the rate of Earth-evolution. Earth-evolution is represented by an effective emissivity which has an intrinsic variation through time (due to continental growth, the evolution of cyanobacteria, orbital fluctuations etc) plus a linear feedback term which enhances emissivity variations. An important feature of this model is a predicted maximum in the radiated-flux versus temperature function. If the increasing solar flux through time had exceeded this value then runaway warming would have occurred. For the best-guess temperature history and climate sensitivity, the Earth has always been within a few percent of this maximum. There is no obvious Gaian explanation for this flux-coincidence but the anthropic principle naturally explains it: If the rate of biological/geological evolution is naturally slow then Earth is a fortunate outlier which evolved just fast enough to avoid solar-induced over-heating. However, there are large uncertainties concerning the temperature history of our planet and concerning climate sensitivity in the Archean and Proterozoic. When these are included, the solar-flux through time might have been as little as 70-90 % of the maximum thus reducing the significance of the flux-coincidence.

  11. Beam localization in HIFU temperature measurements using thermocouples, with application to cooling by large blood vessels.

    PubMed

    Dasgupta, Subhashish; Banerjee, Rupak K; Hariharan, Prasanna; Myers, Matthew R

    2011-02-01

    Experimental studies of thermal effects in high-intensity focused ultrasound (HIFU) procedures are often performed with the aid of fine wire thermocouples positioned within tissue phantoms. Thermocouple measurements are subject to several types of error which must be accounted for before reliable inferences can be made on the basis of the measurements. Thermocouple artifact due to viscous heating is one source of error. A second is the uncertainty regarding the position of the beam relative to the target location or the thermocouple junction, due to the error in positioning the beam at the junction. This paper presents a method for determining the location of the beam relative to a fixed pair of thermocouples. The localization technique reduces the uncertainty introduced by positioning errors associated with very narrow HIFU beams. The technique is presented in the context of an investigation into the effect of blood flow through large vessels on the efficacy of HIFU procedures targeted near the vessel. Application of the beam localization method allowed conclusions regarding the effects of blood flow to be drawn from previously inconclusive (because of localization uncertainties) data. Comparison of the position-adjusted transient temperature profiles for flow rates of 0 and 400 ml/min showed that blood flow can reduce temperature elevations by more than 10%, when the HIFU focus is within a 2 mm distance from the vessel wall. At acoustic power levels of 17.3 and 24.8 W there is a 20- to 70-fold decrease in thermal dose due to the convective cooling effect of blood flow, implying a shrinkage in lesion size. The beam-localization technique also revealed the level of thermocouple artifact as a function of sonication time, providing investigators with an indication of the quality of thermocouple data for a given exposure time. The maximum artifact was found to be double the measured temperature rise during the initial few seconds of sonication. Copyright © 2010 Elsevier B.V. All rights reserved.

  12. Adaptive framework to better characterize errors of apriori fluxes and observational residuals in a Bayesian setup for the urban flux inversions.

    NASA Astrophysics Data System (ADS)

    Ghosh, S.; Lopez-Coto, I.; Prasad, K.; Karion, A.; Mueller, K.; Gourdji, S.; Martin, C.; Whetstone, J. R.

    2017-12-01

    The National Institute of Standards and Technology (NIST) supports the North-East Corridor Baltimore Washington (NEC-B/W) project and Indianapolis Flux Experiment (INFLUX) aiming to quantify sources of greenhouse gas (GHG) emissions as well as their uncertainties. These projects employ different flux estimation methods including top-down inversion approaches. The traditional Bayesian inversion method estimates emission distributions by updating prior information using atmospheric GHG observations coupled to an atmospheric transport and dispersion model. The magnitude of the update is dependent upon the observed enhancement along with the assumed errors such as those associated with prior information and the atmospheric transport and dispersion model. These errors are specified within the inversion covariance matrices. The assumed structure and magnitude of the specified errors can have a large impact on the emission estimates from the inversion. The main objective of this work is to build a data-adaptive model for these covariance matrices. We construct a synthetic data experiment using a Kalman Filter inversion framework (Lopez et al., 2017) employing different configurations of the transport and dispersion model and an assumed prior. Unlike previous traditional Bayesian approaches, we estimate posterior emissions using regularized sample covariance matrices associated with prior errors to investigate whether the structure of the matrices helps to better recover our hypothetical true emissions. To incorporate transport model error, we use an ensemble of transport models combined with a space-time analytical covariance to construct a covariance that accounts for errors in space and time. A Kalman Filter is then run using these covariances along with Maximum Likelihood Estimates (MLE) of the involved parameters. Preliminary results indicate that specifying spatio-temporally varying errors in the error covariances can improve the flux estimates and uncertainties. We also demonstrate that differences between the modeled and observed meteorology can be used to predict uncertainties associated with atmospheric transport and dispersion modeling which can help improve the skill of an inversion at urban scales.
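
    The role of the two covariance matrices in such an inversion can be seen in a single Bayesian/Kalman update step, sketched below with entirely synthetic footprints, fluxes, and error magnitudes (this is not the NIST framework, only the generic update it builds on).

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    n_flux, n_obs = 20, 8
    x_prior = np.full(n_flux, 1.0)                   # prior flux estimates
    H = rng.uniform(0.0, 0.5, (n_obs, n_flux))       # footprints: sensitivity of obs to fluxes

    # Error covariance matrices -- the quantities whose assumed structure and
    # magnitude drive the behaviour of the inversion.
    idx = np.arange(n_flux)
    B = 0.25 * np.exp(-np.abs(idx[:, None] - idx[None, :]) / 3.0)   # prior flux errors
    R = np.diag(np.full(n_obs, 0.5**2))                             # model-data mismatch

    # Synthetic truth and observations consistent with those covariances
    x_true = x_prior + rng.multivariate_normal(np.zeros(n_flux), B)
    y = H @ x_true + rng.multivariate_normal(np.zeros(n_obs), R)

    # One Kalman/Bayesian update step
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    x_post = x_prior + K @ (y - H @ x_prior)
    P_post = (np.eye(n_flux) - K @ H) @ B

    print("prior RMSE        :", round(float(np.sqrt(np.mean((x_prior - x_true) ** 2))), 3))
    print("posterior RMSE    :", round(float(np.sqrt(np.mean((x_post - x_true) ** 2))), 3))
    print("mean posterior std:", round(float(np.sqrt(np.diag(P_post)).mean()), 3))
    ```

    Misspecifying B or R in this update changes both the posterior fluxes and their reported uncertainty, which is why the abstract focuses on estimating those matrices adaptively from the data.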

  13. EVALUATION OF THE COMMUNITY MULTISCALE AIR QUALITY (CMAQ) MODEL VERSION 4.5: UNCERTAINTIES AND SENSITIVITIES IMPACTING MODEL PERFORMANCE: PART I - OZONE

    EPA Science Inventory

    This study examines ozone (O3) predictions from the Community Multiscale Air Quality (CMAQ) model version 4.5 and discusses potential factors influencing the model results. Daily maximum 8-hr average O3 levels are largely underpredicted when observed O...

  14. Epistemic uncertainty propagation in energy flows between structural vibrating systems

    NASA Astrophysics Data System (ADS)

    Xu, Menghui; Du, Xiaoping; Qiu, Zhiping; Wang, Chong

    2016-03-01

    A dimension-wise method for predicting fuzzy energy flows between structural vibrating systems coupled by joints with epistemic uncertainties is established. Based on a Legendre polynomial approximation at α=0, both the minimum and maximum point vectors of the energy flow of interest are calculated dimension by dimension within the space spanned by the interval parameters obtained from the fuzzy parameters at α=0, and the resulting interval bounds are used to assemble the fuzzy energy flows of interest. Besides the proposed method, the vertex method as well as two current methods is also applied. Comparisons among the results of the different methods are carried out on two numerical examples, and the accuracy of all methods is verified against Monte Carlo simulation.

  15. Best estimate of luminal cross-sectional area of coronary arteries from angiograms

    NASA Technical Reports Server (NTRS)

    Lee, P. L.; Selzer, R. H.

    1988-01-01

    We have reexamined the problem of estimating the luminal area of an elliptically-shaped coronary artery cross section from two or more radiographic diameter measurements. The expected error is found to be much smaller than the maximum potential error. In the case of two orthogonal views, closed form expressions have been derived for calculating the area and the uncertainty. Assuming that the underlying ellipse has limited ellipticity (major/minor axis ratio less than five), it is shown that the average uncertainty in the area is less than 14 percent. When more than two views are available, we suggest using a least-squares fit method to extract all available information from the data.
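
    The paper's closed-form expressions are not reproduced in this record, so the sketch below only illustrates the simplest special case: when the two orthogonal projections happen to align with the ellipse axes, the area is A = π d1 d2 / 4, and first-order propagation gives its uncertainty from the diameter measurement errors. The diameter values and uncertainties used are assumptions.

    ```python
    import math

    def elliptical_area(d1, d2, sigma_d1, sigma_d2):
        """Luminal area from two orthogonal diameter measurements (axis-aligned case),
        with first-order propagation of the measurement uncertainties."""
        area = math.pi * d1 * d2 / 4.0
        # dA/dd1 = pi*d2/4 and dA/dd2 = pi*d1/4, combined in quadrature
        sigma_area = math.hypot(math.pi * d2 / 4.0 * sigma_d1,
                                math.pi * d1 / 4.0 * sigma_d2)
        return area, sigma_area

    a, s = elliptical_area(3.2, 2.5, 0.1, 0.1)   # diameters and uncertainties in mm
    print(f"area = {a:.2f} ± {s:.2f} mm^2")
    ```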

  16. Bayesian explorations of fault slip evolution over the earthquake cycle

    NASA Astrophysics Data System (ADS)

    Duputel, Z.; Jolivet, R.; Benoit, A.; Gombert, B.

    2017-12-01

    The ever-increasing amount of geophysical data continuously opens new perspectives on fundamental aspects of the seismogenic behavior of active faults. In this context, the recent fleet of SAR satellites including Sentinel-1 and COSMO-SkyMED permits the use of InSAR for time-dependent slip modeling with unprecedented resolution in time and space. However, existing time-dependent slip models rely on spatial smoothing regularization schemes, which can produce unrealistically smooth slip distributions. In addition, these models usually do not include uncertainty estimates thereby reducing the utility of such estimates. Here, we develop an entirely new approach to derive probabilistic time-dependent slip models. This Markov-Chain Monte Carlo method involves a series of transitional steps to predict and update posterior Probability Density Functions (PDFs) of slip as a function of time. We assess the viability of our approach using various slow-slip event scenarios. Using a dense set of SAR images, we also use this method to quantify the spatial distribution and temporal evolution of slip along a creeping segment of the North Anatolian Fault. This allows us to track a shallow aseismic slip transient lasting for about a month with a maximum slip of about 2 cm.

  17. Detection of the large meteoroid/NEO flux using infrasound: recent detection of the November 21, 1995, Colorado fireball

    NASA Astrophysics Data System (ADS)

    Revelle, Douglas O.; Whitaker, Rodney W.

    1996-10-01

    During the early morning of November 21, 1995, a fireball as bright as the full moon entered the atmosphere over southeastern Colorado and initially produced audible sonic boom reports from Texas to Wyoming. The event was detected locally by a security video camera which showed the reflection of the fireball event on the hood of a truck. The camera also recorded tree shadows cast by the light of the fireball. This recording includes the audio signal of a strong double boom as well. Subsequent investigation of the array near Los Alamos, New Mexico operated by the Los Alamos National Laboratory as part of its commitment to the Comprehensive Test Ban Treaty negotiations, showed the presence of an infrasonic signal from the proper direction at about the correct time for this fireball. The Los Alamos array is a four-element infrasonic system in near-continuous operation on the laboratory property. The nominal spacing between the array elements is 212 m. The basic sensor is a Globe Universal Sciences Model 100C microphone whose response is flat from about 0.1 to 300 Hz (which we filter at the high frequency end to be limited to 20 Hz). Each low frequency microphone is connected to a set of twelve porous hoses to reduce wind noise. The characteristics of the observed signal include the onset arrival time of 0939:20 UT (0239:20 MST), with a maximum timing uncertainty of plus or minus 2 minutes, the signal onset time delay from the appearance of the fireball of 21 minutes, 20 seconds, total signal duration of 2 minutes 10 seconds, the source location determined to be toward 31 degrees from true north, the horizontal trace velocity of 429 m/sec, the signal velocity of 0.29 plus or minus 0.03 km/sec, assuming a 375 km horizontal range to the fireball, the dominant signal frequency content of 0.25 to 0.84 Hz (analyzed in the frequency interval from 0.2 to 2.0 Hz), the maximum signal cross-correlation of 0.97 and the maximum signal amplitude of 2.0 plus or minus 0.1 microbars. Also, on the basis of the signal period at maximum amplitude, we estimate a probable source energy for this event of between 10 to 100 tons of TNT (53.0 tons nominal).
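
    A small worked check of the quoted signal velocity, using only numbers stated in the abstract (the assumed 375 km horizontal range and the 21 min 20 s onset delay):

    ```python
    # Worked check of the quoted infrasonic signal velocity (values from the abstract).
    range_km = 375.0                  # assumed horizontal range to the fireball
    delay_s = 21 * 60 + 20            # signal onset delay: 21 min 20 s
    v_signal = range_km / delay_s     # km/s

    print(f"signal velocity ≈ {v_signal:.2f} km/s")   # ≈ 0.29 km/s, as reported
    ```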

  18. Robust control of burst suppression for medical coma

    NASA Astrophysics Data System (ADS)

    Westover, M. Brandon; Kim, Seong-Eun; Ching, ShiNung; Purdon, Patrick L.; Brown, Emery N.

    2015-08-01

    Objective. Medical coma is an anesthetic-induced state of brain inactivation, manifest in the electroencephalogram by burst suppression. Feedback control can be used to regulate burst suppression, however, previous designs have not been robust. Robust control design is critical under real-world operating conditions, subject to substantial pharmacokinetic and pharmacodynamic parameter uncertainty and unpredictable external disturbances. We sought to develop a robust closed-loop anesthesia delivery (CLAD) system to control medical coma. Approach. We developed a robust CLAD system to control the burst suppression probability (BSP). We developed a novel BSP tracking algorithm based on realistic models of propofol pharmacokinetics and pharmacodynamics. We also developed a practical method for estimating patient-specific pharmacodynamics parameters. Finally, we synthesized a robust proportional integral controller. Using a factorial design spanning patient age, mass, height, and gender, we tested whether the system performed within clinically acceptable limits. Throughout all experiments we subjected the system to disturbances, simulating treatment of refractory status epilepticus in a real-world intensive care unit environment. Main results. In 5400 simulations, CLAD behavior remained within specifications. Transient behavior after a step in target BSP from 0.2 to 0.8 exhibited a rise time (the median (min, max)) of 1.4 [1.1, 1.9] min; settling time, 7.8 [4.2, 9.0] min; and percent overshoot of 9.6 [2.3, 10.8]%. Under steady state conditions the CLAD system exhibited a median error of 0.1 [-0.5, 0.9]%; inaccuracy of 1.8 [0.9, 3.4]%; oscillation index of 1.8 [0.9, 3.4]%; and maximum instantaneous propofol dose of 4.3 [2.1, 10.5] mg kg-1. The maximum hourly propofol dose was 4.3 [2.1, 10.3] mg kg-1 h-1. Performance fell within clinically acceptable limits for all measures. Significance. A CLAD system designed using robust control theory achieves clinically acceptable performance in the presence of realistic unmodeled disturbances and in spite of realistic model uncertainty, while maintaining infusion rates within acceptable safety limits.

  19. Robust control of burst suppression for medical coma

    PubMed Central

    Westover, M Brandon; Kim, Seong-Eun; Ching, ShiNung; Purdon, Patrick L; Brown, Emery N

    2015-01-01

    Objective Medical coma is an anesthetic-induced state of brain inactivation, manifest in the electroencephalogram by burst suppression. Feedback control can be used to regulate burst suppression, however, previous designs have not been robust. Robust control design is critical under real-world operating conditions, subject to substantial pharmacokinetic and pharmacodynamic parameter uncertainty and unpredictable external disturbances. We sought to develop a robust closed-loop anesthesia delivery (CLAD) system to control medical coma. Approach We developed a robust CLAD system to control the burst suppression probability (BSP). We developed a novel BSP tracking algorithm based on realistic models of propofol pharmacokinetics and pharmacodynamics. We also developed a practical method for estimating patient-specific pharmacodynamics parameters. Finally, we synthesized a robust proportional integral controller. Using a factorial design spanning patient age, mass, height, and gender, we tested whether the system performed within clinically acceptable limits. Throughout all experiments we subjected the system to disturbances, simulating treatment of refractory status epilepticus in a real-world intensive care unit environment. Main results In 5400 simulations, CLAD behavior remained within specifications. Transient behavior after a step in target BSP from 0.2 to 0.8 exhibited a rise time (the median (min, max)) of 1.4 [1.1, 1.9] min; settling time, 7.8 [4.2, 9.0] min; and percent overshoot of 9.6 [2.3, 10.8]%. Under steady state conditions the CLAD system exhibited a median error of 0.1 [−0.5, 0.9]%; inaccuracy of 1.8 [0.9, 3.4]%; oscillation index of 1.8 [0.9, 3.4]%; and maximum instantaneous propofol dose of 4.3 [2.1, 10.5] mg kg−1. The maximum hourly propofol dose was 4.3 [2.1, 10.3] mg kg−1 h−1. Performance fell within clinically acceptable limits for all measures. Significance A CLAD system designed using robust control theory achieves clinically acceptable performance in the presence of realistic unmodeled disturbances and in spite of realistic model uncertainty, while maintaining infusion rates within acceptable safety limits. PMID:26020243

  20. Characterising large scenario earthquakes and their influence on NDSHA maps

    NASA Astrophysics Data System (ADS)

    Magrin, Andrea; Peresan, Antonella; Panza, Giuliano F.

    2016-04-01

    The neo-deterministic approach to seismic zoning, NDSHA, relies on physically sound modelling of ground shaking from a large set of credible scenario earthquakes, which can be defined based on seismic history and seismotectonics, as well as incorporating information from a wide set of geological and geophysical data (e.g. morphostructural features and present day deformation processes identified by Earth observations). NDSHA is based on the calculation of complete synthetic seismograms; hence it does not make use of empirical attenuation models (i.e. ground motion prediction equations). From the set of synthetic seismograms, maps of seismic hazard that describe the maximum of different ground shaking parameters at the bedrock can be produced. As a rule, NDSHA defines the hazard as the envelope ground shaking at the site, computed from all of the defined seismic sources; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated to each site. In this way, the standard NDSHA maps make it possible to account for the largest observed or credible earthquake sources identified in the region in a quite straightforward manner. This study aims to assess the influence of unavoidable uncertainties in the characterisation of large scenario earthquakes on the NDSHA estimates. The treatment of uncertainties is performed by sensitivity analyses for key modelling parameters and accounts for the uncertainty in the prediction of fault radiation and in the use of Green's functions for a given medium. Results from sensitivity analyses with respect to the definition of possible seismic sources are discussed. A key parameter is the magnitude of the seismic sources used in the simulation, which is based on information from the earthquake catalogue, seismogenic zones and seismogenic nodes. The largest part of the existing Italian catalogues is based on macroseismic intensities; a rough estimate of the error in peak values of ground motion is therefore a factor of two, intrinsic in MCS and other discrete scales. A simple test supports this hypothesis: an increase of 0.5 in the magnitude, i.e. one degree in epicentral MCS intensity, of all sources used in the national scale seismic zoning produces a doubling of the maximum ground motion. The analysis of uncertainty in the ground motion maps, due to random catalogue errors in magnitude and localization, shows a non-uniform distribution of ground shaking uncertainty. The available information from catalogues of past events, which is not complete and may well not be representative of future earthquakes, can be substantially complemented using independent indicators of the seismogenic potential of a given area, such as active faulting data and seismogenic nodes.

  1. Comparison of the genetic algorithm and incremental optimisation routines for a Bayesian inverse modelling based network design

    NASA Astrophysics Data System (ADS)

    Nickless, A.; Rayner, P. J.; Erni, B.; Scholes, R. J.

    2018-05-01

    The design of an optimal network of atmospheric monitoring stations for the observation of carbon dioxide (CO2) concentrations can be obtained by applying an optimisation algorithm to a cost function based on minimising posterior uncertainty in the CO2 fluxes obtained from a Bayesian inverse modelling solution. Two candidate optimisation methods assessed were the evolutionary algorithm: the genetic algorithm (GA), and the deterministic algorithm: the incremental optimisation (IO) routine. This paper assessed the ability of the IO routine, in comparison to the more computationally demanding GA routine, to optimise the placement of a five-member network of CO2 monitoring sites located in South Africa. The comparison considered the reduction in uncertainty of the overall flux estimate, the spatial similarity of solutions, and computational requirements. Although the IO routine failed to find the solution with the global maximum uncertainty reduction, the resulting solution had only fractionally lower uncertainty reduction compared with the GA, and at only a quarter of the computational resources used by the lowest specified GA algorithm. The GA solution set showed more inconsistency if the number of iterations or population size was small, and more so for a complex prior flux covariance matrix. If the GA completed with a sub-optimal solution, these solutions were similar in fitness to the best available solution. Two additional scenarios were considered, with the objective of creating circumstances where the GA may outperform the IO. The first scenario considered an established network, where the optimisation was required to add an additional five stations to an existing five-member network. In the second scenario the optimisation was based only on the uncertainty reduction within a subregion of the domain. The GA was able to find a better solution than the IO under both scenarios, but with only a marginal improvement in the uncertainty reduction. These results suggest that the best use of resources for the network design problem would be to improve the prior estimates of the flux uncertainties rather than to invest these resources in running a complex evolutionary optimisation algorithm. The authors recommend that, if time and computational resources allow, multiple optimisation techniques be used as part of a comprehensive suite of sensitivity tests when performing such an optimisation exercise. This will provide a selection of best solutions which could be ranked based on their utility and practicality.
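
    The incremental (greedy) routine can be sketched as follows: at each step, add the candidate station that most reduces the total posterior flux variance of a linear Bayesian inversion. The prior covariance, footprints, and observation error used here are synthetic placeholders, not the South African network setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    n_flux, n_candidates, n_pick = 30, 15, 5
    B = np.diag(np.full(n_flux, 1.0))                    # prior flux error covariance
    H_all = rng.uniform(0.0, 1.0, (n_candidates, n_flux))  # candidate-station footprints
    r = 0.5 ** 2                                         # observation error variance

    def posterior_trace(selected):
        """Total posterior flux variance for a given set of selected stations."""
        if not selected:
            return np.trace(B)
        H = H_all[selected]
        R = np.eye(len(selected)) * r
        P = np.linalg.inv(np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H)
        return np.trace(P)

    network, remaining = [], list(range(n_candidates))
    for _ in range(n_pick):
        best = min(remaining, key=lambda s: posterior_trace(network + [s]))
        network.append(best)
        remaining.remove(best)
        print(f"added station {best:2d} -> total posterior variance "
              f"{posterior_trace(network):.2f} (prior {np.trace(B):.2f})")
    ```

    The greedy routine evaluates only a modest number of candidate additions per step, which is the source of its computational advantage over a population-based GA; the trade-off discussed above is that it is not guaranteed to find the global optimum.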

  2. Single particle maximum likelihood reconstruction from superresolution microscopy images

    PubMed Central

    Verdier, Timothée; Gunzenhauser, Julia; Manley, Suliana; Castelnovo, Martin

    2017-01-01

    Point localization superresolution microscopy enables fluorescently tagged molecules to be imaged beyond the optical diffraction limit, reaching single molecule localization precisions down to a few nanometers. For small objects whose sizes are a few times this precision, localization uncertainty prevents the straightforward extraction of a structural model from the reconstructed images. We demonstrate in the present work that this limitation can be overcome at the single particle level, requiring no particle averaging, by using a maximum likelihood reconstruction (MLR) method perfectly suited to the stochastic nature of such superresolution imaging. We validate this method by extracting structural information from both simulated and experimental PALM data of immature virus-like particles of the Human Immunodeficiency Virus (HIV-1). MLR allows us to measure the radii of individual viruses with precision of a few nanometers and confirms the incomplete closure of the viral protein lattice. The quantitative results of our analysis are consistent with previous cryoelectron microscopy characterizations. Our study establishes the framework for a method that can be broadly applied to PALM data to determine the structural parameters for an existing structural model, and is particularly well suited to heterogeneous features due to its single particle implementation. PMID:28253349

  3. Nearshore Tsunami Inundation Model Validation: Toward Sediment Transport Applications

    USGS Publications Warehouse

    Apotsos, Alex; Buckley, Mark; Gelfenbaum, Guy; Jaffe, Bruce; Vatvani, Deepak

    2011-01-01

    Model predictions from a numerical model, Delft3D, based on the nonlinear shallow water equations are compared with analytical results and laboratory observations from seven tsunami-like benchmark experiments, and with field observations from the 26 December 2004 Indian Ocean tsunami. The model accurately predicts the magnitude and timing of the measured water levels and flow velocities, as well as the magnitude of the maximum inundation distance and run-up, for both breaking and non-breaking waves. The shock-capturing numerical scheme employed describes well the total decrease in wave height due to breaking, but does not reproduce the observed shoaling near the break point. The maximum water levels observed onshore near Kuala Meurisi, Sumatra, following the 26 December 2004 tsunami are well predicted given the uncertainty in the model setup. The good agreement between the model predictions and the analytical results and observations demonstrates that the numerical solution and wetting and drying methods employed are appropriate for modeling tsunami inundation for breaking and non-breaking long waves. Extension of the model to include sediment transport may be appropriate for long, non-breaking tsunami waves. Using available sediment transport formulations, the sediment deposit thickness at Kuala Meurisi is predicted generally within a factor of 2.

  4. On the rejection of internal and external disturbances in a wind energy conversion system with direct-driven PMSG.

    PubMed

    Li, Shengquan; Zhang, Kezhao; Li, Juan; Liu, Chao

    2016-03-01

    This paper deals with the critical issue in a wind energy conversion system (WECS) based on a direct-driven permanent magnet synchronous generator (PMSG): the rejection of lumped disturbance, including the system uncertainties in the internal dynamics and unknown external forces. To simultaneously track the motor speed in real time and capture the maximum power, a maximum power point tracking strategy is proposed based on active disturbance rejection control (ADRC) theory. In real application, system inertia, drive torque and some other parameters change in a wide range with the variations of disturbances and wind speeds, which substantially degrade the performance of WECS. The ADRC design must incorporate the available model information into an extended state observer (ESO) to compensate the lumped disturbance efficiently. Based on this principle, a model-compensation ADRC is proposed in this paper. Simulation study is conducted to evaluate the performance of the proposed control strategy. It is shown that the effect of lumped disturbance is compensated in a more effective way compared with the traditional ADRC approach. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  5. Spatiotemporal modeling of PM2.5 concentrations at the national scale combining land use regression and Bayesian maximum entropy in China.

    PubMed

    Chen, Li; Gao, Shuang; Zhang, Hui; Sun, Yanling; Ma, Zhenxing; Vedal, Sverre; Mao, Jian; Bai, Zhipeng

    2018-05-03

    Concentrations of particulate matter with aerodynamic diameter <2.5 μm (PM2.5) are relatively high in China. Estimation of PM2.5 exposure is complex because PM2.5 exhibits complex spatiotemporal patterns. To improve the validity of exposure predictions, several methods have been developed and applied worldwide. A hybrid approach combining a land use regression (LUR) model and Bayesian Maximum Entropy (BME) interpolation of the LUR space-time residuals was developed to estimate the PM2.5 concentrations on a national scale in China. This hybrid model could potentially provide more valid predictions than a commonly-used LUR model. The LUR/BME model had good performance characteristics, with R² = 0.82 and root mean square error (RMSE) of 4.6 μg/m³. Prediction errors of the LUR/BME model were reduced by incorporating soft data accounting for data uncertainty, with the R² increasing by 6%. The performance of LUR/BME is better than OK/BME. The LUR/BME model is the most accurate fine spatial scale PM2.5 model developed to date for China. Copyright © 2018. Published by Elsevier Ltd.

  6. Prediction uncertainty and data worth assessment for groundwater transport times in an agricultural catchment

    NASA Astrophysics Data System (ADS)

    Zell, Wesley O.; Culver, Teresa B.; Sanford, Ward E.

    2018-06-01

    Uncertainties about the age of base-flow discharge can have serious implications for the management of degraded environmental systems where subsurface pathways, and the ongoing release of pollutants that accumulated in the subsurface during past decades, dominate the water quality signal. Numerical groundwater models may be used to estimate groundwater return times and base-flow ages and thus predict the time required for stakeholders to see the results of improved agricultural management practices. However, the uncertainty inherent in the relationship between (i) the observations of atmospherically-derived tracers that are required to calibrate such models and (ii) the predictions of system age that the observations inform has not been investigated. For example, few if any studies have assessed the uncertainty of numerically-simulated system ages or evaluated the uncertainty reductions that may result from the expense of collecting additional subsurface tracer data. In this study we combine numerical flow and transport modeling of atmospherically-derived tracers with prediction uncertainty methods to accomplish four objectives. First, we show the relative importance of head, discharge, and tracer information for characterizing response times in a uniquely data rich catchment that includes 266 age-tracer measurements (SF6, CFCs, and 3H) in addition to long term monitoring of water levels and stream discharge. Second, we calculate uncertainty intervals for model-simulated base-flow ages using both linear and non-linear methods, and find that the prediction sensitivity vector used by linear first-order second-moment methods results in much larger uncertainties than non-linear Monte Carlo methods operating on the same parameter uncertainty. Third, by combining prediction uncertainty analysis with multiple models of the system, we show that data-worth calculations and monitoring network design are sensitive to variations in the amount of water leaving the system via stream discharge and irrigation withdrawals. Finally, we demonstrate a novel model-averaged computation of potential data worth that can account for these uncertainties in model structure.
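
    The two uncertainty estimates being compared (linear FOSM via a sensitivity vector versus nonlinear Monte Carlo) can be illustrated with a toy prediction function standing in for simulated base-flow age; the functional form, parameter values, and covariance below are assumptions and carry no hydrological meaning beyond the illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Toy "prediction" of base-flow age as a nonlinear function of two parameters.
    def predict(theta):
        porosity, recharge = theta
        return 50.0 * porosity / recharge

    theta_hat = np.array([0.30, 0.25])                 # calibrated parameter values
    C = np.array([[0.03**2, 0.0],                      # assumed parameter covariance
                  [0.0,     0.05**2]])

    # FOSM: linearize around theta_hat using a finite-difference sensitivity vector
    eps = 1e-6
    s = np.array([(predict(theta_hat + eps * e) - predict(theta_hat)) / eps
                  for e in np.eye(2)])
    var_fosm = s @ C @ s

    # Nonlinear Monte Carlo with the same parameter covariance
    samples = rng.multivariate_normal(theta_hat, C, 20_000)
    ages = np.array([predict(t) for t in samples])

    print(f"FOSM        : mean={predict(theta_hat):.1f}  std={np.sqrt(var_fosm):.1f} yr")
    print(f"Monte Carlo : mean={ages.mean():.1f}  std={ages.std():.1f} yr")
    ```

    For a strongly nonlinear prediction function the two standard deviations can differ noticeably, which is the kind of discrepancy the second objective above quantifies for real base-flow ages.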

  7. Climate data induced uncertainty in model-based estimations of terrestrial primary productivity

    NASA Astrophysics Data System (ADS)

    Wu, Zhendong; Ahlström, Anders; Smith, Benjamin; Ardö, Jonas; Eklundh, Lars; Fensholt, Rasmus; Lehsten, Veiko

    2017-06-01

    Model-based estimations of historical fluxes and pools of the terrestrial biosphere differ substantially. These differences arise not only from differences between models but also from differences in the environmental and climatic data used as input to the models. Here we investigate the role of uncertainties in historical climate data by performing simulations of terrestrial gross primary productivity (GPP) using a process-based dynamic vegetation model (LPJ-GUESS) forced by six different climate datasets. We find that the climate induced uncertainty, defined as the range among historical simulations in GPP when forcing the model with the different climate datasets, can be as high as 11 Pg C yr-1 globally (9% of mean GPP). We also assessed a hypothetical maximum climate data induced uncertainty by combining climate variables from different datasets, which resulted in significantly larger uncertainties of 41 Pg C yr-1 globally or 32% of mean GPP. The uncertainty is partitioned into components associated to the three main climatic drivers, temperature, precipitation, and shortwave radiation. Additionally, we illustrate how the uncertainty due to a given climate driver depends both on the magnitude of the forcing data uncertainty (climate data range) and the apparent sensitivity of the modeled GPP to the driver (apparent model sensitivity). We find that LPJ-GUESS overestimates GPP compared to an empirically based GPP data product in all land cover classes except for tropical forests. Tropical forests emerge as a disproportionate source of uncertainty in GPP estimation both in the simulations and empirical data products. The tropical forest uncertainty is most strongly associated with shortwave radiation and precipitation forcing, of which climate data range contributes more to the overall uncertainty than apparent model sensitivity to forcing. Globally, precipitation dominates the climate induced uncertainty over nearly half of the vegetated land area, which is mainly due to climate data range and less so due to the apparent model sensitivity. Overall, climate data ranges are found to contribute more to the climate induced uncertainty than apparent model sensitivity to forcing. Our study highlights the need to better constrain tropical climate, and demonstrates that uncertainty caused by climatic forcing data must be considered when comparing and evaluating carbon cycle model results and empirical datasets.

  8. Human Time-Frequency Acuity Beats the Fourier Uncertainty Principle

    NASA Astrophysics Data System (ADS)

    Oppenheim, Jacob N.; Magnasco, Marcelo O.

    2013-01-01

    The time-frequency uncertainty principle states that the product of the temporal and frequency extents of a signal cannot be smaller than 1/(4π). We study human ability to simultaneously judge the frequency and the timing of a sound. Our subjects often exceeded the uncertainty limit, sometimes by more than tenfold, mostly through remarkable timing acuity. Our results establish a lower bound for the nonlinearity and complexity of the algorithms employed by our brains in parsing transient sounds, rule out simple “linear filter” models of early auditory processing, and highlight timing acuity as a central feature in auditory object processing.
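
    The stated bound can be checked numerically: for a Gaussian test pulse, the product of the temporal and spectral widths (standard deviations of the intensity distributions) equals the limit 1/(4π). The discretization parameters in the sketch below are arbitrary.

    ```python
    import numpy as np

    # Gaussian test pulse
    n, dt = 2 ** 14, 1e-4                    # number of samples and sampling interval [s]
    t = (np.arange(n) - n / 2) * dt
    sigma = 5e-3                             # pulse parameter [s]
    s = np.exp(-t**2 / (4 * sigma**2))

    def rms_width(x, w):
        """Standard deviation of x under (unnormalized) weights w."""
        p = w / w.sum()
        mu = np.sum(x * p)
        return np.sqrt(np.sum((x - mu) ** 2 * p))

    dt_width = rms_width(t, np.abs(s) ** 2)              # temporal extent

    S = np.fft.fft(s)
    f = np.fft.fftfreq(n, dt)
    df_width = rms_width(f, np.abs(S) ** 2)              # spectral extent

    print(f"Δt·Δf = {dt_width * df_width:.5f}  (limit 1/(4π) = {1 / (4 * np.pi):.5f})")
    ```

    The experimental finding above is that human listeners can jointly judge timing and frequency far better than a single such product would suggest, which is evidence for nonlinear processing rather than a violation of Fourier analysis itself.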

  9. Aerodynamic Analysis of Simulated Heat Shield Recession for the Orion Command Module

    NASA Technical Reports Server (NTRS)

    Bibb, Karen L.; Alter, Stephen J.; Mcdaniel, Ryan D.

    2008-01-01

    The aerodynamic effects of the recession of the ablative thermal protection system for the Orion Command Module of the Crew Exploration Vehicle are important for the vehicle guidance. At the present time, the aerodynamic effects of recession are handled indirectly within the Orion aerodynamic database, with an additional safety factor placed on the uncertainty bounds. This study is an initial attempt to quantify the effects for a particular set of recessed geometry shapes, in order to provide more rigorous analysis for managing recession effects within the aerodynamic database. The aerodynamic forces and moments for the baseline and recessed geometries were computed at several trajectory points using multiple CFD codes, both viscous and inviscid. The resulting aerodynamics for the baseline and recessed geometries were compared. The forces (lift, drag) show negligible differences between baseline and recessed geometries. Generally, the moments show a difference between baseline and recessed geometries that correlates with the maximum amount of recession of the geometry. The difference between the pitching moments for the baseline and recessed geometries increases as Mach number decreases (and the recession is greater), and reaches a value of -0.0026 for the lowest Mach number. The change in trim angle of attack increases from approximately 0.5 deg at M = 28.7 to approximately 1.3 deg at M = 6, and is consistent with a previous analysis with a lower fidelity engineering tool. This correlation of the present results with the engineering tool results supports the continued use of the engineering tool for future work. The present analysis suggests there does not need to be an uncertainty due to recession in the Orion aerodynamic database for the force quantities. The magnitude of the change in pitching moment due to recession is large enough to warrant inclusion in the aerodynamic database. An increment in the uncertainty for pitching moment could be calculated from these results and included in the development of the aerodynamic database uncertainty for pitching moment.

  10. Providing peak river flow statistics and forecasting in the Niger River basin

    NASA Astrophysics Data System (ADS)

    Andersson, Jafet C. M.; Ali, Abdou; Arheimer, Berit; Gustafsson, David; Minoungou, Bernard

    2017-08-01

    Flooding is a growing concern in West Africa. Improved quantification of discharge extremes and associated uncertainties is needed to improve infrastructure design, and operational forecasting is needed to provide timely warnings. In this study, we use discharge observations, a hydrological model (Niger-HYPE) and extreme value analysis to estimate peak river flow statistics (e.g. the discharge magnitude with a 100-year return period) across the Niger River basin. To test the model's capacity for predicting peak flows, we compared 30-year maximum discharge and peak flow statistics derived from the model vs. derived from nine observation stations. The results indicate that the model simulates peak discharge reasonably well (on average + 20%). However, the peak flow statistics have a large uncertainty range, which ought to be considered in infrastructure design. We then applied the methodology to derive basin-wide maps of peak flow statistics and their associated uncertainty. The results indicate that the method is applicable across the hydrologically active part of the river basin, and that the uncertainty varies substantially depending on location. Subsequently, we used the most recent bias-corrected climate projections to analyze potential changes in peak flow statistics in a changed climate. The results are generally ambiguous, with consistent changes only in very few areas. To test the forecasting capacity, we ran Niger-HYPE with a combination of meteorological data sets for the 2008 high-flow season and compared with observations. The results indicate reasonable forecasting capacity (on average 17% deviation), but additional years should also be evaluated. We finish by presenting a strategy and pilot project which will develop an operational flood monitoring and forecasting system based on in-situ data, earth observations, modelling, and extreme statistics. In this way we aim to build capacity to ultimately improve resilience toward floods, protecting lives and infrastructure in the region.

  11. Evaluating uncertainties in multi-layer soil moisture estimation with support vector machines and ensemble Kalman filtering

    NASA Astrophysics Data System (ADS)

    Liu, Di; Mishra, Ashok K.; Yu, Zhongbo

    2016-07-01

    This paper examines the combination of support vector machines (SVM) and the dual ensemble Kalman filter (EnKF) technique to estimate root zone soil moisture at different soil layers up to 100 cm depth. Multiple experiments are conducted in a data rich environment to construct and validate the SVM model and to explore the effectiveness and robustness of the EnKF technique. It was observed that the performance of the SVM relies more on the initial length of the training set than on other factors (e.g., cost function, regularization parameter, and kernel parameters). The dual EnKF technique proved efficient at improving the SVM with observed data either at each time step or at flexible time steps. The EnKF technique can reach its maximum efficiency when the updating ensemble size approaches a certain threshold. It was observed that the SVM model performance for the multi-layer soil moisture estimation can be influenced by the rainfall magnitude (e.g., dry and wet spells).

  12. Alternative descriptions of wave and particle aspects of the harmonic oscillator

    NASA Technical Reports Server (NTRS)

    Schuch, Dieter

    1993-01-01

    The dynamical properties of the wave and particle aspects of the harmonic oscillator can be studied with the help of the time-dependent Schroedinger equation (SE). In particular, the time dependence of the maximum and width of Gaussian wave packet solutions makes it possible to show the evolution of, and the connections between, these two complementary aspects. The investigation of the relations between the equations describing wave and particle aspects leads to an alternative description of the considered systems. This can be achieved by means of a Newtonian equation for a complex variable in connection with a conservation law for a nonclassical angular-momentum-type quantity. With the help of this complex variable, it is also possible to develop a Hamiltonian formalism for the wave aspect contained in the SE, which allows one to describe the dynamics of the position and momentum uncertainties. In this case the Hamiltonian function is equivalent to the difference between the mean value of the Hamiltonian operator and the classical Hamiltonian function.

  13. Evaluation of pollutant loads from stormwater BMPs to receiving water using load frequency curves with uncertainty analysis.

    PubMed

    Park, Daeryong; Roesner, Larry A

    2012-12-15

    This study examined pollutant loads released to receiving water from a typical urban watershed in the Los Angeles (LA) Basin of California by applying a best management practice (BMP) performance model that includes uncertainty. This BMP performance model uses the k-C model and incorporates uncertainty analysis and the first-order second-moment (FOSM) method to assess the effectiveness of BMPs for removing stormwater pollutants. Uncertainties were considered for the influent event mean concentration (EMC) and the areal removal rate constant of the k-C model. The storage treatment overflow and runoff model (STORM) was used to simulate the flow volume from the watershed, the bypass flow volume and the flow volume that passes through the BMP. Detention basins and total suspended solids (TSS) were chosen as the representative stormwater BMP and pollutant, respectively. This paper applies load frequency curves (LFCs), which replace the exceedance percentage with an exceedance frequency as an alternative to load duration curves (LDCs), to evaluate the effectiveness of BMPs. An evaluation method based on uncertainty analysis is suggested because it applies a water quality standard exceedance based on frequency and magnitude. As a result, the incorporation of uncertainty in the estimates of pollutant loads can assist stormwater managers in determining the degree of total maximum daily load (TMDL) compliance that could be expected from a given BMP in a watershed. Copyright © 2012 Elsevier Ltd. All rights reserved.
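
    As a hedged sketch of the FOSM idea (not the study's actual model), the snippet below propagates the variances of an influent EMC and a removal rate constant through a simple first-order-decay effluent relation using finite-difference gradients; the k-C*-style relation and all parameter values are assumptions introduced only for illustration.

```python
# FOSM-style variance propagation: Var(out) ~ sum_i (d out / d x_i)^2 Var(x_i).
import numpy as np

def effluent_conc(c_in, k, q, c_star=5.0, depth=1.0):
    """Illustrative first-order relation: C_out = C* + (C_in - C*) exp(-k*depth/q)."""
    return c_star + (c_in - c_star) * np.exp(-k * depth / q)

means = {"c_in": 120.0, "k": 30.0}   # influent TSS EMC (mg/L), rate constant (m/yr)
stds = {"c_in": 40.0, "k": 10.0}
q = 50.0                              # hydraulic loading rate (m/yr), treated as known

mean_out = effluent_conc(means["c_in"], means["k"], q)
var_out = 0.0
for name in means:
    x = dict(means)
    h = 1e-4 * max(abs(means[name]), 1.0)
    x[name] += h
    grad = (effluent_conc(x["c_in"], x["k"], q) - mean_out) / h  # finite difference
    var_out += (grad * stds[name]) ** 2
print(f"mean effluent TSS ~ {mean_out:.1f} mg/L, std ~ {np.sqrt(var_out):.1f} mg/L")
```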

  14. Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Clifford W.; Martin, Curtis E.

    2015-08-01

    We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of FirstSolar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found the uncertainties in the models for POA irradiance and effective irradiance to be the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
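
    The following is a minimal, hedged sketch of the residual-sampling idea: uncertainty is propagated through a chain of sub-models by drawing one residual per sub-model per Monte Carlo realization. The toy sub-models and residual pools are assumptions for illustration and are not the models used in the report.

```python
# Propagating uncertainty through a model chain by resampling empirical residuals.
import numpy as np

rng = np.random.default_rng(0)

# Empirical residuals of each sub-model (here: synthetic placeholders)
resid_poa = rng.normal(0.0, 20.0, 500)    # W/m^2, plane-of-array model
resid_temp = rng.normal(0.0, 1.5, 500)    # deg C, cell-temperature model
resid_dc = rng.normal(0.0, 0.01, 500)     # fraction, DC power model

def pv_chain(ghi, t_amb, poa_err, temp_err, dc_err):
    poa = 1.1 * ghi + poa_err                         # toy transposition model
    t_cell = t_amb + 0.03 * poa + temp_err            # toy thermal model
    p_dc = 0.18 * poa * (1 - 0.004 * (t_cell - 25))   # toy DC model
    return p_dc * (1 + dc_err) * 0.96                 # toy inverter efficiency

samples = np.array([pv_chain(800.0, 25.0,
                             rng.choice(resid_poa),
                             rng.choice(resid_temp),
                             rng.choice(resid_dc)) for _ in range(2000)])
print(f"AC power: mean {samples.mean():.1f}, relative std {samples.std()/samples.mean():.1%}")
```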

  15. Multi-perspective analysis and spatiotemporal mapping of air pollution monitoring data.

    PubMed

    Kolovos, Alexander; Skupin, André; Jerrett, Michael; Christakos, George

    2010-09-01

    Space-time data analysis and assimilation techniques in atmospheric sciences typically consider input from monitoring measurements. The input is often processed in a manner that acknowledges characteristics of the measurements (e.g., underlying patterns, fluctuation features) under conditions of uncertainty; it also leads to the derivation of secondary information that serves study-oriented goals, and provides input to space-time prediction techniques. We present a novel approach that blends a rigorous space-time prediction model (Bayesian maximum entropy, BME) with a cognitively informed visualization of high-dimensional data (spatialization). The combined BME and spatialization approach (BME-S) is used to study monthly averaged NO2 and mean annual SO4 measurements in California over the 15-year period 1988-2002. Using the original scattered measurements of these two pollutants, BME generates spatiotemporal predictions on a regular grid across the state. Subsequently, the prediction network undergoes the spatialization transformation into a lower-dimensional geometric representation, aimed at revealing patterns and relationships that exist within the input data. The proposed BME-S provides a powerful spatiotemporal framework to study a variety of air pollution data sources.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Kham, E-mail: khamdiep@gmail.com; UT MD Anderson Cancer Center, School of Health Professions—Unit 2, Houston, TX; Cummings, David

    The purpose of this study was to evaluate the differences between volumetric modulated arc therapy (VMAT) and intensity-modulated radiation therapy (IMRT) in the treatment of nasal cavity carcinomas. The treatment of 10 patients, who had completed IMRT treatment for resected tumors of the nasal cavity, was replanned with the Philips Pinnacle³ Version 9 treatment-planning system. The IMRT plans used a 9-beam technique whereas the VMAT (known as SmartArc) plans used a 3-arc technique. Both types of plans were optimized using Philips Pinnacle³ Direct Machine Parameter Optimization algorithm. IMRT and VMAT plans' quality was compared by evaluating the maximum, minimum, and mean doses to the target volumes and organs at risk, monitor units (MUs), and the treatment delivery time. Our results indicate that VMAT is capable of greatly reducing treatment delivery time and MUs compared with IMRT. The reduction of treatment delivery time and MUs can decrease the effects of intrafractional uncertainties that can occur because of patient movement during treatment delivery. VMAT's plans further reduce doses to critical structures that are in close proximity to the target volume.

  17. Neural Imaging Using Single-Photon Avalanche Diodes

    PubMed Central

    Karami, Mohammad Azim; Ansarian, Misagh

    2017-01-01

    Introduction: This paper analyses the ability of single-photon avalanche diodes (SPADs) for neural imaging. The current trend in the production of SPADs moves toward the minimum dark count rate (DCR) and maximum photon detection probability (PDP). Moreover, the jitter response, which is the main measure of timing uncertainty, is also improving. Methods: The neural imaging process using SPADs can be performed by means of fluorescence lifetime imaging (FLIM), time-correlated single-photon counting (TCSPC), positron emission tomography (PET), and single-photon emission computed tomography (SPECT). Results: This trend will result in more precise neural imaging cameras. While achieving low-DCR SPADs is difficult in deep-submicron technologies because of the higher doping profiles used, higher PDPs are reported in the green and blue parts of the spectrum. Furthermore, the number of pixels integrated on the same chip is increasing with technology progress, which can result in higher imaging resolution. Conclusion: This study proposes that SPADs implemented in deep-submicron technologies be used in neural imaging cameras, owing to their small pixel size and higher timing accuracy. PMID:28446946

  18. Evolution of seismicity in relation to fluid injection in the North-Western part of The Geysers geothermal field

    NASA Astrophysics Data System (ADS)

    Leptokaropoulos, Konstantinos; Staszek, Monika; Lasocki, Stanisław; Martínez-Garzón, Patricia; Kwiatek, Grzegorz

    2018-02-01

    The Geysers geothermal field located in California, USA, is the largest geothermal site in the world, operating since the 1960s. We investigate and quantify the correlation between the temporal evolution of seismicity and variations in the injection data by examining time series with a set of statistical tools (a binomial test to identify significant rate changes, cross correlation between seismic and injection data, and b-value variation analysis). To do so, we utilize seismicity and operational data associated with two injection wells (Prati-9 and Prati-29) which cover a time period of approximately 7 yr (from November 2007 to August 2014). The seismicity is found to be significantly positively correlated with the injection rate. The maximum correlation occurs with a seismic response delay of ˜2 weeks following injection operations. These results remain very stable even after accounting for hypocentral uncertainties by applying a vertical shift of the event foci of up to 300 m. Our analysis also indicates time variations of the b-value, which exhibits a significant positive correlation with injection rates.
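
    A hedged sketch of the lag analysis follows: it computes the Pearson correlation between an injection-rate series and a seismicity-rate series over a range of time lags and reports the lag of maximum correlation. The weekly synthetic series and the two-week lag built into them are purely illustrative.

```python
# Lagged cross-correlation between injection rate and seismicity rate.
import numpy as np

rng = np.random.default_rng(0)
weeks = 360
injection = 50 + 10 * np.sin(2 * np.pi * np.arange(weeks) / 52) + rng.normal(0, 2, weeks)
true_lag = 2  # weeks (built into the synthetic data)
seismicity = 0.3 * np.roll(injection, true_lag) + rng.normal(0, 2, weeks)

def lagged_corr(x, y, max_lag):
    """Pearson correlation of y against x delayed by 0..max_lag samples."""
    out = []
    for lag in range(max_lag + 1):
        xs, ys = x[: len(x) - lag], y[lag:]
        out.append(np.corrcoef(xs, ys)[0, 1])
    return np.array(out)

corr = lagged_corr(injection, seismicity, max_lag=10)
best = int(np.argmax(corr))
print(f"maximum correlation {corr[best]:.2f} at a delay of {best} week(s)")
```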

  19. Measurement of optical to electrical and electrical to optical delays with ps-level uncertainty.

    PubMed

    Peek, H Z; Pinkert, T J; Jansweijer, P P M; Koelemeij, J C J

    2018-05-28

    We present a new measurement principle to determine the absolute time delay of a waveform from an optical reference plane to an electrical reference plane and vice versa. We demonstrate a method based on this principle with 2 ps uncertainty. This method can be used to perform accurate time delay determinations of optical transceivers used in fiber-optic time-dissemination equipment. As a result, the time scales in the optical and electrical domains can be related to each other with the same uncertainty. We expect this method will be a new breakthrough in high-accuracy time transfer and absolute calibration of time-transfer equipment.

  20. Quantifying the uncertainty introduced by discretization and time-averaging in two-fluid model predictions

    DOE PAGES

    Syamlal, Madhava; Celik, Ismail B.; Benyahia, Sofiane

    2017-07-12

    The two-fluid model (TFM) has become a tool for the design and troubleshooting of industrial fluidized bed reactors. To use TFM for scale up with confidence, the uncertainty in its predictions must be quantified. Here, we study two sources of uncertainty: discretization and time-averaging. First, we show that successive grid refinement may not yield grid-independent transient quantities, including cross-section–averaged quantities. Successive grid refinement would yield grid-independent time-averaged quantities on sufficiently fine grids. A Richardson extrapolation can then be used to estimate the discretization error, and the grid convergence index gives an estimate of the uncertainty. Richardson extrapolation may not work for industrial-scale simulations that use coarse grids. We present an alternative method for coarse grids and assess its ability to estimate the discretization error. Second, we assess two methods (autocorrelation and binning) and find that the autocorrelation method is more reliable for estimating the uncertainty introduced by time-averaging TFM data.
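
    For concreteness, here is a hedged sketch of Richardson extrapolation and the grid convergence index (GCI) applied to a time-averaged quantity obtained on three grids with a constant refinement ratio; the input values are invented, and the formulation follows the standard Roache-style procedure rather than the alternative coarse-grid method introduced in the paper.

```python
# Richardson extrapolation and GCI from three systematically refined grids.
import numpy as np

def richardson_gci(f_fine, f_med, f_coarse, r, safety=1.25):
    """Observed order p, extrapolated value, and GCI on the fine grid."""
    p = np.log(abs(f_coarse - f_med) / abs(f_med - f_fine)) / np.log(r)
    f_exact = f_fine + (f_fine - f_med) / (r ** p - 1.0)   # Richardson estimate
    gci = safety * abs((f_fine - f_med) / f_fine) / (r ** p - 1.0)
    return p, f_exact, gci

# Example: a time-averaged solids holdup from coarse, medium and fine grids (toy values)
p, f_ex, gci = richardson_gci(f_fine=0.412, f_med=0.398, f_coarse=0.370, r=2.0)
print(f"observed order ~ {p:.2f}, extrapolated value ~ {f_ex:.3f}, "
      f"GCI ~ {gci:.1%} of the fine-grid value")
```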

  1. PRECISE TULLY-FISHER RELATIONS WITHOUT GALAXY INCLINATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Obreschkow, D.; Meyer, M.

    2013-11-10

    Power-law relations between tracers of baryonic mass and rotational velocities of disk galaxies, so-called Tully-Fisher relations (TFRs), offer a wealth of applications in galaxy evolution and cosmology. However, measurements of rotational velocities require galaxy inclinations, which are difficult to measure, thus limiting the range of TFR studies. This work introduces a maximum likelihood estimation (MLE) method for recovering the TFR in galaxy samples with limited or no information on inclinations. The robustness and accuracy of this method is demonstrated using virtual and real galaxy samples. Intriguingly, the MLE reliably recovers the TFR of all test samples, even without using any inclination measurements—that is, assuming a random sin i-distribution for galaxy inclinations. Explicitly, this 'inclination-free MLE' recovers the three TFR parameters (zero-point, slope, scatter) with statistical errors only about 1.5 times larger than the best estimates based on perfectly known galaxy inclinations with zero uncertainty. Thus, given realistic uncertainties, the inclination-free MLE is highly competitive. If inclination measurements have mean errors larger than 10°, it is better not to use any inclinations than to consider the inclination measurements to be exact. The inclination-free MLE opens interesting perspectives for future H I surveys by the Square Kilometer Array and its pathfinders.
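
    The sketch below illustrates the core idea of an "inclination-free" likelihood: the measured line width w = V sin(i) depends on an unknown inclination i that is marginalized over assuming randomly oriented disks, p(i) = sin(i). The generative model, parameter values and synthetic data are assumptions for illustration, not the paper's actual estimator.

```python
# Toy inclination-free MLE: marginalize the per-galaxy likelihood over sin(i).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic TFR sample: log10(M) = a + b*(log10(V) - 2.2) + intrinsic scatter
a_true, b_true, sig_true, n = 10.0, 3.5, 0.10, 200
logV = rng.normal(2.2, 0.15, n)
logM = a_true + b_true * (logV - 2.2) + rng.normal(0, sig_true, n)
inc = np.arccos(rng.uniform(0, 1, n))          # random orientations -> p(i) = sin(i)
logW = logV + np.log10(np.sin(inc))            # observed widths, inclinations unknown

i_grid = np.linspace(1e-3, np.pi / 2, 200)     # grid for the marginalization
w_i = np.sin(i_grid)                           # prior weight
di = i_grid[1] - i_grid[0]

def neg_loglike(theta):
    a, b, log_sig = theta
    sig = np.exp(log_sig)
    logV_trial = logW[:, None] - np.log10(np.sin(i_grid))[None, :]
    mu = a + b * (logV_trial - 2.2)
    dens = np.exp(-0.5 * ((logM[:, None] - mu) / sig) ** 2) / sig
    like = (dens * w_i[None, :]).sum(axis=1) * di      # marginal likelihood per galaxy
    return -np.sum(np.log(like + 1e-300))

fit = minimize(neg_loglike, x0=[9.5, 3.0, np.log(0.2)], method="Nelder-Mead")
a_hat, b_hat, sig_hat = fit.x[0], fit.x[1], np.exp(fit.x[2])
print(f"zero-point {a_hat:.2f}, slope {b_hat:.2f}, scatter {sig_hat:.3f}")
```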

  2. Deformation Monitoring of the Submillimetric UPV Calibration Baseline

    NASA Astrophysics Data System (ADS)

    García-Asenjo, Luis; Baselga, Sergio; Garrigues, Pascual

    2017-06-01

    A 330 m calibration baseline was established at the Universitat Politècnica de València (UPV) in 2007. Absolute scale was subsequently transferred in 2012 from the Nummela Standard Baseline in Finland, and distances between pillars were determined with uncertainties ranging from 0.1 mm to 0.3 mm. In order to assess the long-term stability of the baseline, three field campaigns were carried out from 2013 to 2015 in a co-operative effort with the Universidad Complutense de Madrid (UCM), which provided the only Mekometer ME5000 distance meter available in Spain. Since the application of the full ISO 17123-4 procedure did not suffice to come to a definite conclusion about possible displacements of the pillars, we opted for the traditional geodetic network approach. This approach had to be adapted to the case at hand in order to deal with problems such as the geometric weakness inherent to calibration baselines and the scale uncertainty derived from both the use of different instruments and the high correlation between the meteorological correction and the scale determination. Additionally, the so-called maximum number of stable points method was also tested. This contribution describes the process followed to assess the stability of the UPV submillimetric calibration baseline over the period from 2012 to 2015.

  3. Minimum and Maximum Times Required to Obtain Representative Suspended Sediment Samples

    NASA Astrophysics Data System (ADS)

    Gitto, A.; Venditti, J. G.; Kostaschuk, R.; Church, M. A.

    2014-12-01

    Bottle sampling is a convenient method of obtaining suspended sediment measurements for the development of sediment budgets. While these methods are generally considered to be reliable, recent analysis of depth-integrated sampling has identified considerable uncertainty in measurements of grain-size concentration between grain-size classes of multiple samples. Point-integrated bottle sampling is assumed to represent the mean concentration of suspended sediment, but the uncertainty surrounding this method is not well understood. Here we examine at-a-point variability in velocity, suspended sediment concentration, grain-size distribution, and grain-size moments to determine if traditional point-integrated methods provide a representative sample of suspended sediment. We present continuous hour-long observations of suspended sediment from the sand-bedded portion of the Fraser River at Mission, British Columbia, Canada, using a LISST laser-diffraction instrument. Spectral analysis suggests that there is no statistically significant peak in energy density, indicating the absence of periodic fluctuations in flow and suspended sediment. However, a slope break in the spectra at 0.003 Hz corresponds to a period of 5.5 minutes. This coincides with the threshold between large-scale turbulent eddies that scale with channel width/mean velocity and hydraulic phenomena related to channel dynamics. This suggests that suspended sediment samples taken over a period longer than 5.5 minutes incorporate variability that is larger in scale than turbulent phenomena in this channel. Examination of 5.5-minute periods of our time series indicates that ~20% of the time a stable mean value of volumetric concentration is reached within 30 seconds, a typical bottle sample duration. In ~12% of measurements a stable mean was not reached over the 5.5-minute sample duration. The remaining measurements achieved a stable mean, distributed evenly over the intervening interval.

  4. Control of thermal therapies with moving power deposition field.

    PubMed

    Arora, Dhiraj; Minor, Mark A; Skliar, Mikhail; Roemer, Robert B

    2006-03-07

    A thermal therapy feedback control approach to control thermal dose using a moving power deposition field is developed and evaluated using simulations. A normal tissue safety objective is incorporated in the controller design by imposing constraints on temperature elevations at selected normal tissue locations. The proposed control technique consists of two stages. The first stage uses a model-based sliding mode controller that dynamically generates an 'ideal' power deposition profile which is generally unrealizable with available heating modalities. Subsequently, in order to approximately realize this spatially distributed idealized power deposition, a constrained quadratic optimizer is implemented to compute intensities and dwell times for a set of pre-selected power deposition fields created by a scanned focused transducer. The dwell times for various power deposition profiles are dynamically generated online as opposed to the commonly employed a priori-decided heating strategies. Dynamic intensity and trajectory generation safeguards the treatment outcome against modelling uncertainties and unknown disturbances. The controller is designed to enforce simultaneous activation of multiple normal tissue temperature constraints by rapidly switching between various power deposition profiles. The hypothesis behind the controller design is that the simultaneous activation of multiple constraints substantially reduces treatment time without compromising normal tissue safety. The controller performance and robustness with respect to parameter uncertainties are evaluated using simulations. The results demonstrate that the proposed controller can successfully deliver the desired thermal dose to the target while maintaining the temperatures at the user-specified normal tissue locations at or below the maximum allowable values. Although demonstrated for the case of a scanned focused ultrasound transducer, the developed approach can be extended to other heating modalities with moving deposition fields, such as external and interstitial ultrasound phased arrays, multiple radiofrequency needle applicators and microwave antennae.

  5. Mixed oxidizer hybrid propulsion system optimization under uncertainty using applied response surface methodology and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Whitehead, James Joshua

    The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within design space. Illustrative case scenarios were developed and assessed using this analytic approach including fully and partially constrained operational condition sets over all of design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. Ternary diagrams, including contour and surface plots, were developed and utilized to aid in visualization. The concept of Expanded-Durov diagrams was also adopted and adapted to this study to aid in visualization of uncertainty bounds. Regions of maximum regression rate and associated uncertainties were determined for each set of case scenarios. Application of response surface methodology coupled with probabilistic-based MCS allowed for flexible and comprehensive interrogation of mixture and operating design space during optimization cases. Analyses were also conducted to assess sensitivity of uncertainty to variations in key elemental uncertainty estimates. The methodology developed during this research provides an innovative optimization tool for future propulsion design efforts.
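
    The following is a hedged sketch of the general workflow described above: fit a quadratic response surface (with a two-factor interaction) to design-of-experiments data, then push input uncertainty through it by Monte Carlo sampling to obtain a dispersed response. The regression-rate surrogate and all numbers are illustrative assumptions, not the dissertation's data.

```python
# Quadratic response surface + Monte Carlo propagation of input uncertainty.
import numpy as np

rng = np.random.default_rng(0)

# Toy DOE data: oxidizer fraction x1 and mass flux x2 -> fuel regression rate
x1 = rng.uniform(0.2, 0.8, 40)
x2 = rng.uniform(50, 200, 40)
r_obs = (0.5 + 1.2 * x1 + 0.004 * x2 + 0.002 * x1 * x2 - 0.8 * x1 ** 2
         + rng.normal(0, 0.02, x1.size))

def design(x1, x2):
    """Quadratic model with single terms and one two-factor interaction."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

beta, *_ = np.linalg.lstsq(design(x1, x2), r_obs, rcond=None)

# Monte Carlo: propagate uncertainty in mixture and operating condition
n_mc = 5000
x1_mc = rng.normal(0.6, 0.03, n_mc)            # mixture uncertainty
x2_mc = rng.normal(120, 10, n_mc)              # operating-condition uncertainty
r_mc = design(x1_mc, x2_mc) @ beta + rng.normal(0, 0.02, n_mc)  # surrogate + model error
print(f"regression rate: mean {r_mc.mean():.3f}, 5-95% range "
      f"{np.percentile(r_mc, 5):.3f}-{np.percentile(r_mc, 95):.3f}")
```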

  6. The impact of (n, γ) reaction rate uncertainties of unstable isotopes near N = 50 on the i-process nucleosynthesis in He-shell flash white dwarfs

    NASA Astrophysics Data System (ADS)

    Denissenkov, Pavel; Perdikakis, Georgios; Herwig, Falk; Schatz, Hendrik; Ritter, Christian; Pignatari, Marco; Jones, Samuel; Nikas, Stylianos; Spyrou, Artemis

    2018-05-01

    The first-peak s-process elements Rb, Sr, Y and Zr in the post-AGB star Sakurai's object (V4334 Sagittarii) have been proposed to be the result of i-process nucleosynthesis in a post-AGB very-late thermal pulse event. We estimate the nuclear physics uncertainties in the i-process model predictions to determine whether the remaining discrepancies with observations are significant and point to potential issues with the underlying astrophysical model. We find that the dominant source of the nuclear physics uncertainties is the prediction of neutron capture rates on unstable neutron-rich nuclei, which can have uncertainties of more than a factor of 20 in the band of the i-process. We use a Monte Carlo variation of 52 neutron capture rates and a 1D multi-zone post-processing model for the i-process in Sakurai's object to determine the cumulative effect of these uncertainties on the final elemental abundance predictions. We find that the nuclear physics uncertainties are large and comparable to observational errors. Within these uncertainties the model predictions are consistent with observations. A correlation analysis of the results of our MC simulations reveals that the strongest impact on the predicted abundances of Rb, Sr, Y and Zr is made by the uncertainties in the (n, γ) reaction rates of 85Br, 86Br, 87Kr, 88Kr, 89Kr, 89Rb, 89Sr, and 92Sr. This conclusion is supported by a series of multi-zone simulations in which we increased and decreased one or two reaction rates per run to their maximum and minimum limits. We also show that simple and fast one-zone simulations should not be used instead of more realistic multi-zone stellar simulations for nuclear sensitivity and uncertainty studies of convective–reactive processes. Our findings apply more generally to any i-process site with similar neutron exposure, such as rapidly accreting white dwarfs with near-solar metallicities.

  7. First Reprocessing of Southern Hemisphere ADditional OZonesondes Profile Records: 3. Uncertainty in Ozone Profile and Total Column

    NASA Astrophysics Data System (ADS)

    Witte, Jacquelyn C.; Thompson, Anne M.; Smit, Herman G. J.; Vömel, Holger; Posny, Françoise; Stübi, Rene

    2018-03-01

    Reprocessed ozonesonde data from eight SHADOZ (Southern Hemisphere ADditional OZonesondes) sites have been used to derive the first analysis of uncertainty estimates for both profile and total column ozone (TCO). The ozone uncertainty is a composite of the uncertainties of the individual terms in the ozone partial pressure (PO3) equation, those being the ozone sensor current, background current, internal pump temperature, pump efficiency factors, conversion efficiency, and flow rate. Overall, PO3 uncertainties (ΔPO3) are within 15% and peak around the tropopause (15 ± 3 km), where ozone is a minimum and ΔPO3 approaches the measured signal. The uncertainty in the background and sensor currents dominates the overall ΔPO3 in the troposphere including the tropopause region, while the uncertainties in the conversion efficiency and flow rate dominate in the stratosphere. Seasonally, ΔPO3 is generally a maximum in March-May, with the exception of SHADOZ sites in Asia, for which the highest ΔPO3 occurs in September-February. As a first approach, we calculate the sonde TCO uncertainty (ΔTCO) by integrating the profile ΔPO3 and adding the ozone residual uncertainty, derived from the McPeters and Labow (2012, doi:10.1029/2011JD017006) 1σ ozone mixing ratios. Overall, ΔTCO is within ±15 Dobson units (DU), representing 5-6% of the TCO. Total Ozone Mapping Spectrometer and Ozone Monitoring Instrument (TOMS and OMI) satellite overpasses are generally within the sonde ΔTCO. However, there is a discontinuity between TOMS v8.6 (1998 to September 2004) and OMI (October 2004-2016) TCO on the order of 10 DU that accounts for the significant 16 DU overall difference observed between sonde and TOMS. By comparison, the sonde-OMI absolute difference for the eight stations is only 4 DU.
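
    A minimal, hedged sketch of the composite-uncertainty idea is shown below: independent relative uncertainty terms entering the partial-pressure equation are combined in quadrature. The individual term values are placeholders, not the SHADOZ estimates.

```python
# Quadrature combination of independent relative uncertainty terms.
import numpy as np

terms = {                          # illustrative 1-sigma relative uncertainties
    "sensor current": 0.03,
    "background current": 0.05,
    "pump temperature": 0.002,
    "pump efficiency": 0.02,
    "conversion efficiency": 0.02,
    "flow rate": 0.02,
}

rel_total = np.sqrt(sum(u ** 2 for u in terms.values()))
print(f"combined relative uncertainty in PO3 ~ {rel_total:.1%}")
# For, say, a 10 mPa ozone partial pressure this corresponds to roughly
# 10 mPa * rel_total in absolute terms.
```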

  8. Cost Recommendation under Uncertainty in IQWiG's Efficiency Frontier Framework.

    PubMed

    Corro Ramos, Isaac; Lhachimi, Stefan K; Gerber-Grote, Andreas; Al, Maiwenn J

    2017-02-01

    The National Institute for Quality and Efficiency in Health Care (IQWiG) employs an efficiency frontier (EF) framework to facilitate setting maximum reimbursable prices for new interventions. Probabilistic sensitivity analysis (PSA) is used when yes/no reimbursement decisions are sought based on a fixed threshold. In the IQWiG framework, an additional layer of complexity arises as the EF itself may vary its shape in each PSA iteration, and thus the willingness-to-pay, indicated by the EF segments, may vary. The aim was to explore the practical problems arising when, within the EF approach, maximum reimbursable prices for new interventions are sought through PSA. When the EF is varied in a PSA, cost recommendations for new interventions may be determined by the mean or the median of the distances between each intervention's point estimate and each EF. Implications of using these metrics were explored in a simulation study based on the model used by IQWiG to assess the cost-effectiveness of 4 antidepressants. Depending on the metric used, cost recommendations can be contradictory. Recommendations based on the mean can also be inconsistent. Results (median) suggested that the costs of duloxetine, venlafaxine, mirtazapine, and bupropion should be decreased by €131, €29, €12, and €99, respectively. These recommendations were implemented and the analysis repeated. The new results suggested keeping the costs as they were. The percentage of acceptable PSA outcomes increased by 41% on average, and the uncertainty associated with the net health benefit was significantly reduced. The median of the distances between every intervention outcome and every EF is a good proxy for the cost recommendation that would be given should the EF be fixed. Adjusting costs according to the median increased the probability of acceptance and reduced the uncertainty around the net health benefit distribution, resulting in reduced uncertainty for decision makers.

  9. Analyzing ROC curves using the effective set-size model

    NASA Astrophysics Data System (ADS)

    Samuelson, Frank W.; Abbey, Craig K.; He, Xin

    2018-03-01

    The Effective Set-Size model has been used to describe uncertainty in various signal detection experiments. The model regards images as if they were an effective number (M*) of searchable locations, where the observer treats each location as a location-known-exactly detection task with signals having average detectability d'. The model assumes a rational observer behaves as if he searches an effective number of independent locations and follows signal detection theory at each location. Thus the location-known-exactly detectability (d') and the effective number of independent locations M* fully characterize search performance. In this model the image rating in a single-response task is assumed to be the maximum response that the observer would assign to these many locations. The model has been used by a number of other researchers, and is well corroborated. We examine this model as a way of differentiating imaging tasks that radiologists perform. Tasks involving more searching or location uncertainty may have higher estimated M* values. In this work we applied the Effective Set-Size model to a number of medical imaging data sets. The data sets include radiologists reading screening and diagnostic mammography with and without computer-aided diagnosis (CAD), and breast tomosynthesis. We developed an algorithm to fit the model parameters using two-sample maximum-likelihood ordinal regression, similar to the classic bi-normal model. The resulting model ROC curves are rational and fit the observed data well. We find that the distributions of M* and d' differ significantly among these data sets, and differ between pairs of imaging systems within studies. For example, on average tomosynthesis increased readers' d' values, while CAD reduced the M* parameters. We demonstrate that the model parameters M* and d' are correlated. We conclude that the Effective Set-Size model may be a useful way of differentiating location uncertainty from the diagnostic uncertainty in medical imaging tasks.
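
    The following hedged sketch illustrates the max-of-M* construction that underlies the Effective Set-Size model as summarized above: with M* independent locations and one location shifted by d' when a signal is present, the image rating is the maximum response. Parameter values are illustrative, and the paper's actual fitting procedure (two-sample maximum-likelihood ordinal regression) is not reproduced here.

```python
# ROC curves implied by a maximum-of-M* (effective set size) observer model.
import numpy as np
from scipy.stats import norm

def roc_effective_set_size(d_prime, m_star, thresholds):
    """False-positive and true-positive fractions vs. rating threshold t."""
    t = np.asarray(thresholds, dtype=float)
    fpf = 1.0 - norm.cdf(t) ** m_star                            # all M* locations are noise
    tpf = 1.0 - norm.cdf(t) ** (m_star - 1) * norm.cdf(t - d_prime)  # one location has the signal
    return fpf, tpf

t = np.linspace(-4, 6, 400)
for m_star in (1, 4, 16):
    fpf, tpf = roc_effective_set_size(d_prime=2.0, m_star=m_star, thresholds=t)
    order = np.argsort(fpf)                                      # integrate with FPF increasing
    auc = np.sum(np.diff(fpf[order]) * (tpf[order][1:] + tpf[order][:-1]) / 2.0)
    print(f"M* = {m_star:2d}: AUC ~ {auc:.3f}")
```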

  10. Comparison of Seasonal Terrestrial Water Storage Variations from GRACE with Groundwater-level Measurements from the High Plains Aquifer (USA)

    NASA Technical Reports Server (NTRS)

    Strassberg, Gil; Scanlon, Bridget R.; Rodell, Matthew

    2007-01-01

    This study presents the first direct comparison of variations in seasonal GWS derived from GRACE TWS and simulated SM with GW-level measurements in a semiarid region. Results showed that variations in GWS and SM are the main sources controlling TWS changes over the High Plains, with negligible storage changes from surface water, snow, and biomass. Seasonal variations in GRACE TWS compare favorably with combined GWS from GW-level measurements (total 2,700 wells, average 1,050 GW-level measurements per season) and simulated SM from the Noah land surface model (R = 0.82, RMSD = 33 mm). Estimated uncertainty in seasonal GRACE-derived TWS is 8 mm, and estimated uncertainty in TWS changes is 11 mm. Estimated uncertainty in SM changes is 11 mm and the combined uncertainty for TWS-SM changes is 15 mm. Seasonal TWS changes are detectable in 7 out of 9 monitored periods and maximum changes within a year (e.g. between winter and summer) are detectable in all 5 monitored periods. GRACE-derived GWS calculated from TWS-SM generally agrees with estimates based on GW-level measurements (R = 0.58, RMSD = 33 mm). Seasonal TWS-SM changes are detectable in 5 out of the 9 monitored periods and maximum changes are detectable in all 5 monitored periods. The good correspondence between GRACE data and GW-level measurements from the intensively monitored High Plains aquifer validates the potential for using GRACE TWS and simulated SM to monitor GWS changes and aquifer depletion in semiarid regions subjected to intensive irrigation pumpage. This method can be used to monitor regions where large-scale aquifer depletion is ongoing and in situ measurements are limited, such as the North China Plain or western India. This potential should be enhanced by future advances in GRACE processing, which will improve the spatial and temporal resolution of TWS changes and will further increase the applicability of GRACE data for monitoring GWS.

  11. Cost-effective water quality assessment through the integration of monitoring data and modeling results

    NASA Astrophysics Data System (ADS)

    Lobuglio, Joseph N.; Characklis, Gregory W.; Serre, Marc L.

    2007-03-01

    Sparse monitoring data and error inherent in water quality models make the identification of waters not meeting regulatory standards uncertain. Additional monitoring can be implemented to reduce this uncertainty, but it is often expensive. These costs are currently a major concern, since developing total maximum daily loads, as mandated by the Clean Water Act, will require assessing tens of thousands of water bodies across the United States. This work uses the Bayesian maximum entropy (BME) method of modern geostatistics to integrate water quality monitoring data together with model predictions to provide improved estimates of water quality in a cost-effective manner. This information includes estimates of uncertainty and can be used to aid probabilistic-based decisions concerning the status of a water (i.e., impaired or not impaired) and the level of monitoring needed to characterize the water for regulatory purposes. This approach is applied to the Catawba River reservoir system in western North Carolina as a means of estimating seasonal chlorophyll a concentration. Mean concentration and confidence intervals for chlorophyll a are estimated for 66 reservoir segments over an 11-year period (726 values) based on 219 measured seasonal averages and 54 model predictions. Although the model predictions had a high degree of uncertainty, integration of modeling results via BME methods reduced the uncertainty associated with chlorophyll estimates compared with estimates made solely with information from monitoring efforts. Probabilistic predictions of future chlorophyll levels on one reservoir are used to illustrate the cost savings that can be achieved by less extensive and rigorous monitoring methods within the BME framework. While BME methods have been applied in several environmental contexts, employing these methods as a means of integrating monitoring and modeling results, as well as application of this approach to the assessment of surface water monitoring networks, represent unexplored areas of research.

  12. Is my bottom-up uncertainty estimation on metal measurement adequate?

    NASA Astrophysics Data System (ADS)

    Marques, J. R.; Faustino, M. G.; Monteiro, L. R.; Ulrich, J. C.; Pires, M. A. F.; Cotrim, M. E. B.

    2018-03-01

    Is the measurement uncertainty estimated under the GUM recommendations for metal measurements adequate? How can one evaluate whether the measurement uncertainty really covers all of the uncertainty associated with the analytical procedure? Considering that many laboratories frequently underestimate, or less frequently overestimate, the uncertainties of their results, this paper presents an evaluation of the uncertainties estimated according to the GUM approach for seven metals measured by two ICP-OES procedures. The Horwitz function and proficiency-test scaled standard uncertainties were used in this evaluation. Our data show that the expanded uncertainties of most elements were underestimated by a factor of two to four. Possible causes and corrections are discussed herein.
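
    As a hedged aside, the Horwitz function used as a benchmark can be written as a predicted relative standard deviation that depends only on the mass fraction; the sketch below evaluates it at a few illustrative concentrations (the comparison ratio actually used in the paper is not reproduced).

```python
# Horwitz function: predicted reproducibility RSD (%) as a function of mass fraction c.
import math

def horwitz_prsd_percent(c):
    """Horwitz predicted RSD (%) = 2^(1 - 0.5*log10(c)), c as a dimensionless mass fraction."""
    return 2.0 ** (1.0 - 0.5 * math.log10(c))

for c in (1e-6, 1e-7, 1e-8):   # 1 mg/kg, 0.1 mg/kg, 0.01 mg/kg
    print(f"c = {c:.0e}: Horwitz RSD ~ {horwitz_prsd_percent(c):.1f} %")
# A laboratory's expanded uncertainty can then be compared against this
# benchmark (e.g. via a HorRat-type ratio) to flag under- or overestimation.
```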

  13. Mapping (dis)agreement in hydrologic projections

    NASA Astrophysics Data System (ADS)

    Melsen, Lieke A.; Addor, Nans; Mizukami, Naoki; Newman, Andrew J.; Torfs, Paul J. J. F.; Clark, Martyn P.; Uijlenhoet, Remko; Teuling, Adriaan J.

    2018-03-01

    Hydrologic projections are of vital socio-economic importance. However, they are also prone to uncertainty. In order to establish a meaningful range of storylines to support water managers in decision making, we need to reveal the relevant sources of uncertainty. Here, we systematically and extensively investigate uncertainty in hydrologic projections for 605 basins throughout the contiguous US. We show that in the majority of the basins, the sign of change in average annual runoff and discharge timing for the period 2070-2100 compared to 1985-2008 differs among combinations of climate models, hydrologic models, and parameters. Mapping the results revealed that different sources of uncertainty dominate in different regions. Hydrologic model induced uncertainty in the sign of change in mean runoff was related to snow processes and aridity, whereas uncertainty in both mean runoff and discharge timing induced by the climate models was related to disagreement among the models regarding the change in precipitation. Overall, disagreement on the sign of change was more widespread for the mean runoff than for the discharge timing. The results demonstrate the need to define a wide range of quantitative hydrologic storylines, including parameter, hydrologic model, and climate model forcing uncertainty, to support water resource planning.

  14. Preliminary analysis of hot spot factors in an advanced reactor for space electric power systems

    NASA Technical Reports Server (NTRS)

    Lustig, P. H.; Holms, A. G.; Davison, H. W.

    1973-01-01

    The maximum fuel pin temperature for nominal operation in an advanced power reactor is 1370 K. Because of possible nitrogen embrittlement of the clad, the fuel temperature was limited to 1622 K. Assuming simultaneous occurrence of the most adverse conditions, a deterministic analysis gave a maximum fuel temperature of 1610 K. A statistical analysis, using a synthesized estimate of the standard deviation for the highest fuel pin temperature, showed that the probability of that pin exceeding the temperature limit is 0.015 according to the distribution-free Chebyshev inequality and virtually nil assuming a normal distribution. The latter assumption gives a 1463 K maximum temperature at 3 standard deviations, the usually assumed cutoff. Further, the distribution and standard deviation of the fuel-clad gap are the most significant contributors to the uncertainty in the fuel temperature.
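
    A hedged back-of-envelope check of the quoted probabilities, using only the temperatures given in the abstract (nominal 1370 K, 3-sigma value 1463 K, limit 1622 K), is shown below; treat it as illustrative arithmetic, not a reconstruction of the original hot-spot analysis.

```python
# Chebyshev bound vs. normal tail for exceeding the 1622 K fuel temperature limit.
from math import erf, sqrt

nominal, three_sigma_temp, limit = 1370.0, 1463.0, 1622.0
sigma = (three_sigma_temp - nominal) / 3.0          # ~31 K
k = (limit - nominal) / sigma                       # ~8.1 standard deviations

p_chebyshev = 1.0 / k ** 2                          # distribution-free bound, ~0.015
p_normal = 0.5 * (1.0 - erf(k / sqrt(2.0)))         # upper tail of a normal distribution
print(f"k ~ {k:.1f} sigma, Chebyshev bound ~ {p_chebyshev:.3f}, normal tail ~ {p_normal:.1e}")
```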

  15. A matched-peak inversion approach for ocean acoustic travel-time tomography

    PubMed

    Skarsoulis

    2000-03-01

    A new approach for the inversion of travel-time data is proposed, based on the matching between model arrivals and observed peaks. Using the linearized model relations between sound-speed and arrival-time perturbations about a set of background states, arrival times and associated errors are calculated on a fine grid of model states discretizing the sound-speed parameter space. Each model state can explain (identify) a number of observed peaks in a particular reception lying within the uncertainty intervals of the corresponding predicted arrival times. The model states that explain the maximum number of observed peaks are considered the more likely parametric descriptions of the reception; these model states can be described in terms of mean values and variances, providing a statistical answer (matched-peak solution) to the inversion problem. A basic feature of the matched-peak inversion approach is that each reception can be treated independently, i.e., no constraints are posed from previous-reception identification or inversion results. Accordingly, there is no need for initialization of the inversion procedure and, furthermore, discontinuous travel-time data can be treated. The matched-peak inversion method is demonstrated by application to 9-month-long travel-time data from the Thetis-2 tomography experiment in the western Mediterranean Sea.
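
    A hedged sketch of the matched-peak idea follows: for each candidate model state, count how many observed peak times fall inside the uncertainty windows of the arrivals predicted by that state; the states explaining the most peaks form the matched-peak solution. The predicted arrivals, windows and observed peaks below are synthetic placeholders.

```python
# Counting how many observed peaks each candidate model state can explain.
import numpy as np

rng = np.random.default_rng(0)

n_models, n_arrivals = 200, 12
# Predicted arrival times (s) for each candidate model state on a fine parameter grid
predicted = 60.0 + np.sort(rng.normal(0.0, 0.5, (n_models, n_arrivals)), axis=1)
windows = np.full((n_models, n_arrivals), 0.02)     # +/- arrival-time uncertainty (s)

# Observed peaks: one "true" state's arrivals, slightly perturbed, with some missing
true_idx = 37
observed = predicted[true_idx, :9] + rng.normal(0.0, 0.01, 9)

def peaks_explained(pred, win, obs):
    """Number of observed peaks lying within any predicted arrival window."""
    inside = np.abs(obs[None, :] - pred[:, None]) <= win[:, None]
    return inside.any(axis=0).sum()

counts = np.array([peaks_explained(predicted[m], windows[m], observed)
                   for m in range(n_models)])
best = np.flatnonzero(counts == counts.max())
print(f"maximum peaks explained: {counts.max()} by {best.size} model state(s)")
# Mean values and variances over `best` in parameter space would give the matched-peak solution.
```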

  16. Ramp time synchronization. [for NASA Deep Space Network

    NASA Technical Reports Server (NTRS)

    Hietzke, W.

    1979-01-01

    A new method of intercontinental clock synchronization has been developed and proposed for possible use by NASA's Deep Space Network (DSN), using a two-way/three-way radio link with a spacecraft. Analysis of preliminary data indicates that the real-time method has an uncertainty of 0.6 microsec, and it is very likely that further work will decrease the uncertainty. Also, the method is compatible with a variety of nonreal-time analysis techniques, which may reduce the uncertainty down to the tens of nanosecond range.

  17. Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling

    NASA Astrophysics Data System (ADS)

    Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.

    2017-12-01

    Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. We also seek to identify data types that best help reduce this uncertainty. For this investigation, we conduct a modelling study of the Steinlach River meander, in Southwest Germany. The Steinlach River meander is an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as 'virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (Figure). Then, we conduct Monte-Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that: (1) uncertainty in HETT is relatively small for early times and increases with transit time; (2) uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution; (3) introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias; and (4) hydraulic head observations alone cannot constrain the uncertainty of HETT, whereas an estimate of hyporheic exchange flux proves more effective at reducing this uncertainty. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model ('virtual reality') is then developed based on that conceptual model. This complex model then serves as the basis to compare simpler model structures. Through this approach, predictive uncertainty can be quantified relative to a known reference solution.

  18. Uncertainty in recharge estimation: impact on groundwater vulnerability assessments for the Pearl Harbor Basin, O'ahu, Hawai'i, U.S.A.

    NASA Astrophysics Data System (ADS)

    Giambelluca, Thomas W.; Loague, Keith; Green, Richard E.; Nullet, Michael A.

    1996-06-01

    In this paper, uncertainty in recharge estimates is investigated relative to its impact on assessments of groundwater contamination vulnerability using a relatively simple pesticide mobility index, the attenuation factor (AF). We employ a combination of first-order uncertainty analysis (FOUA) and sensitivity analysis to investigate recharge uncertainties for agricultural land on the island of O'ahu, Hawai'i, that is currently, or has been in the past, under sugarcane or pineapple cultivation. Uncertainty in recharge due to recharge component uncertainties is 49% of the mean for sugarcane and 58% of the mean for pineapple. The components contributing the largest amounts of uncertainty to the recharge estimate are irrigation in the case of sugarcane and precipitation in the case of pineapple. For a suite of pesticides formerly or currently used in the region, the contribution to AF uncertainty of recharge uncertainty was compared with the contributions of the other AF components: the retardation factor (RF), a measure of the effects of sorption; the soil-water content at field capacity (ΘFC); and the pesticide half-life (t1/2). Depending upon the pesticide, the contribution of recharge to uncertainty ranks second or third among the four AF components tested. The natural temporal variability of recharge is another source of uncertainty in AF, because the index is calculated using the time-averaged recharge rate. Relative to the mean, recharge variability is 10%, 44%, and 176% for the annual, monthly, and daily time scales, respectively, under sugarcane, and 31%, 112%, and 344%, respectively, under pineapple. In general, uncertainty in AF associated with temporal variability in recharge at all time scales exceeds AF. For chemicals such as atrazine or diuron under sugarcane, and atrazine or bromacil under pineapple, the range of AF uncertainty due to temporal variability in recharge encompasses significantly higher levels of leaching potential at some locations than that indicated by the AF estimate.
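
    As a hedged illustration, the sketch below evaluates an attenuation-factor index of the exponential travel-time/decay form commonly attributed to Rao and co-workers and estimates a first-order (FOUA-style) contribution of recharge uncertainty to AF; the functional form is assumed here for illustration and all parameter values are invented.

```python
# Attenuation factor and a first-order uncertainty contribution from recharge.
import numpy as np

def attenuation_factor(depth_m, RF, theta_fc, recharge_m_per_d, half_life_d):
    """AF = exp(-0.693 * depth * RF * theta_fc / (recharge * half-life))."""
    return np.exp(-0.693 * depth_m * RF * theta_fc /
                  (recharge_m_per_d * half_life_d))

# Illustrative pesticide/soil parameters
depth, RF, theta_fc, t_half = 2.0, 2.0, 0.3, 100.0   # m, -, -, days
q_mean, q_std = 5.0e-3, 2.0e-3                        # recharge rate, m/day

af_mean = attenuation_factor(depth, RF, theta_fc, q_mean, t_half)

# First-order contribution of recharge uncertainty (finite-difference gradient)
h = 1e-6
dAF_dq = (attenuation_factor(depth, RF, theta_fc, q_mean + h, t_half) - af_mean) / h
af_std_from_q = abs(dAF_dq) * q_std
print(f"AF ~ {af_mean:.3f}, std due to recharge uncertainty ~ {af_std_from_q:.3f}")
```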

  19. Uncertainty analysis of accident notification time and emergency medical service response time in work zone traffic accidents.

    PubMed

    Meng, Qiang; Weng, Jinxian

    2013-01-01

    Taking into account the uncertainty caused by exogenous factors, the accident notification time (ANT) and emergency medical service (EMS) response time were modeled as 2 random variables following the lognormal distribution. Their mean values and standard deviations were respectively formulated as functions of environmental variables including crash time, road type, weekend, holiday, light condition, weather, and work zone type. Work zone traffic accident data from the Fatality Analysis Report System between 2002 and 2009 were utilized to determine the distributions of the ANT and the EMS arrival time in the United States. A mixed logistic regression model, taking into account the uncertainty associated with the ANT and the EMS response time, was developed to estimate the risk of death. The results showed that the uncertainty of the ANT was primarily influenced by crash time and road type, whereas the uncertainty of the EMS response time was greatly affected by road type, weather, and light conditions. In addition, work zone accidents occurring during a holiday and in poor light conditions were found to be statistically associated with a longer mean ANT and longer EMS response time. The results also show that shortening the ANT was a more effective approach in reducing the risk of death than shortening the EMS response time in work zones. To shorten the ANT and the EMS response time, work zone activities are suggested to be undertaken during non-holidays, during the daytime, and in good weather and light conditions.

  20. The impact of inter-annual variability of annual cycle on long-term persistence of surface air temperature in long historical records

    NASA Astrophysics Data System (ADS)

    Deng, Qimin; Nian, Da; Fu, Zuntao

    2018-02-01

    Previous studies in the literature show that the annual cycle of surface air temperature (SAT) is changing in both amplitude and phase, and that the SAT departures from the annual cycle are long-term correlated. However, the classical definition of temperature anomalies is based on the assumption that the annual cycle is constant, which contradicts the fact that the annual cycle is changing. How to quantify the impact of the changing annual cycle on the long-term correlation of temperature anomaly variability still remains open. In this paper, a recently developed data-adaptive analysis tool, nonlinear mode decomposition (NMD), is used to extract and remove the time-varying annual cycle, yielding newly defined temperature anomalies in which the time-dependent amplitude of the annual cycle has been taken into account. By means of detrended fluctuation analysis, the impact of the inter-annual variability of the annual-cycle amplitude on the estimated long-term correlation of long historical temperature anomalies in Europe has been quantified. The results show that the classical climatological annual cycle lacks inter-annual fluctuation, which leads to a maximum artificial deviation centered around 600 days. This maximum artificial deviation is crucial to defining the scaling range and estimating the long-term persistence exponent accurately. Selecting a different scaling range could lead to an overestimation or underestimation of the long-term persistence exponent. By using the NMD method to extract the inter-annual fluctuations of the annual cycle, this artificial crossover can be weakened, extending the scaling range and reducing the uncertainties.
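
    For readers unfamiliar with the scaling analysis, here is a minimal, hedged sketch of detrended fluctuation analysis (DFA-1) used to estimate a long-term persistence exponent from an anomaly series; the input below is white noise, so the expected exponent is about 0.5, and the NMD preprocessing step is not reproduced.

```python
# DFA-1: fluctuation function F(s) and persistence exponent from its log-log slope.
import numpy as np

def dfa(x, scales):
    """Return the fluctuation function F(s) for DFA with linear detrending."""
    y = np.cumsum(x - np.mean(x))                 # profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[: n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        rms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)          # local linear trend
            rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    return np.array(F)

rng = np.random.default_rng(0)
x = rng.normal(size=20000)                        # toy anomaly series
scales = np.unique(np.logspace(1, 3, 20).astype(int))
F = dfa(x, scales)

# Persistence exponent = slope of log F(s) vs log s over the chosen scaling range
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(f"DFA exponent ~ {alpha:.2f} (white-noise expectation ~ 0.5)")
```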

  1. Dynamics of entanglement and uncertainty relation in coupled harmonic oscillator system: exact results

    NASA Astrophysics Data System (ADS)

    Park, DaeKil

    2018-06-01

    The dynamics of entanglement and the uncertainty relation is explored by solving the time-dependent Schrödinger equation for a coupled harmonic oscillator system analytically when the angular frequencies and coupling constant are arbitrarily time dependent. We derive the spectral and Schmidt decompositions for the vacuum solution. Using the decompositions, we derive the analytical expressions for the von Neumann and Rényi entropies. Making use of the Wigner distribution function defined in phase space, we derive the time dependence of the position-momentum uncertainty relations. To show the dynamics of entanglement and the uncertainty relation graphically, we introduce two toy models and one realistic quenched model. While the dynamics can be conjectured by simple consideration in the toy models, the dynamics in the realistic quenched model is somewhat different from that in the toy models. In particular, the dynamics of entanglement exhibits a pattern similar to the dynamics of the uncertainty parameter in the realistic quenched model.

  2. Precise measurement of the performance of thermoelectric modules

    NASA Astrophysics Data System (ADS)

    Díaz-Chao, Pablo; Muñiz-Piniella, Andrés; Selezneva, Ekaterina; Cuenat, Alexandre

    2016-08-01

    The potential exploitation of thermoelectric modules in mass market applications such as exhaust gas heat recovery in combustion engines requires an accurate knowledge of their performance. Further expansion of the market will also require confidence in the results provided by suppliers to end-users. However, large variations in performance and maximum operating point are observed for identical modules when tested by different laboratories. Here, we present the first metrological study of the impact of mounting and testing procedures on the precision of thermoelectric module measurements. Variability in the electrical output due to mechanical pressure or the type of thermal interface material is quantified for the first time. The respective contributions of the temperature difference and the mean temperature to the variation in the output performance are quantified. The contribution of these factors to the total uncertainties in module characterisation is detailed.

  3. Online stochastic optimization of radiotherapy patient scheduling.

    PubMed

    Legrain, Antoine; Fortin, Marie-Andrée; Lahrichi, Nadia; Rousseau, Louis-Martin

    2015-06-01

    The effective management of a cancer treatment facility for radiation therapy depends mainly on optimizing the use of the linear accelerators. In this project, we schedule patients on these machines taking into account their priority for treatment, the maximum waiting time before the first treatment, and the treatment duration. We collaborate with the Centre Intégré de Cancérologie de Laval to determine the best scheduling policy. Furthermore, we integrate the uncertainty related to the arrival of patients at the center. We develop a hybrid method combining stochastic optimization and online optimization to better meet the needs of central planning. We use information on the future arrivals of patients to provide an accurate picture of the expected utilization of resources. Results based on real data show that our method outperforms the policies typically used in treatment centers.

  4. [Evaluation of pollution of an urban area by level of heavy metals in snow cover].

    PubMed

    Stepanova, N V; Khamitova, R Ia; Petrova, R S

    2003-01-01

    The goal of this study was to systematize various methodological approaches to evaluating the contamination of the snow cover with heavy metals (HM), using Kazan, an industrial city with diversified industry, as an example. The findings suggest that the contamination of the snow cover should be characterized by the actual input of an element per unit area of the snow cover over a defined period of time rather than by the concentration of HM per unit volume of snow water (mg/l), which minimizes the uncertainties associated with spatial and temporal variations of the snow cover. The index of maximum allowable input, which is of practical value, may be used to objectively grade the pollution of the snow cover by estimating the amount of an incoming element and its toxicity.

  5. Pinhole occulter experiment

    NASA Technical Reports Server (NTRS)

    Ring, Jeff; Pflug, John

    1987-01-01

    Viewgraphs and charts from a briefing summarize the accomplishments, results, conclusions, and recommendations of a feasibility study using the Pinhole Occulter Facility (POF). Accomplishments for 1986 include: (1) improved IPS Gimbal Model; (2) improved Crew Motion Disturbance Model; (3) use of existing shuttle on-orbit simulation to study the effects of orbiter attitude deadband size on POF performance; (4) increased understanding of maximum performance expected from current actuator/sensor set; (5) use of TREETOPS nonlinear time domain program to obtain system dynamics describing the complex multibody flexible structures; (6) use of HONEY-X design tool to design and evaluate multivariable compensator for stability, robustness, and performance; (7) application of state-of-the-art compensator design methodology Linear Quadratic Gaussian/Loop Transfer Recovery (LQG/LTR); and (8) examination of tolerance required on knowledge of the POF boom flexible mode frequencies to insure stability, using structure uncertainty analysis.

  6. Castings Airworthiness

    DTIC Science & Technology

    1989-05-01

    also drawn up by Vereinigte Flugtechnischen Werken as a technoeconomic analysis. The aerospace industry will be looking for the following... ultimate load without rupture. These conditions are verified by analysis and test. The limit load is the maximum load the vehicle is expected to see... the uncertainty factor (formerly known as the factor of safety) which accounts for variations in manufacture, defects not assumed in analysis but

  7. Estimates of Zenith Total Delay trends from GPS reprocessing with autoregressive process

    NASA Astrophysics Data System (ADS)

    Klos, Anna; Hunegnaw, Addisu; Teferle, Felix Norman; Ebuy Abraha, Kibrom; Ahmed, Furqan; Bogusz, Janusz

    2017-04-01

    Nowadays, near real-time Zenith Total Delay (ZTD) estimates from Global Positioning System (GPS) observations are routinely assimilated into numerical weather prediction (NWP) models to improve the reliability of forecasts. On the other hand, ZTD time series derived from homogeneously re-processed GPS observations over long periods have the potential to improve our understanding of climate change on various temporal and spatial scales. With such time series only recently reaching somewhat adequate time spans, the application of GPS-derived ZTD estimates to climate monitoring is still to be developed further. In this research, we examine the character of noise in ZTD time series for 1995-2015 in order to estimate more realistic magnitudes of the trend and its uncertainty than would be obtained if the stochastic properties were not taken into account. Furthermore, the hourly sampled, homogeneously re-processed and carefully homogenized ZTD time series from over 700 globally distributed stations were classified into five major climate zones. We found that the amplitudes of annual signals reach values of 10-150 mm, with minimum values for the polar and Alpine zones. The amplitudes of daily signals were estimated to be 0-12 mm, with maximum values found for the dry zone. We examined seven different noise models for the residual ZTD time series after modelling all known periodicities. This identified a combination of white noise plus an autoregressive process of fourth order as optimal to match all changes in power of the ZTD data. When the stochastic properties are neglected, i.e. a pure white noise model is employed, only 11 of the 120 trends were insignificant. Using the optimum noise model, more than half of the 120 examined trends became insignificant. We show that the uncertainty of ZTD trends is underestimated by a factor of 3-12 when the stochastic properties of the ZTD time series are ignored, and we conclude that it is essential to properly model the noise characteristics of such time series when interpretations in terms of climate change are to be performed.

  8. A Gaussian Processes Technique for Short-term Load Forecasting with Considerations of Uncertainty

    NASA Astrophysics Data System (ADS)

    Ohmi, Masataro; Mori, Hiroyuki

    In this paper, an efficient method based on Gaussian Processes is proposed for short-term load forecasting. Short-term load forecasting plays a key role in smooth power system operation, such as economic load dispatch, unit commitment, etc. Recently, the deregulated and competitive power market has increased the degree of uncertainty. As a result, it has become more important to obtain better prediction results to reduce cost. One of the most important aspects is that power system operators need the upper and lower bounds of the predicted load to deal with the uncertainty, while also requiring more accurate predicted values. The proposed method is based on a Bayesian model in which the output is expressed as a distribution rather than a point. To realize the model efficiently, this paper proposes a Gaussian Process that combines the Bayesian linear model with a kernel machine to obtain the distribution of the predicted value. The proposed method is successfully applied to real data for daily maximum load forecasting.
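
    A minimal sketch of the core idea, Gaussian Process regression returning a predictive mean together with upper and lower bounds, is given below. The RBF kernel, the synthetic daily load curve, and all hyperparameters are illustrative assumptions, not the authors' model or data.

      # Minimal Gaussian Process regression sketch: predictive mean and bounds for a load-like series.
      import numpy as np

      def rbf_kernel(a, b, length=2.0, variance=1.0):
          d2 = (a[:, None] - b[None, :]) ** 2
          return variance * np.exp(-0.5 * d2 / length**2)

      rng = np.random.default_rng(1)
      x_train = np.linspace(0, 24, 25)                         # hours of day
      y_train = 10 + 3 * np.sin(2 * np.pi * x_train / 24) + rng.normal(0, 0.3, x_train.size)
      x_test = np.linspace(0, 24, 200)

      noise = 0.3**2
      K = rbf_kernel(x_train, x_train) + noise * np.eye(x_train.size)
      K_s = rbf_kernel(x_train, x_test)
      K_ss = rbf_kernel(x_test, x_test)

      alpha = np.linalg.solve(K, y_train)
      mean = K_s.T @ alpha                                     # predictive mean
      cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
      std = np.sqrt(np.clip(np.diag(cov), 0, None))

      upper, lower = mean + 1.96 * std, mean - 1.96 * std      # bounds an operator could use
      print(f"peak predicted load ~ {mean.max():.1f} (mean 95% band width ~ {np.mean(upper - lower):.2f})")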

  9. Pricing of medical devices under coverage uncertainty--a modelling approach.

    PubMed

    Girling, Alan J; Lilford, Richard J; Young, Terry P

    2012-12-01

    Product vendors and manufacturers are increasingly aware that purchasers of health care will fund new clinical treatments only if they are perceived to deliver value-for-money. This influences companies' internal commercial decisions, including the price they set for their products. Other things being equal, there is a price threshold, which is the maximum price at which the device will be funded and which, if its value were known, would play a central role in price determination. This paper examines the problem of pricing a medical device from the vendor's point of view in the presence of uncertainty about what the price threshold will be. A formal solution is obtained by maximising the expected value of the net revenue function, assuming a Bayesian prior distribution for the price threshold. A least admissible price is identified. The model can also be used as a tool for analysing proposed pricing policies when no formal prior specification of uncertainty is available. Copyright © 2011 John Wiley & Sons, Ltd.
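
    The pricing problem sketched in this abstract, maximizing expected net revenue when the funding threshold is uncertain, can be illustrated with a few lines of code. The lognormal prior on the threshold, the unit cost, and the price grid below are illustrative assumptions and do not reproduce the paper's model.

      # Sketch: choose the device price p that maximizes expected net revenue
      # (p - c) * Pr(threshold >= p) under a prior for the unknown price threshold.
      import numpy as np
      from scipy import stats

      unit_cost = 200.0
      prior = stats.lognorm(s=0.4, scale=600.0)      # assumed prior belief about the funding threshold

      prices = np.linspace(unit_cost, 2000.0, 2001)
      expected_net_revenue = (prices - unit_cost) * prior.sf(prices)   # sf = Pr(T >= p)
      p_star = prices[np.argmax(expected_net_revenue)]
      print(f"optimal price ~ {p_star:.0f}, expected net revenue per unit ~ "
            f"{expected_net_revenue.max():.0f}")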

  10. Assessment of statistical uncertainty in the quantitative analysis of solid samples in motion using laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Cabalín, L. M.; González, A.; Ruiz, J.; Laserna, J. J.

    2010-08-01

    Statistical uncertainty in the quantitative analysis of solid samples in motion by laser-induced breakdown spectroscopy (LIBS) has been assessed. For this purpose, a LIBS demonstrator was designed and constructed in our laboratory. The LIBS system consisted of a laboratory-scale conveyor belt, a compact optical module and a Nd:YAG laser operating at 532 nm. The speed of the conveyor belt was variable and could be adjusted up to a maximum of 2 m s⁻¹. Statistical uncertainty in the analytical measurements was estimated in terms of precision (reproducibility and repeatability) and accuracy. The results obtained by LIBS on shredded scrap samples under real conditions demonstrate that the analytical precision and accuracy of LIBS are dependent on the sample geometry, position on the conveyor belt and surface cleanliness. Flat, relatively clean scrap samples exhibited acceptable reproducibility and repeatability; by contrast, samples with an irregular shape or a dirty surface exhibited poor relative standard deviations.
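
    The precision figures discussed here (repeatability and reproducibility) can be summarized with a short sketch that separates within-sample and between-sample scatter in replicate measurements. The synthetic intensity-ratio data and the simple pooled-variance estimators below are illustrative assumptions, not the study's data or exact statistical procedure.

      # Sketch: repeatability (within-sample) and reproducibility (between-sample) RSDs
      # from replicate measurements, e.g. LIBS line-intensity ratios. Synthetic data only.
      import numpy as np

      rng = np.random.default_rng(2)
      n_samples, n_replicates = 10, 20
      true_levels = rng.normal(1.00, 0.05, n_samples)                 # sample-to-sample variation
      data = true_levels[:, None] + rng.normal(0, 0.02, (n_samples, n_replicates))

      grand_mean = data.mean()
      within_sd = np.sqrt(np.mean(data.var(axis=1, ddof=1)))          # repeatability (pooled)
      between_sd = data.mean(axis=1).std(ddof=1)                      # reproducibility (approx.)
      print(f"repeatability RSD ~ {100 * within_sd / grand_mean:.1f} %, "
            f"reproducibility RSD ~ {100 * between_sd / grand_mean:.1f} %")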

  11. Late-stage pharmaceutical R&D and pricing policies under two-stage regulation.

    PubMed

    Jobjörnsson, Sebastian; Forster, Martin; Pertile, Paolo; Burman, Carl-Fredrik

    2016-12-01

    We present a model combining the two regulatory stages relevant to the approval of a new health technology: the authorisation of its commercialisation and the insurer's decision about whether to reimburse its cost. We show that the degree of uncertainty concerning the true value of the insurer's maximum willingness to pay for a unit increase in effectiveness has a non-monotonic impact on the optimal price of the innovation, the firm's expected profit and the optimal sample size of the clinical trial. A key result is that there exists a range of values of the uncertainty parameter over which a reduction in uncertainty benefits the firm, the insurer and patients. We consider how different policy parameters may be used as incentive mechanisms, and the incentives to invest in R&D for marginal projects such as those targeting rare diseases. The model is calibrated using data on a new treatment for cystic fibrosis. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Deterministic physical systems under uncertain initial conditions: the case of maximum entropy applied to projectile motion

    NASA Astrophysics Data System (ADS)

    Montecinos, Alejandra; Davis, Sergio; Peralta, Joaquín

    2018-07-01

    The kinematics and dynamics of deterministic physical systems have been a foundation of our understanding of the world since Galileo and Newton. For real systems, however, uncertainty is usually present, whether through external forces such as friction or through a lack of precise knowledge of the system's initial conditions. In this work we focus on the latter case and describe the use of inference methodologies to determine the statistical properties of classical systems subject to uncertain initial conditions. In particular, we describe the application of the formalism of maximum entropy (MaxEnt) inference to the problem of projectile motion, given information about the average horizontal range over many realizations. By using MaxEnt we can invert the problem and use the provided information on the average range to reduce the original uncertainty in the initial conditions. Additional insight into the probabilities of the initial conditions, and into the distribution of projectile paths itself, can also be obtained from the value of the average horizontal range. The wide applicability of this procedure, as well as its ease of use, makes it a useful tool with which to revisit a large number of physics problems, from classrooms to frontier research.
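
    A minimal sketch of the MaxEnt inversion described here: take the launch speed as the uncertain initial condition, constrain the expected horizontal range to an observed average, and the maximum-entropy density over a bounded speed interval takes the exponential-family form p(v) ∝ exp(-λR(v)), with λ fixed numerically by the constraint. The speed support, the fixed 45° launch angle, and the target range below are assumptions for illustration, not the paper's worked example.

      import numpy as np
      from scipy.optimize import brentq

      g, theta = 9.81, np.pi / 4                       # fixed 45-degree launch angle (assumed)
      v = np.linspace(0.0, 50.0, 2001)                 # assumed bounded support for the speed (m/s)
      dv = v[1] - v[0]
      R = v**2 * np.sin(2 * theta) / g                 # horizontal range as a function of speed
      R_target = 60.0                                  # assumed observed average range (m)

      def mean_range(lam):
          w = np.exp(-lam * R)                         # MaxEnt density p(v) proportional to exp(-lam*R(v))
          p = w / (w.sum() * dv)
          return float((p * R).sum() * dv)

      lam = brentq(lambda l: mean_range(l) - R_target, -0.5, 0.5)
      p = np.exp(-lam * R)
      p /= p.sum() * dv
      mean_speed = (p * v).sum() * dv
      print(f"Lagrange multiplier lambda = {lam:.4f}, E[v] = {mean_speed:.1f} m/s, "
            f"E[R] check = {mean_range(lam):.1f} m")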

  13. Low-thrust chemical propulsion system propellant expulsion and thermal conditioning study

    NASA Technical Reports Server (NTRS)

    Merino, F.; Wakabayashi, I.; Pleasant, R. L.; Hill, M.

    1982-01-01

    Thermal conditioning systems for satisfying engine net positive suction pressure (NPSP) requirements, and propellant expulsion systems for achieving propellant dump during a return-to-launch site (RTLS) abort were studied for LH2/LO2 and LCH4/LO2 upper stage propellant combinations. A state-of-the-art thermal conditioning system employing helium injection beneath the liquid surface shows the lowest weight penalty for LO2 and LCH4. A technology system incorporating a thermal subcooler (heat exchanger) for engine NPSP results in the lowest weight penalty for the LH2 tank. A preliminary design of two state-of-the-art and two new technology systems indicates a weight penalty difference too small to warrant development of a LH2 thermal subcooler. Analysis results showed that the LH2/LO2 propellant expulsion system is optimized for maximum dump line diameters, whereas the LCH4/LO2 system is optimized for minimum dump line diameter (LCH4) and maximum dump line diameter (LO2). The primary uncertainty is the accurate determination of two-phase flow rates through the dump system; experimentation is not recommended because this uncertainty is not considered significant.

  14. Inverse modeling of geochemical and mechanical compaction in sedimentary basins

    NASA Astrophysics Data System (ADS)

    Colombo, Ivo; Porta, Giovanni Michele; Guadagnini, Alberto

    2015-04-01

    We study the key phenomena driving the feedback between sediment compaction processes and fluid flow in stratified sedimentary basins formed through lithification of sand and clay sediments after deposition. The processes we consider are mechanical compaction of the host rock and geochemical compaction due to quartz cementation in sandstones. Key objectives of our study include (i) quantification of the influence of uncertainty in the model input parameters on the model output and (ii) application of an inverse modeling technique to field-scale data. Proper accounting of the feedback between sediment compaction processes and fluid flow in the subsurface is key to quantifying a wide set of environmentally and industrially relevant phenomena. These include, e.g., compaction-driven brine and/or saltwater flow at depth and its influence on (a) tracer concentrations observed in shallow sediments, (b) build-up of fluid overpressure, (c) hydrocarbon generation and migration, (d) subsidence due to groundwater and/or hydrocarbon withdrawal, and (e) formation of ore deposits. The main processes driving the diagenesis of sediments after deposition are mechanical compaction due to overburden and precipitation/dissolution associated with reactive transport. The natural evolution of sedimentary basins is characterized by geological time scales, thus preventing direct and exhaustive measurement of the system's dynamical changes. The outputs of compaction models are plagued by uncertainty because of incomplete knowledge of the models and parameters governing diagenesis. Development of robust methodologies for inverse modeling and parameter estimation under uncertainty is therefore crucial to the quantification of natural compaction phenomena. We employ a numerical methodology based on three building blocks: (i) space-time discretization of the compaction process; (ii) representation of target output variables through a Polynomial Chaos Expansion (PCE); and (iii) model inversion (parameter estimation) within a maximum likelihood framework. In this context, the PCE-based surrogate model enables one to (i) minimize the computational cost associated with the (forward and inverse) modeling procedures leading to uncertainty quantification and parameter estimation, and (ii) compute the full set of Sobol indices quantifying the contribution of each uncertain parameter to the variability of the target state variables. Results are illustrated through the simulation of one-dimensional test cases. The analysis focuses on the calibration of model parameters using field cases from the literature. The quality of the parameter estimates is then analyzed as a function of the number, type and location of data.
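
    The second use of the PCE surrogate mentioned here, reading Sobol sensitivity indices directly off the expansion coefficients, can be illustrated with a toy sketch: a degree-2 Hermite PCE of a cheap two-parameter stand-in model is fitted by least squares, and first-order Sobol indices follow from the squared coefficients. The stand-in model, expansion degree, and sample size are assumptions; none of this reproduces the authors' compaction model.

      import numpy as np
      from math import factorial
      from numpy.polynomial.hermite_e import hermeval

      def toy_model(x1, x2):                       # cheap stand-in for the expensive model
          return np.exp(0.3 * x1) + 0.5 * x1 * x2 + x2**2

      def herm(x, n):                              # probabilists' Hermite He_n, orthonormal
          c = np.zeros(n + 1); c[n] = 1.0
          return hermeval(x, c) / np.sqrt(factorial(n))

      rng = np.random.default_rng(3)
      xi = rng.standard_normal((2000, 2))          # standard normal germ variables
      y = toy_model(xi[:, 0], xi[:, 1])

      multi = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]      # total degree <= 2
      Psi = np.column_stack([herm(xi[:, 0], a) * herm(xi[:, 1], b) for a, b in multi])
      coeff, *_ = np.linalg.lstsq(Psi, y, rcond=None)

      var_total = np.sum(coeff[1:] ** 2)           # orthonormal basis: variance = sum of squares
      S1 = sum(c**2 for c, (a, b) in zip(coeff, multi) if a > 0 and b == 0) / var_total
      S2 = sum(c**2 for c, (a, b) in zip(coeff, multi) if b > 0 and a == 0) / var_total
      print(f"PCE mean = {coeff[0]:.3f}, first-order Sobol indices: S1 = {S1:.2f}, S2 = {S2:.2f}")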

  15. Stochastic Simulation and Forecast of Hydrologic Time Series Based on Probabilistic Chaos Expansion

    NASA Astrophysics Data System (ADS)

    Li, Z.; Ghaith, M.

    2017-12-01

    Hydrological processes are characterized by many complex features, such as nonlinearity, dynamics and uncertainty. How to quantify and address such complexities and uncertainties has been a challenging task for water engineers and managers for decades. To support robust uncertainty analysis, an innovative approach for the stochastic simulation and forecasting of hydrologic time series is developed in this study. Probabilistic Chaos Expansions (PCEs) are established through probabilistic collocation to tackle uncertainties associated with the parameters of traditional hydrological models. The uncertainties in the model outputs are quantified as expansions in Hermite polynomials of standard normal random variables. Subsequently, multivariate analysis techniques are used to analyze the complex nonlinear relationships between meteorological inputs (e.g., temperature, precipitation, evapotranspiration, etc.) and the coefficients of the Hermite polynomials. With the established relationships between model inputs and PCE coefficients, forecasts of hydrologic time series can be generated and the uncertainties in the future time series can be further addressed. The proposed approach is demonstrated using a case study in China and is compared to a traditional stochastic simulation technique, the Markov Chain Monte Carlo (MCMC) method. Results show that the proposed approach can serve as a reliable proxy for complicated hydrological models. It can provide probabilistic forecasting in a more computationally efficient manner than the traditional MCMC method. This work provides technical support for addressing uncertainties associated with hydrological modeling and for enhancing the reliability of hydrological modeling results. The developed approach can be extended to many other complicated geophysical and environmental modeling systems to support the associated uncertainty quantification and risk analysis.
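
    The probabilistic collocation step, projecting a model response onto Hermite polynomials of a standard normal variable using Gauss-Hermite collocation points, can be sketched for a single uncertain parameter of a toy linear-reservoir model. The reservoir model, the Gaussian parameter, and the expansion order are illustrative assumptions, not the study's hydrological model or data.

      import numpy as np
      from math import factorial
      from numpy.polynomial.hermite_e import hermeval, hermegauss

      def peak_outflow(k, rain=(0.0, 5.0, 20.0, 10.0, 2.0, 0.0)):
          # Toy linear reservoir: storage update s <- s + p - k*s, outflow q = k*s.
          s, q_max = 0.0, 0.0
          for p in rain:
              s += p - k * s
              q_max = max(q_max, k * s)
          return q_max

      mu_k, sigma_k, order = 0.3, 0.05, 3              # assumed Gaussian recession parameter
      nodes, weights = hermegauss(order + 1)           # Gauss-Hermite(e) collocation points
      weights = weights / weights.sum()                # normalize to a standard normal measure

      y_nodes = np.array([peak_outflow(mu_k + sigma_k * x) for x in nodes])
      coeff = []
      for n in range(order + 1):                       # project onto orthonormal He_n / sqrt(n!)
          c = np.zeros(n + 1); c[n] = 1.0
          psi = hermeval(nodes, c) / np.sqrt(factorial(n))
          coeff.append(float(np.sum(weights * y_nodes * psi)))

      mean, std = coeff[0], np.sqrt(sum(c**2 for c in coeff[1:]))
      print(f"PCE mean peak outflow = {mean:.2f}, PCE standard deviation = {std:.2f}")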

  16. Uncertainty factors in screening ecological risk assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duke, L.D.; Taggart, M.

    2000-06-01

    The hazard quotient (HQ) method is commonly used in screening ecological risk assessments (ERAs) to estimate risk to wildlife at contaminated sites. Many ERAs use uncertainty factors (UFs) in the HQ calculation to incorporate uncertainty associated with predicting wildlife responses to contaminant exposure using laboratory toxicity data. The overall objective was to evaluate the current UF methodology as applied to screening ERAs in California, USA. Specific objectives included characterizing current UF methodology, evaluating the degree of conservatism in UFs as applied, and identifying limitations to the current approach. Twenty-four of 29 evaluated ERAs used the HQ approach: 23 of these used UFs in the HQ calculation. All 24 made interspecies extrapolations, and 21 compensated for its uncertainty, most using allometric adjustments and some using RFs. Most also incorporated uncertainty for same-species extrapolations. Twenty-one ERAs used UFs extrapolating from lowest observed adverse effect level (LOAEL) to no observed adverse effect level (NOAEL), and 18 used UFs extrapolating from subchronic to chronic exposure. Values and application of all UF types were inconsistent. Maximum cumulative UFs ranged from 10 to 3,000. Results suggest UF methodology is widely used but inconsistently applied and is not uniformly conservative relative to UFs recommended in regulatory guidelines and academic literature. The method is limited by lack of consensus among scientists, regulators, and practitioners about magnitudes, types, and conceptual underpinnings of the UF methodology.
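
    The hazard quotient calculation with cumulative uncertainty factors can be written down in a few lines. The exposure dose, the NOAEL, and the individual UF values below are illustrative assumptions, not figures drawn from the reviewed ERAs.

      # Sketch of a screening hazard quotient with cumulative uncertainty factors (UFs).
      exposure_dose = 0.8                 # assumed estimated daily dose, mg/kg-bw/day
      laboratory_noael = 15.0             # assumed laboratory NOAEL for a surrogate species, mg/kg-bw/day

      uncertainty_factors = {
          "interspecies extrapolation": 10.0,
          "LOAEL to NOAEL": 1.0,          # 1.0 here because a NOAEL (not a LOAEL) is used
          "subchronic to chronic": 10.0,
      }
      cumulative_uf = 1.0
      for value in uncertainty_factors.values():
          cumulative_uf *= value

      toxicity_reference_value = laboratory_noael / cumulative_uf
      hazard_quotient = exposure_dose / toxicity_reference_value
      verdict = "potential risk" if hazard_quotient > 1 else "below screening level"
      print(f"cumulative UF = {cumulative_uf:.0f}, TRV = {toxicity_reference_value:.3f}, "
            f"HQ = {hazard_quotient:.1f} ({verdict})")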

  17. Robust Decision Making to Support Water Quality Climate Adaptation: a Case Study in the Chesapeake Bay Watershed

    NASA Astrophysics Data System (ADS)

    Fischbach, J. R.; Lempert, R. J.; Molina-Perez, E.

    2017-12-01

    The U.S. Environmental Protection Agency (USEPA), together with state and local partners, develops watershed implementation plans designed to meet water quality standards. Climate uncertainty, along with uncertainty about future land use changes or the performance of water quality best management practices (BMPs), may make it difficult for these implementation plans to meet water quality goals. In this effort, we explored how decision making under deep uncertainty (DMDU) methods such as Robust Decision Making (RDM) could help USEPA and its partners develop implementation plans that are more robust to future uncertainty. The study focuses on one part of the Chesapeake Bay watershed, the Patuxent River, which is 2,479 sq km in area, highly urbanized, and has a rapidly growing population. We simulated the contribution of stormwater contaminants from the Patuxent to the overall Total Maximum Daily Load (TMDL) for the Chesapeake Bay under multiple scenarios reflecting climate and other uncertainties. Contaminants considered included nitrogen, phosphorus, and sediment loads. The assessment included a large set of scenario simulations using the USEPA Chesapeake Bay Program's Phase V watershed model. Uncertainties represented in the analysis included 18 downscaled climate projections (based on 6 general circulation models and 3 emissions pathways), 12 land use scenarios with different population projections and development patterns, and alternative assumptions about BMP performance standards and efficiencies associated with different suites of stormwater BMPs. Finally, we developed cost estimates for each of the performance standards and compared cost to TMDL performance as a key tradeoff for future water quality management decisions. In this talk, we describe how this research can help inform climate-related decision support at USEPA's Chesapeake Bay Program, and more generally how RDM and other DMDU methods can support improved water quality management under climate uncertainty.

  18. Measurement uncertainty associated with chromatic confocal profilometry for 3D surface texture characterization of natural human enamel.

    PubMed

    Mullan, F; Bartlett, D; Austin, R S

    2017-06-01

    To investigate the measurement performance of a chromatic confocal profilometer for quantification of the surface texture of natural human enamel in vitro. Contributions to the measurement uncertainty from all potential sources of measurement error using a chromatic confocal profilometer and surface metrology software were quantified using a series of surface metrology calibration artifacts and pre-worn enamel samples. The 3D surface texture analysis protocol was optimized across 0.04 mm² of natural and unpolished enamel undergoing dietary acid erosion (pH 3.2, titratable acidity 41.3 mmol OH/L). Flatness deviations due to the x, y stage mechanical movement were the major contribution to the measurement uncertainty, with maximum Sz flatness errors of 0.49 μm, whereas measurement noise, non-linearities in x, y and z, and enamel sample dimensional instability contributed minimal errors. The measurement errors were propagated into an uncertainty budget following a Type B uncertainty evaluation in order to calculate the combined standard uncertainty (u_c), which was ±0.28 μm. Statistically significant increases in the median (IQR) roughness (Sa) of the polished samples occurred after 15 (+0.17 (0.13) μm), 30 (+0.12 (0.09) μm) and 45 (+0.18 (0.15) μm) min of erosion (P<0.001 vs. baseline). In contrast, the natural unpolished enamel samples revealed a statistically significant decrease in Sa roughness of -0.14 (0.34) μm only after 45 min of erosion (P<0.05 vs. baseline). The main contribution to the measurement uncertainty using chromatic confocal profilometry was from flatness deviations; however, by optimizing measurement protocols, the profilometer successfully characterized surface texture changes in enamel caused by erosive wear in vitro. Copyright © 2017 The Academy of Dental Materials. All rights reserved.
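
    The Type B uncertainty budget described here combines individual standard-uncertainty components in quadrature to give the combined standard uncertainty u_c. A minimal sketch follows; the component magnitudes are illustrative assumptions chosen only to show the flatness term dominating, and do not reproduce the paper's actual budget.

      # Sketch of a GUM-style uncertainty budget: components combined in quadrature.
      import math

      components_um = {                    # assumed standard uncertainties, micrometres
          "stage flatness deviation": 0.25,
          "measurement noise": 0.05,
          "x/y/z non-linearity": 0.08,
          "sample dimensional instability": 0.04,
      }
      u_c = math.sqrt(sum(u**2 for u in components_um.values()))
      U_95 = 2 * u_c                       # expanded uncertainty, coverage factor k = 2
      print(f"u_c = {u_c:.2f} um, expanded uncertainty (k=2) U = {U_95:.2f} um")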

  19. Affective decision making under uncertainty during a plausible aviation task: an fMRI study.

    PubMed

    Causse, Mickaël; Péran, Patrice; Dehais, Frédéric; Caravasso, Chiara Falletta; Zeffiro, Thomas; Sabatini, Umberto; Pastor, Josette

    2013-05-01

    In aeronautics, plan continuation error (PCE) represents failure to revise a flight plan despite emerging evidence suggesting that it is no longer safe. Assuming that PCE may be associated with a shift from cold to hot reasoning, we hypothesized that this transition may result from a large range of strong negative emotional influences linked with the decision to abort a landing and circle for a repeat attempt, referred to as a "go-around". We investigated this hypothesis by combining functional neuroimaging with an ecologically valid aviation task performed under contextual variation in incentive and situational uncertainty. Our goal was to identify regional brain activity related to the sorts of conservative or liberal decision-making strategies engaged when participants were both exposed to a financial payoff matrix constructed to bias responses in favor of landing acceptance, while they were simultaneously experiencing maximum levels of uncertainty related to high levels of stimulus ambiguity. Combined with the observed behavioral outcomes, our neuroimaging results revealed a shift from cold to hot decision making in response to high uncertainty when participants were exposed to the financial incentive. Most notably, while we observed activity increases in response to uncertainty in many frontal regions such as dorsolateral prefrontal cortex (DLPFC) and anterior cingulate cortex (ACC), less overall activity was observed when the reward was combined with uncertainty. Moreover, participants with poor decision making, quantified as a lower discriminability index d', exhibited riskier behavior coupled with lower activity in the right DLPFC. These outcomes suggest a disruptive effect of biased financial incentive and high uncertainty on the rational decision-making neural network, and consequently, on decision relevance. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. GCR Environmental Models III: GCR Model Validation and Propagated Uncertainties in Effective Dose

    NASA Technical Reports Server (NTRS)

    Slaba, Tony C.; Xu, Xiaojing; Blattnig, Steve R.; Norman, Ryan B.

    2014-01-01

    This is the last of three papers focused on quantifying the uncertainty associated with galactic cosmic ray (GCR) models used for space radiation shielding applications. In the first paper, it was found that GCR ions with Z>2 and boundary energy below 500 MeV/nucleon induce less than 5% of the total effective dose behind shielding. This is an important finding since GCR model development and validation have been heavily biased toward Advanced Composition Explorer/Cosmic Ray Isotope Spectrometer measurements below 500 MeV/nucleon. Weights were also developed that quantify the relative contribution of defined GCR energy and charge groups to effective dose behind shielding. In the second paper, it was shown that these weights could be used to efficiently propagate GCR model uncertainties into effective dose behind shielding. In this work, uncertainties are quantified for a few commonly used GCR models. A validation metric is developed that accounts for measurement uncertainty, and the metric is coupled to the fast uncertainty propagation method. For this work, the Badhwar-O'Neill (BON) 2010 and 2011 and the Matthia GCR models are compared to an extensive measurement database. It is shown that BON2011 systematically overestimates heavy ion fluxes in the range 0.5-4 GeV/nucleon. The BON2010 and BON2011 also show moderate and large errors in reproducing past solar activity near the 2000 solar maximum and 2010 solar minimum. It is found that all three models induce relative errors in effective dose in the interval [-20%, 20%] at a 68% confidence level. The BON2010 and Matthia models are found to have similar overall uncertainty estimates and are preferred for space radiation shielding applications.
