Performance evaluation of spectral vegetation indices using a statistical sensitivity function
Ji, Lei; Peters, Albert J.
2007-01-01
A great number of spectral vegetation indices (VIs) have been developed to estimate biophysical parameters of vegetation. Traditional techniques for evaluating the performance of VIs are regression-based statistics, such as the coefficient of determination and root mean square error. These statistics, however, are not capable of quantifying the detailed relationship between VIs and biophysical parameters because the sensitivity of a VI is usually a function of the biophysical parameter instead of a constant. To better quantify this relationship, we developed a “sensitivity function” for measuring the sensitivity of a VI to biophysical parameters. The sensitivity function is defined as the first derivative of the regression function, divided by the standard error of the dependent variable prediction. The function elucidates the change in sensitivity over the range of the biophysical parameter. The Student's t- or z-statistic can be used to test the significance of VI sensitivity. Additionally, we developed a “relative sensitivity function” that compares the sensitivities of two VIs when the biophysical parameters are unavailable.
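In symbols, the definition above can be sketched as follows (the notation here is illustrative, not necessarily the authors'): if a vegetation index is modeled as VI = f(x), a fitted regression on a biophysical parameter x, then

```latex
% Sensitivity function of a vegetation index with respect to a biophysical
% parameter x: the slope of the fitted regression divided by the standard
% error of the predicted VI value (illustrative notation, not the authors').
\[
  s(x) \;=\; \frac{\mathrm{d}\hat{f}(x)/\mathrm{d}x}{\mathrm{SE}\!\left[\hat{y}(x)\right]}
\]
% Significance of the sensitivity at a given x can then be tested with a
% Student's t- or z-statistic applied to s(x), as described in the abstract.
```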
NASA Astrophysics Data System (ADS)
Qian, Y.; Wang, C.; Huang, M.; Berg, L. K.; Duan, Q.; Feng, Z.; Shrivastava, M. B.; Shin, H. H.; Hong, S. Y.
2016-12-01
This study aims to quantify the relative importance and uncertainties of different physical processes and parameters in affecting simulated surface fluxes and land-atmosphere coupling strength over the Amazon region. We used two-legged coupling metrics, which include both terrestrial (soil moisture to surface fluxes) and atmospheric (surface fluxes to atmospheric state or precipitation) legs, to diagnose the land-atmosphere interaction and coupling strength. Observations made using the Department of Energy's Atmospheric Radiation Measurement (ARM) Mobile Facility during the GoAmazon field campaign, together with satellite and reanalysis data, are used to evaluate model performance. To quantify the uncertainty in physical parameterizations, we performed a 120-member ensemble of simulations with the WRF model using a stratified experimental design including 6 cloud microphysics, 3 convection, 6 PBL and surface layer, and 3 land surface schemes. A multiple-way analysis of variance approach is used to quantitatively analyze the inter- and intra-group (scheme) means and variances. To quantify parameter sensitivity, we conducted an additional 256 WRF simulations in which an efficient sampling algorithm is used to explore the multi-dimensional parameter space. Three uncertainty quantification approaches are applied for sensitivity analysis (SA) of multiple variables of interest to 20 selected parameters in the YSU PBL and MM5 surface layer schemes. Results show consistent parameter sensitivity across different SA methods. We found that 5 out of 20 parameters contribute more than 90% of the total variance, and that first-order effects dominate compared to the interaction effects. Results of this uncertainty quantification study serve as guidance for better understanding the roles of different physical processes in land-atmosphere interactions, quantifying model uncertainties from various sources such as physical processes, parameters and structural errors, and providing insights for improving the model physics parameterizations.
Pant, Sanjay
2018-05-01
A new class of functions, called 'information sensitivity functions' (ISFs), which quantify the information gain about the parameters through the measurements/observables of a dynamical system, is presented. These functions can be computed easily from classical sensitivity functions alone and are based on Bayesian and information-theoretic approaches. While the marginal information gain is quantified by the decrease in differential entropy, correlations between arbitrary sets of parameters are assessed through mutual information. For individual parameters, these information gains are also presented as marginal posterior variances and, to assess the effect of correlations, as conditional variances when the other parameters are given. The easy-to-interpret ISFs can be used to (a) identify time intervals or regions in dynamical system behaviour where information about the parameters is concentrated; (b) assess the effect of measurement noise on the information gain for the parameters; (c) assess whether sufficient information in an experimental protocol (input, measurements and their frequency) is available to identify the parameters; (d) assess correlation in the posterior distribution of the parameters to identify the sets of parameters that are likely to be indistinguishable; and (e) assess identifiability problems for particular sets of parameters. © 2018 The Authors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughes, Justin Matthew
These are the slides for a graduate presentation at Mississippi State University. It covers the following: the BRL shaped-charge geometry in PAGOSA, a mesh refinement study, surrogate modeling using a radial basis function network (RBFN), ruling out parameters using sensitivity analysis (equation of state study), uncertainty quantification (UQ) methodology, and sensitivity analysis (SA) methodology. In summary, a mesh convergence study was used to ensure that solutions were numerically stable by comparing PDV data between simulations. A Design of Experiments (DOE) method was used to reduce the simulation space to study the effects of the Jones-Wilkins-Lee (JWL) parameters for the Composition B main charge. Uncertainty was quantified by computing the 95% data range about the median of simulation output using a brute force Monte Carlo (MC) random sampling method. Parameter sensitivities were quantified using the Fourier Amplitude Sensitivity Test (FAST) spectral analysis method, where it was determined that detonation velocity, initial density, C1, and B1 controlled jet tip velocity.
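A minimal sketch of the brute-force Monte Carlo uncertainty range described above (illustrative only: `run_pagosa` is a placeholder for the actual shaped-charge simulation, and the sampled parameter ranges are invented, not the JWL values used in the study):

```python
import numpy as np

rng = np.random.default_rng(0)

def run_pagosa(jwl_params):
    """Placeholder for a PAGOSA shaped-charge run returning jet tip velocity."""
    a, b, r1 = jwl_params
    return 8.0 + 0.1 * a - 0.05 * b + 0.02 * r1 + rng.normal(0, 0.01)

# Sample parameters uniformly over assumed uncertainty ranges (illustrative values).
n = 10_000
samples = np.column_stack([
    rng.uniform(5.0, 6.0, n),   # A-like JWL parameter
    rng.uniform(0.1, 0.2, n),   # B-like JWL parameter
    rng.uniform(4.0, 5.0, n),   # R1-like JWL parameter
])

tip_velocity = np.array([run_pagosa(p) for p in samples])

# 95% data range about the median, as in the brute-force MC approach above.
lo, med, hi = np.percentile(tip_velocity, [2.5, 50.0, 97.5])
print(f"median = {med:.3f}, 95% range = [{lo:.3f}, {hi:.3f}]")
```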
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groen, E.A., E-mail: Evelyne.Groen@gmail.com; Heijungs, R.; Leiden University, Einsteinweg 2, Leiden 2333 CC
Life cycle assessment (LCA) is an established tool to quantify the environmental impact of a product. A good assessment of uncertainty is important for making well-informed decisions in comparative LCA, as well as for correctly prioritising data collection efforts. Under- or overestimation of output uncertainty (e.g. output variance) will lead to incorrect decisions in such matters. The presence of correlations between input parameters during uncertainty propagation can increase or decrease the output variance. However, most LCA studies that include uncertainty analysis ignore correlations between input parameters during uncertainty propagation, which may lead to incorrect conclusions. Two approaches to include correlations between input parameters during uncertainty propagation and global sensitivity analysis were studied: an analytical approach and a sampling approach. The use of both approaches is illustrated for an artificial case study of electricity production. Results demonstrate that both approaches yield approximately the same output variance and sensitivity indices for this specific case study. Furthermore, we demonstrate that the analytical approach can be used to quantify the risk of ignoring correlations between input parameters during uncertainty propagation in LCA. We demonstrate that: (1) we can predict whether including correlations among input parameters in uncertainty propagation will increase or decrease output variance; (2) we can quantify the risk of ignoring correlations on the output variance and the global sensitivity indices. Moreover, this procedure requires only little data. - Highlights: • Ignoring correlation leads to under- or overestimation of the output variance. • We demonstrated that the risk of ignoring correlation can be quantified. • The procedure proposed is generally applicable in life cycle assessment. • In some cases, ignoring correlation has a minimal effect on decision-making tools.
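A small sketch contrasting the two propagation approaches on a toy model (this is not the paper's electricity case study; the model, means, and correlation values are assumed purely for illustration):

```python
import numpy as np

# Toy LCA-style model: impact as a function of three uncertain inputs (illustrative).
def impact(x):
    return 2.0 * x[0] + 0.5 * x[1] * x[2]

mu = np.array([1.0, 3.0, 0.8])           # mean input parameters
sd = np.array([0.1, 0.3, 0.05])          # standard deviations
rho = np.array([[1.0, 0.6, 0.0],         # assumed correlation between inputs
                [0.6, 1.0, 0.0],
                [0.0, 0.0, 1.0]])
cov = np.outer(sd, sd) * rho

# Analytical (first-order Taylor) propagation: Var(y) ~ g' C g, with g the gradient.
eps = 1e-6
grad = np.array([(impact(mu + eps * np.eye(3)[i]) - impact(mu - eps * np.eye(3)[i])) / (2 * eps)
                 for i in range(3)])
var_with_corr = grad @ cov @ grad
var_no_corr = grad @ np.diag(sd ** 2) @ grad   # what you get if correlations are ignored

# Sampling approach for comparison.
rng = np.random.default_rng(1)
xs = rng.multivariate_normal(mu, cov, size=200_000)
var_mc = np.var([impact(x) for x in xs])

print(var_with_corr, var_no_corr, var_mc)
```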
Quantifying uncertainty and sensitivity in sea ice models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urrego Blanco, Jorge Rolando; Hunke, Elizabeth Clare; Urban, Nathan Mark
The Los Alamos Sea Ice model has a number of input parameters for which accurate values are not always well established. We conduct a variance-based sensitivity analysis of hemispheric sea ice properties to 39 input parameters. The method accounts for non-linear and non-additive effects in the model.
Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav
2015-01-01
Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects are described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close “neighborhood” of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa. PMID:26327290
Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urrego-Blanco, Jorge Rolando; Urban, Nathan Mark; Hunke, Elizabeth Clare
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. Lastly, it is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model
NASA Astrophysics Data System (ADS)
Urrego-Blanco, Jorge R.; Urban, Nathan M.; Hunke, Elizabeth C.; Turner, Adrian K.; Jeffery, Nicole
2016-04-01
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. It is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
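A condensed sketch of the workflow described above: Sobol'-type (Saltelli) sampling of the parameter space, a cheap emulator standing in for the sea ice model, and variance-based indices computed from the emulator predictions. It uses SALib for illustration (module paths may vary across SALib versions), and the three-parameter toy function below is a placeholder, not CICE:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Toy stand-in for the sea ice model: three "snow-like" parameters (illustrative only).
problem = {
    "num_vars": 3,
    "names": ["snow_conductivity", "snow_grain_size", "pond_drainage"],
    "bounds": [[0.1, 0.5], [50, 500], [0.0, 1.0]],
}

def emulator(x):
    """Cheap surrogate standing in for model predictions of sea ice volume."""
    k, r, d = x
    return 10.0 - 6.0 * k + 0.002 * r - 2.0 * d + 1.5 * k * d

# Sobol'-sequence-based (Saltelli) sampling of the full parameter space.
X = saltelli.sample(problem, 1024)
Y = np.apply_along_axis(emulator, 1, X)

# First-order and total-order Sobol' sensitivity indices.
Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:18s} S1={s1:5.2f}  ST={st:5.2f}")
```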
Quantifying Key Climate Parameter Uncertainties Using an Earth System Model with a Dynamic 3D Ocean
NASA Astrophysics Data System (ADS)
Olson, R.; Sriver, R. L.; Goes, M. P.; Urban, N.; Matthews, D.; Haran, M.; Keller, K.
2011-12-01
Climate projections hinge critically on uncertain climate model parameters such as climate sensitivity, vertical ocean diffusivity and anthropogenic sulfate aerosol forcings. Climate sensitivity is defined as the equilibrium global mean temperature response to a doubling of atmospheric CO2 concentrations. Vertical ocean diffusivity parameterizes sub-grid scale ocean vertical mixing processes. These parameters are typically estimated using Intermediate Complexity Earth System Models (EMICs) that lack a full 3D representation of the oceans, thereby neglecting the effects of mixing on ocean dynamics and meridional overturning. We improve on these studies by employing an EMIC with a dynamic 3D ocean model to estimate these parameters. We carry out historical climate simulations with the University of Victoria Earth System Climate Model (UVic ESCM) varying parameters that affect climate sensitivity, vertical ocean mixing, and effects of anthropogenic sulfate aerosols. We use a Bayesian approach whereby the likelihood of each parameter combination depends on how well the model simulates surface air temperature and upper ocean heat content. We use a Gaussian process emulator to interpolate the model output to an arbitrary parameter setting. We use a Markov chain Monte Carlo method to estimate the posterior probability distribution function (pdf) of these parameters. We explore the sensitivity of the results to prior assumptions about the parameters. In addition, we estimate the relative skill of different observations to constrain the parameters. We quantify the uncertainty in parameter estimates stemming from climate variability, model and observational errors. We explore the sensitivity of key decision-relevant climate projections to these parameters. We find that climate sensitivity and vertical ocean diffusivity estimates are consistent with previously published results. The climate sensitivity pdf is strongly affected by the prior assumptions, and by the scaling parameter for the aerosols. The estimation method is computationally fast and can be used with more complex models where climate sensitivity is diagnosed rather than prescribed. The parameter estimates can be used to create probabilistic climate projections using the UVic ESCM model in future studies.
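A schematic of the emulator-plus-MCMC estimation described above, reduced to a single uncertain parameter and a synthetic observation. Everything here is a toy stand-in for the UVic ESCM workflow; the design points, response, observation, and prior range are assumptions made for illustration:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)

# 1. "Model runs": warming response at a few climate-sensitivity settings (toy numbers).
cs_design = np.linspace(1.5, 6.0, 8).reshape(-1, 1)
warming_runs = 0.6 * cs_design.ravel() + 0.05 * cs_design.ravel() ** 2

# 2. Gaussian process emulator interpolating model output at arbitrary parameter values.
gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(1e-4)).fit(cs_design, warming_runs)

# 3. Metropolis MCMC for the posterior of climate sensitivity given one observation.
obs, obs_sigma = 2.4, 0.3              # synthetic "observed" warming and its error

def log_post(cs):
    if not 0.5 < cs < 10.0:            # flat prior over a plausible range (assumed)
        return -np.inf
    pred = gp.predict(np.array([[cs]]))[0]
    return -0.5 * ((obs - pred) / obs_sigma) ** 2

chain, cur = [], 3.0
for _ in range(20_000):
    prop = cur + rng.normal(0, 0.3)
    if np.log(rng.uniform()) < log_post(prop) - log_post(cur):
        cur = prop
    chain.append(cur)

print("posterior mean climate sensitivity:", np.mean(chain[5000:]))
```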
NASA Technical Reports Server (NTRS)
Rosero, Enrique; Yang, Zong-Liang; Wagener, Thorsten; Gulden, Lindsey E.; Yatheendradas, Soni; Niu, Guo-Yue
2009-01-01
We use sensitivity analysis to identify the parameters that are most responsible for shaping land surface model (LSM) simulations and to understand the complex interactions in three versions of the Noah LSM: the standard version (STD), a version enhanced with a simple groundwater module (GW), and a version augmented by a dynamic phenology module (DV). We use warm season, high-frequency, near-surface states and turbulent fluxes collected over nine sites in the US Southern Great Plains. We quantify changes in the pattern of sensitive parameters, the amount and nature of the interaction between parameters, and the covariance structure of the distribution of behavioral parameter sets. Using Sobol's total and first-order sensitivity indices, we show that very few parameters directly control the variance of the model output. Significant parameter interaction occurs, so that not only do the optimal parameter values differ between models, but the relationships between parameters also change. GW decreases parameter interaction and appears to improve model realism, especially at wetter sites. DV increases parameter interaction and decreases identifiability, implying it is overparameterized and/or underconstrained. A case study at a wet site shows GW has two functional modes: one that mimics STD and a second in which GW improves model function by decoupling direct evaporation and baseflow. Unsupervised classification of the posterior distributions of behavioral parameter sets cannot group similar sites based solely on soil or vegetation type, helping to explain why transferability between sites and models is not straightforward. This evidence suggests a priori assignment of parameters should also consider climatic differences.
Uncertainty Quantification and Sensitivity Analysis in the CICE v5.1 Sea Ice Model
NASA Astrophysics Data System (ADS)
Urrego-Blanco, J. R.; Urban, N. M.
2015-12-01
Changes in the high latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with mid latitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. In this work we characterize parametric uncertainty in the Los Alamos Sea Ice Model (CICE) and quantify the sensitivity of sea ice area, extent and volume with respect to uncertainty in about 40 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one-at-a-time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 40-dimensional parameter space. This approach requires a very large number of model evaluations, which are expensive to run. A more computationally efficient approach is implemented by training and cross-validating a surrogate (emulator) of the sea ice model with model output from 400 model runs. The emulator is used to make predictions of sea ice extent, area, and volume at several model configurations, which are then used to compute the Sobol' sensitivity indices of the 40 parameters. A ranking based on the sensitivity indices indicates that model output is most sensitive to snow parameters such as conductivity and grain size, and the drainage of melt ponds. The main effects and interactions among the most influential parameters are also estimated by a non-parametric regression technique based on generalized additive models. It is recommended that research be prioritized toward more accurately determining the values of these most influential parameters by observational studies or by improving existing parameterizations in the sea ice model.
Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheung, WanYin; Zhang, Jie; Florita, Anthony
2015-12-08
Uncertainties associated with solar forecasts present challenges to maintain grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature contributed only approximately 0.12% to the NRMSE of the power output, as opposed to 7.44% from the forecasted solar irradiance.
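The normalization convention for NRMSE varies between studies; the sketch below uses a common form (RMSE divided by plant capacity or by the observed range) and is not necessarily the exact definition used in the report. The sample values are invented for illustration:

```python
import numpy as np

def nrmse(forecast, observed, norm=None):
    """RMSE normalized by `norm` (e.g. plant capacity or the observed range)."""
    rmse = np.sqrt(np.mean((np.asarray(forecast) - np.asarray(observed)) ** 2))
    if norm is None:
        norm = np.max(observed) - np.min(observed)
    return rmse / norm

# Example: hourly power output (kW) for a 51-kW plant, normalized by capacity.
obs = [0, 5, 18, 32, 40, 35, 20, 6, 0]
fcst = [0, 7, 15, 30, 43, 33, 22, 4, 0]
print(f"NRMSE = {100 * nrmse(fcst, obs, norm=51.0):.1f}%")
```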
Using sensitivity analysis in model calibration efforts
Tiedeman, Claire; Hill, Mary C.
2003-01-01
In models of natural and engineered systems, sensitivity analysis can be used to assess relations among system state observations, model parameters, and model predictions. The model itself links these three entities, and model sensitivities can be used to quantify the links. Sensitivities are defined as the derivatives of simulated quantities (such as simulated equivalents of observations, or model predictions) with respect to model parameters. We present four measures calculated from model sensitivities that quantify the observation-parameter-prediction links and that are especially useful during the calibration and prediction phases of modeling. These four measures are composite scaled sensitivities (CSS), prediction scaled sensitivities (PSS), the value of improved information (VOII) statistic, and the observation prediction (OPR) statistic. These measures can be used to help guide initial calibration of models, collection of field data beneficial to model predictions, and recalibration of models updated with new field information. Once model sensitivities have been calculated, each of the four measures requires minimal computational effort. We apply the four measures to a three-layer MODFLOW-2000 (Harbaugh et al., 2000; Hill et al., 2000) model of the Death Valley regional ground-water flow system (DVRFS), located in southern Nevada and California. D’Agnese et al. (1997, 1999) developed and calibrated the model using nonlinear regression methods. Figure 1 shows some of the observations, parameters, and predictions for the DVRFS model. Observed quantities include hydraulic heads and spring flows. The 23 defined model parameters include hydraulic conductivities, vertical anisotropies, recharge rates, evapotranspiration rates, and pumpage. Predictions of interest for this regional-scale model are advective transport paths from potential contamination sites underlying the Nevada Test Site and Yucca Mountain.
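As a sketch of the first of these measures, the composite scaled sensitivity for a parameter is commonly defined (in the Hill and Tiedeman formulation used with MODFLOW-2000) as the root-mean-square of the dimensionless scaled sensitivities over all observations; the formula and the toy numbers below are illustrative, not taken from the DVRFS model:

```python
import numpy as np

def composite_scaled_sensitivity(jacobian, params, weights):
    """
    Composite scaled sensitivity per parameter (sketch of the common definition):
    css_j = sqrt( mean_i [ (dy_i/db_j) * b_j * w_i**0.5 ]**2 )

    jacobian : (n_obs, n_params) derivatives of simulated equivalents w.r.t. parameters
    params   : (n_params,) parameter values
    weights  : (n_obs,) observation weights (e.g. 1 / variance of observation error)
    """
    dss = jacobian * params[np.newaxis, :] * np.sqrt(weights)[:, np.newaxis]
    return np.sqrt(np.mean(dss ** 2, axis=0))

# Toy example: 4 head observations, 2 parameters (hydraulic conductivity, recharge).
J = np.array([[0.8, 1.2], [0.5, 2.0], [0.9, 0.7], [0.4, 1.5]])
b = np.array([10.0, 0.001])
w = np.array([1.0, 1.0, 0.25, 0.25])
print(composite_scaled_sensitivity(J, b, w))
```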
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Heng; Chen, Xingyuan; Ye, Ming
Sensitivity analysis is an important tool for quantifying uncertainty in the outputs of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a hierarchical sensitivity analysis method that (1) constructs an uncertainty hierarchy by analyzing the input uncertainty sources, and (2) accounts for the spatial correlation among parameters at each level of the hierarchy using geostatistical tools. The contribution of the uncertainty source at each hierarchy level is measured by sensitivity indices calculated using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and the permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally as driven by the dynamic interaction between groundwater and river water at the site. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed parameters.
Song, Jiyun; Wang, Zhi-Hua
2015-01-01
An advanced Markov chain Monte Carlo approach called Subset Simulation, described in Au and Beck (2001) [1], was used to quantify parameter uncertainty and model sensitivity of the urban land-atmospheric framework, viz. the coupled urban canopy model-single column model (UCM-SCM). The results show that the atmospheric dynamics are sensitive to land surface conditions. The most sensitive parameters are dimensional parameters, i.e. roof width, aspect ratio, and roughness length of heat and momentum, since these parameters control the magnitude of the sensible heat flux. The relatively insensitive parameters are hydrological parameters, since the lawns or green roofs in urban areas are regularly irrigated so that the water availability for evaporation is never constrained. PMID:26702421
NASA Astrophysics Data System (ADS)
Maina, Fadji Zaouna; Guadagnini, Alberto
2018-01-01
We study the contribution of typically uncertain subsurface flow parameters to gravity changes that can be recorded during pumping tests in unconfined aquifers. We do so in the framework of a Global Sensitivity Analysis and quantify the effects of uncertainty of such parameters on the first four statistical moments of the probability distribution of gravimetric variations induced by the operation of the well. System parameters are grouped into two main categories, respectively, governing groundwater flow in the unsaturated and saturated portions of the domain. We ground our work on the three-dimensional analytical model proposed by Mishra and Neuman (2011), which fully takes into account the richness of the physical process taking place across the unsaturated and saturated zones and storage effects in a finite radius pumping well. The relative influence of model parameter uncertainties on drawdown, moisture content, and gravity changes are quantified through (a) the Sobol' indices, derived from a classical decomposition of variance and (b) recently developed indices quantifying the relative contribution of each uncertain model parameter to the (ensemble) mean, skewness, and kurtosis of the model output. Our results document (i) the importance of the effects of the parameters governing the unsaturated flow dynamics on the mean and variance of local drawdown and gravity changes; (ii) the marked sensitivity (as expressed in terms of the statistical moments analyzed) of gravity changes to the employed water retention curve model parameter, specific yield, and storage, and (iii) the influential role of hydraulic conductivity of the unsaturated and saturated zones to the skewness and kurtosis of gravimetric variation distributions. The observed temporal dynamics of the strength of the relative contribution of system parameters to gravimetric variations suggest that gravity data have a clear potential to provide useful information for estimating the key hydraulic parameters of the system.
Material and morphology parameter sensitivity analysis in particulate composite materials
NASA Astrophysics Data System (ADS)
Zhang, Xiaoyu; Oskay, Caglar
2017-12-01
This manuscript presents a novel parameter sensitivity analysis framework for damage and failure modeling of particulate composite materials subjected to dynamic loading. The proposed framework employs global sensitivity analysis to study the variance in the failure response as a function of model parameters. In view of the computational complexity of performing thousands of detailed microstructural simulations to characterize sensitivities, Gaussian process (GP) surrogate modeling is incorporated into the framework. In order to capture the discontinuity in response surfaces, the GP models are integrated with a support vector machine classification algorithm that identifies the discontinuities within response surfaces. The proposed framework is employed to quantify variability and sensitivities in the failure response of polymer bonded particulate energetic materials under dynamic loads to material properties and morphological parameters that define the material microstructure. Particular emphasis is placed on the identification of sensitivity to interfaces between the polymer binder and the energetic particles. The proposed framework has been demonstrated to identify the most consequential material and morphological parameters under vibrational and impact loads.
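One way to realize the GP-plus-classifier idea described above (a generic scikit-learn sketch, not the authors' implementation): an SVM classifier first predicts which side of the discontinuity a parameter point falls on, and a separate GP surrogate is fit to each branch of the response surface. The two-parameter toy response and the threshold at 0.5 are assumptions made for illustration:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(3)

# Toy response with a discontinuity at x0 = 0.5 (stand-in for a failure threshold).
X = rng.uniform(0, 1, size=(200, 2))
y = np.where(X[:, 0] < 0.5, 1.0 + X[:, 1], 5.0 + 2.0 * X[:, 1])
labels = (X[:, 0] >= 0.5).astype(int)

# 1. Classifier locates the discontinuity in parameter space.
clf = SVC(kernel="rbf").fit(X, labels)

# 2. Separate GP surrogate per branch of the response surface.
gps = {c: GaussianProcessRegressor().fit(X[labels == c], y[labels == c]) for c in (0, 1)}

def surrogate(x_new):
    """Predict: classify the branch, then evaluate that branch's GP."""
    x_new = np.atleast_2d(x_new)
    branch = clf.predict(x_new)
    return np.array([gps[b].predict(x[None, :])[0] for b, x in zip(branch, x_new)])

print(surrogate([[0.2, 0.3], [0.8, 0.3]]))
```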
Sensitivity Analysis of the Land Surface Model NOAH-MP for Different Model Fluxes
NASA Astrophysics Data System (ADS)
Mai, Juliane; Thober, Stephan; Samaniego, Luis; Branch, Oliver; Wulfmeyer, Volker; Clark, Martyn; Attinger, Sabine; Kumar, Rohini; Cuntz, Matthias
2015-04-01
Land Surface Models (LSMs) use a plenitude of process descriptions to represent the carbon, energy and water cycles. They are highly complex and computationally expensive. Practitioners, however, are often only interested in specific outputs of the model such as latent heat or surface runoff. In model applications like parameter estimation, the most important parameters are then chosen by experience or expert knowledge. Hydrologists interested in surface runoff therefore choose mostly soil parameters, while biogeochemists interested in carbon fluxes focus on vegetation parameters. However, this might lead to the omission of parameters that are important, for example, through strong interactions with the parameters chosen. It also happens during model development that some process descriptions contain fixed values, which are supposedly unimportant parameters. However, these hidden parameters normally remain undetected although they might be highly relevant during model calibration. Sensitivity analyses are used to identify informative model parameters for a specific model output. Standard methods for sensitivity analysis such as Sobol indices require large numbers of model evaluations, specifically in the case of many model parameters. We hence propose to first use a recently developed, inexpensive sequential screening method based on Elementary Effects that has proven to identify the relevant informative parameters; a generic sketch of this two-stage idea is given after this abstract. This reduces the number of parameters and therefore the number of model evaluations for subsequent analyses such as sensitivity analysis or model calibration. In this study, we quantify parametric sensitivities of the land surface model NOAH-MP, which is a state-of-the-art LSM used at regional scale as the land surface scheme of the atmospheric Weather Research and Forecasting Model (WRF). NOAH-MP contains multiple process parameterizations yielding a considerable number of parameters (~100). Sensitivities for the three model outputs (a) surface runoff, (b) soil drainage and (c) latent heat are calculated on twelve Model Parameter Estimation Experiment (MOPEX) catchments ranging in size from 1020 to 4421 km2. This allows investigation of parametric sensitivities for distinct hydro-climatic characteristics, emphasizing different land-surface processes. The sequential screening identifies the most informative parameters of NOAH-MP for different model output variables. The number of parameters is reduced substantially, to approximately 25, for all three model outputs. The subsequent Sobol method quantifies the sensitivities of these informative parameters. The study demonstrates the existence of sensitive, important parameters in almost all parts of the model irrespective of the considered output. Soil parameters, for example, are informative for all three output variables, whereas plant parameters are informative not only for latent heat but also for soil drainage, because soil drainage is strongly coupled to transpiration through the soil water balance. These results contrast with the choice of only soil parameters in hydrological studies and only plant parameters in biogeochemical ones. The sequential screening identified several important hidden parameters that carry large sensitivities and hence have to be included during model calibration.
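A compressed sketch of the screen-then-quantify strategy: Elementary Effects (Morris) screening first, then variance-based Sobol' indices only for the retained parameters. The five-parameter toy function stands in for NOAH-MP, SALib's Morris and Sobol' routines are used for illustration, and the screening threshold is an assumption:

```python
import numpy as np
from SALib.sample import morris as morris_sample, saltelli
from SALib.analyze import morris as morris_analyze, sobol

problem = {
    "num_vars": 5,
    "names": [f"p{i}" for i in range(5)],
    "bounds": [[0.0, 1.0]] * 5,
}

def model(x):                        # toy stand-in for one NOAH-MP output, e.g. latent heat
    return 4 * x[0] + 2 * x[1] * x[3] + 0.01 * x[2] + 0.02 * x[4]

# Stage 1: Elementary Effects (Morris) screening.
Xm = morris_sample.sample(problem, N=100, num_levels=4)
Ym = np.apply_along_axis(model, 1, Xm)
res = morris_analyze.analyze(problem, Xm, Ym)
keep = [n for n, mu in zip(problem["names"], res["mu_star"]) if mu > 0.1]

# Stage 2: Sobol' indices only for the screened-in (informative) parameters.
sub = {"num_vars": len(keep), "names": keep, "bounds": [[0.0, 1.0]] * len(keep)}
fixed = {n: 0.5 for n in problem["names"]}   # screened-out parameters held at defaults

def model_sub(xs):
    vals = dict(fixed, **dict(zip(keep, xs)))
    return model([vals[n] for n in problem["names"]])

Xs = saltelli.sample(sub, 512)
Ys = np.apply_along_axis(model_sub, 1, Xs)
print(keep, sobol.analyze(sub, Ys)["ST"])
```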
NASA Astrophysics Data System (ADS)
Khawli, Toufik Al; Gebhardt, Sascha; Eppelt, Urs; Hermanns, Torsten; Kuhlen, Torsten; Schulz, Wolfgang
2016-06-01
In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.
Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks
Arampatzis, Georgios; Katsoulakis, Markos A.; Pantazis, Yannis
2015-01-01
Existing sensitivity analysis approaches are not able to efficiently handle stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis model with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and, for the remaining potentially sensitive parameters, accurately estimate the sensitivities. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters and the number of sensitive parameters. PMID:26161544
Sensitivity-based virtual fields for the non-linear virtual fields method
NASA Astrophysics Data System (ADS)
Marek, Aleksander; Davis, Frances M.; Pierron, Fabrice
2017-09-01
The virtual fields method is an approach to inversely identify material parameters using full-field deformation data. In this manuscript, a new set of automatically-defined virtual fields for non-linear constitutive models has been proposed. These new sensitivity-based virtual fields reduce the influence of noise on the parameter identification. The sensitivity-based virtual fields were applied to a numerical example involving small strain plasticity; however, the general formulation derived for these virtual fields is applicable to any non-linear constitutive model. To quantify the improvement offered by these new virtual fields, they were compared with stiffness-based and manually defined virtual fields. The proposed sensitivity-based virtual fields were consistently able to identify plastic model parameters and outperform the stiffness-based and manually defined virtual fields when the data was corrupted by noise.
NASA Astrophysics Data System (ADS)
Valade, A.; Ciais, P.; Vuichard, N.; Viovy, N.; Huth, N.; Marin, F.; Martiné, J.-F.
2014-01-01
Agro-Land Surface Models (agro-LSM) have been developed from the integration of specific crop processes into large-scale generic land surface models that allow calculating the spatial distribution and variability of energy, water and carbon fluxes within the soil-vegetation-atmosphere continuum. When developing agro-LSM models, particular attention must be given to the effects of crop phenology and management on the turbulent fluxes exchanged with the atmosphere, and the underlying water and carbon pools. A part of the uncertainty of agro-LSM models is related to their usually large number of parameters. In this study, we quantify the parameter-values uncertainty in the simulation of sugar cane biomass production with the agro-LSM ORCHIDEE-STICS, using a multi-regional approach with data from sites in Australia, La Réunion and Brazil. In ORCHIDEE-STICS, two models are chained: STICS, an agronomy model that calculates phenology and management, and ORCHIDEE, a land surface model that calculates biomass and other ecosystem variables forced by STICS' phenology. First, the parameters that dominate the uncertainty of simulated biomass at harvest date are determined through a screening of 67 different parameters of both STICS and ORCHIDEE on a multi-site basis. Secondly, the uncertainty of harvested biomass attributable to those most sensitive parameters is quantified and specifically attributed to either STICS (phenology, management) or to ORCHIDEE (other ecosystem variables including biomass) through distinct Monte Carlo runs. The uncertainty on parameter values is constrained using observations by calibrating the model independently at seven sites. In a third step, a sensitivity analysis is carried out by varying the most sensitive parameters to investigate their effects at continental scale. A Monte Carlo sampling method associated with the calculation of partial ranked correlation coefficients is used to quantify the sensitivity of harvested biomass to input parameters on a continental scale across the large regions of intensive sugar cane cultivation in Australia and Brazil. Ten parameters driving most of the uncertainty in the ORCHIDEE-STICS modeled biomass at the 7 sites are identified by the screening procedure. We found that the 10 most sensitive parameters control phenology (maximum rate of increase of LAI) and root uptake of water and nitrogen (root profile and root growth rate, nitrogen stress threshold) in STICS, and photosynthesis (optimal temperature of photosynthesis, optimal carboxylation rate), radiation interception (extinction coefficient), and transpiration and respiration (stomatal conductance, growth and maintenance respiration coefficients) in ORCHIDEE. We find that the optimal carboxylation rate and photosynthesis temperature parameters contribute most to the uncertainty in harvested biomass simulations at site scale. The spatial variation of the ranked correlation between input parameters and modeled biomass at harvest is well explained by rain and temperature drivers, suggesting different climate-mediated sensitivities of modeled sugar cane yield to the model parameters for Australia and Brazil. This study reveals the spatial and temporal patterns of uncertainty variability for a highly parameterized agro-LSM and calls for more systematic uncertainty analyses of such models.
NASA Astrophysics Data System (ADS)
Valade, A.; Ciais, P.; Vuichard, N.; Viovy, N.; Caubel, A.; Huth, N.; Marin, F.; Martiné, J.-F.
2014-06-01
Agro-land surface models (agro-LSM) have been developed from the integration of specific crop processes into large-scale generic land surface models that allow calculating the spatial distribution and variability of energy, water and carbon fluxes within the soil-vegetation-atmosphere continuum. When developing agro-LSM models, particular attention must be given to the effects of crop phenology and management on the turbulent fluxes exchanged with the atmosphere, and the underlying water and carbon pools. A part of the uncertainty of agro-LSM models is related to their usually large number of parameters. In this study, we quantify the parameter-values uncertainty in the simulation of sugarcane biomass production with the agro-LSM ORCHIDEE-STICS, using a multi-regional approach with data from sites in Australia, La Réunion and Brazil. In ORCHIDEE-STICS, two models are chained: STICS, an agronomy model that calculates phenology and management, and ORCHIDEE, a land surface model that calculates biomass and other ecosystem variables forced by STICS phenology. First, the parameters that dominate the uncertainty of simulated biomass at harvest date are determined through a screening of 67 different parameters of both STICS and ORCHIDEE on a multi-site basis. Secondly, the uncertainty of harvested biomass attributable to those most sensitive parameters is quantified and specifically attributed to either STICS (phenology, management) or to ORCHIDEE (other ecosystem variables including biomass) through distinct Monte Carlo runs. The uncertainty on parameter values is constrained using observations by calibrating the model independently at seven sites. In a third step, a sensitivity analysis is carried out by varying the most sensitive parameters to investigate their effects at continental scale. A Monte Carlo sampling method associated with the calculation of partial ranked correlation coefficients is used to quantify the sensitivity of harvested biomass to input parameters on a continental scale across the large regions of intensive sugarcane cultivation in Australia and Brazil. The ten parameters driving most of the uncertainty in the ORCHIDEE-STICS modeled biomass at the 7 sites are identified by the screening procedure. We found that the 10 most sensitive parameters control phenology (maximum rate of increase of LAI) and root uptake of water and nitrogen (root profile and root growth rate, nitrogen stress threshold) in STICS, and photosynthesis (optimal temperature of photosynthesis, optimal carboxylation rate), radiation interception (extinction coefficient), and transpiration and respiration (stomatal conductance, growth and maintenance respiration coefficients) in ORCHIDEE. We find that the optimal carboxylation rate and photosynthesis temperature parameters contribute most to the uncertainty in harvested biomass simulations at site scale. The spatial variation of the ranked correlation between input parameters and modeled biomass at harvest is well explained by rain and temperature drivers, suggesting different climate-mediated sensitivities of modeled sugarcane yield to the model parameters, for Australia and Brazil. This study reveals the spatial and temporal patterns of uncertainty variability for a highly parameterized agro-LSM and calls for more systematic uncertainty analyses of such models.
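A minimal sketch of the partial rank correlation coefficient used in the continental-scale step above (generic formulation, not the authors' code): rank-transform the Monte Carlo inputs and output, then, for each parameter, correlate the residuals left after regressing out all the other parameters. The three-parameter sample and the output function are placeholders:

```python
import numpy as np
from scipy.stats import rankdata

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y."""
    R = np.column_stack([rankdata(col) for col in X.T])
    ry = rankdata(y)
    out = []
    for j in range(R.shape[1]):
        others = np.delete(R, j, axis=1)
        A = np.column_stack([np.ones(len(ry)), others])   # intercept + other parameters
        res_x = R[:, j] - A @ np.linalg.lstsq(A, R[:, j], rcond=None)[0]
        res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
        out.append(np.corrcoef(res_x, res_y)[0, 1])
    return np.array(out)

# Toy Monte Carlo sample: 3 "model parameters" and a biomass-like output.
rng = np.random.default_rng(4)
X = rng.uniform(size=(2000, 3))
y = 3 * X[:, 0] - X[:, 1] ** 2 + 0.1 * rng.normal(size=2000)
print(prcc(X, y))   # strong positive, moderate negative, near zero
```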
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Huiying; Hou, Zhangshuan; Huang, Maoyi
The Community Land Model (CLM) represents physical, chemical, and biological processes of the terrestrial ecosystems that interact with climate across a range of spatial and temporal scales. As CLM includes numerous sub-models and associated parameters, the high-dimensional parameter space presents a formidable challenge for quantifying uncertainty and improving Earth system predictions needed to assess environmental changes and risks. This study aims to evaluate the potential of transferring hydrologic model parameters in CLM through sensitivity analyses and classification across watersheds from the Model Parameter Estimation Experiment (MOPEX) in the United States. The sensitivity of CLM-simulated water and energy fluxes to hydrological parameters across 431 MOPEX basins is first examined using an efficient stochastic sampling-based sensitivity analysis approach. Linear, interaction, and high-order nonlinear impacts are all identified via statistical tests and stepwise backward removal parameter screening. The basins are then classified according to their parameter sensitivity patterns (internal attributes), as well as their hydrologic indices/attributes (external hydrologic factors) separately, using a principal component analysis (PCA) and expectation-maximization (EM)-based clustering approach. Similarities and differences among the parameter sensitivity-based classification system (S-Class), the hydrologic indices-based classification (H-Class), and the Koppen climate classification system (K-Class) are discussed. Within each S-Class with similar parameter sensitivity characteristics, similar inversion modeling setups can be used for parameter calibration, and the parameters and their contribution or significance to water and energy cycling may also be more transferrable. This classification study provides guidance on identifiable parameters, and on parameterization and inverse model design for CLM, but the methodology is applicable to other models. Inverting parameters at representative sites belonging to the same class can significantly reduce parameter calibration efforts.
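A generic sketch of the PCA-plus-EM clustering step described above (scikit-learn's GaussianMixture is fit by expectation-maximization). The sensitivity-index matrix here is random placeholder data, not the MOPEX results, and the component counts are arbitrary:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)

# Placeholder: sensitivity indices of 431 basins to 12 hydrologic parameters.
sens = rng.random((431, 12))

# Reduce each basin's sensitivity pattern to a few principal components ...
scores = PCA(n_components=3).fit_transform(sens)

# ... then cluster basins with an EM-fitted Gaussian mixture (the S-Class grouping).
gmm = GaussianMixture(n_components=4, random_state=0).fit(scores)
s_class = gmm.predict(scores)
print(np.bincount(s_class))   # number of basins per sensitivity class
```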
NASA Astrophysics Data System (ADS)
Badawy, B.; Fletcher, C. G.
2017-12-01
The parameterization of snow processes in land surface models is an important source of uncertainty in climate simulations. Quantifying the importance of snow-related parameters, and their uncertainties, may therefore lead to better understanding and quantification of uncertainty within integrated earth system models. However, quantifying the uncertainty arising from parameterized snow processes is challenging due to the high-dimensional parameter space, poor observational constraints, and parameter interaction. In this study, we investigate the sensitivity of the land simulation to uncertainty in snow microphysical parameters in the Canadian LAnd Surface Scheme (CLASS) using an uncertainty quantification (UQ) approach. A set of training cases (n=400) from CLASS is used to sample each parameter across its full range of empirical uncertainty, as determined from available observations and expert elicitation. A statistical learning model using support vector regression (SVR) is then constructed from the training data (CLASS output variables) to efficiently emulate the dynamical CLASS simulations over a much larger (n=220) set of cases. This approach is used to constrain the plausible range for each parameter using a skill score, and to identify the parameters with the largest influence on the land simulation in CLASS at global and regional scales, using a random forest (RF) permutation importance algorithm. Preliminary sensitivity tests indicate that the snow albedo refreshment threshold and the limiting snow depth, below which bare patches begin to appear, have the highest impact on snow output variables. The results also show a considerable reduction of the plausible ranges of the parameter values, and hence of their uncertainty ranges, which can lead to a significant reduction of the model uncertainty. The implementation and results of this study will be presented and discussed in detail.
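A compact sketch of the emulation-plus-importance workflow described above, using scikit-learn stand-ins; the snow parameter names, the synthetic CLASS-like output, and the training data are placeholders, not the actual study configuration:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(6)

# 400 training cases: snow parameter samples and a CLASS-like output (synthetic).
names = ["albedo_refresh_threshold", "limiting_snow_depth", "fresh_snow_density"]
X = rng.uniform(size=(400, 3))
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + 0.05 * X[:, 2] + 0.05 * rng.normal(size=400)

# Statistical emulator of the land model, trained on the dynamical runs.
emulator = SVR(kernel="rbf", C=10.0).fit(X, y)
print("emulator prediction at first case:", emulator.predict(X[:1])[0])

# Rank parameter influence with a random-forest permutation importance.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
imp = permutation_importance(rf, X, y, n_repeats=20, random_state=0)
for n, m in sorted(zip(names, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{n:28s} {m:.3f}")
```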
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Huiping; Qian, Yun; Zhao, Chun
2015-09-09
In this study, we adopt a parametric sensitivity analysis framework that integrates the quasi-Monte Carlo parameter sampling approach and a surrogate model to examine aerosol effects on the East Asian Monsoon climate simulated in the Community Atmosphere Model (CAM5). A total of 256 CAM5 simulations are conducted to quantify the model responses to the uncertain parameters associated with cloud microphysics parameterizations and aerosol (e.g., sulfate, black carbon (BC), and dust) emission factors and their interactions. Results show that the interaction terms among parameters are important for quantifying the sensitivity of fields of interest, especially precipitation, to the parameters. The relative importance of cloud-microphysics parameters and emission factors (strength) depends on the evaluation metrics or the model fields we focused on, and the presence of uncertainty in cloud microphysics imposes an additional challenge in quantifying the impact of aerosols on cloud and climate. Due to their different optical and microphysical properties and spatial distributions, sulfate, BC, and dust aerosols have very different impacts on the East Asian Monsoon through aerosol-cloud-radiation interactions. The climatic effects of aerosol do not always have a monotonic response to the change of emission factors. The spatial patterns of both sign and magnitude of aerosol-induced changes in radiative fluxes, cloud, and precipitation could be different, depending on the aerosol types, when parameters are sampled in different ranges of values. We also identify the different cloud microphysical parameters that show the most significant impact on the climatic effects induced by sulfate, BC and dust, respectively, in East Asia.
NASA Astrophysics Data System (ADS)
da Silva, Ricardo Siqueira; Kumar, Lalit; Shabani, Farzin; Picanço, Marcelo Coutinho
2018-04-01
A sensitivity analysis can categorize the levels of parameter influence on a model's output. Identifying the parameters with the most influence facilitates establishing the best parameter values for models, with useful implications for species modelling of crops and associated insect pests. The aim of this study was to quantify the response of species models through a CLIMEX sensitivity analysis. Using open-field Solanum lycopersicum and Neoleucinodes elegantalis distribution records, and 17 fitting parameters, including growth and stress parameters, model performance was compared by altering one parameter value at a time relative to the best-fit parameter values. Parameters found to have a greater effect on the model results are termed "sensitive". Through the use of two species, we show that even when the Ecoclimatic Index changes substantially through upward or downward parameter value alterations, the effect on the species is dependent on the selection of suitability categories and regions of modelling. Two parameters showed the greatest sensitivity, depending on the suitability categories of each species in the study. The results improve user understanding of which climatic factors had a greater impact on both species distributions in our model, in terms of suitability categories and areas, when parameter values were perturbed above or below the best-fit values. Thus, sensitivity analyses have the potential to provide additional information for end users, in terms of improving management, by identifying the climatic parameters to which the model is most sensitive.
NASA Astrophysics Data System (ADS)
Song, X.; Chen, X.; Dai, H.; Hammond, G. E.; Song, H. S.; Stegen, J.
2016-12-01
The hyporheic zone is an active region for biogeochemical processes such as carbon and nitrogen cycling, where groundwater and surface water with distinct biogeochemical and thermal properties mix and interact with each other. The biogeochemical dynamics within the hyporheic zone are driven by both river water and groundwater hydraulic dynamics, which are directly affected by climate change scenarios. In addition, the hydraulic and thermal properties of local sediments and the microbial and chemical processes also play important roles in biogeochemical dynamics. Thus, a comprehensive understanding of the biogeochemical processes in the hyporheic zone requires a coupled thermo-hydro-biogeochemical model. As multiple uncertainty sources are involved in the integrated model, it is important to identify its key modules/parameters through sensitivity analysis. In this study, we develop a 2D cross-section model of the hyporheic zone at the DOE Hanford site adjacent to the Columbia River and use this model to quantify module and parametric sensitivity in the assessment of climate change. To achieve this purpose, we (1) develop a facies-based groundwater flow and heat transfer model that incorporates facies geometry and heterogeneity characterized from a field data set, (2) derive multiple reaction networks/pathways from batch experiments with in-situ samples and integrate temperature-dependent reactive transport modules into the flow model, (3) assign multiple climate change scenarios to the coupled model by analyzing historical river stage data, and (4) apply a variance-based global sensitivity analysis to quantify scenario/module/parameter uncertainty at hierarchical levels. The objectives of the research are to (1) identify the key controlling factors of the coupled thermo-hydro-biogeochemical model in the assessment of climate change, and (2) quantify carbon consumption in the hyporheic zone under different climate change scenarios.
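A hedged sketch of a variance-based (Sobol') global sensitivity analysis of the sort applied above, written against the SALib package's saltelli/sobol interface (available in recent SALib releases); the three-parameter toy model and bounds are illustrative only, not the coupled thermo-hydro-biogeochemical model.

```python
# Sketch of a variance-based (Sobol') global sensitivity analysis with SALib;
# the model and parameter bounds below are toy stand-ins.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["hydraulic_conductivity", "porosity", "reaction_rate"],   # illustrative only
    "bounds": [[1e-5, 1e-3], [0.2, 0.4], [0.01, 0.1]],
}

X = saltelli.sample(problem, 1024)               # 1024 * (2*3 + 2) model evaluations

def toy_model(x):
    k, phi, r = x
    return np.log10(k) * phi + 5.0 * r           # placeholder for the coupled model output

Y = np.array([toy_model(x) for x in X])
Si = sobol.analyze(problem, Y)
print("first-order indices:", np.round(Si["S1"], 3))
print("total-order indices:", np.round(Si["ST"], 3))
```

The gap between the first-order and total-order indices is what flags parameters whose influence comes mainly through interactions.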
Sensitivity analyses for sparse-data problems-using weakly informative bayesian priors.
Hamra, Ghassan B; MacLehose, Richard F; Cole, Stephen R
2013-03-01
Sparse-data problems are common, and approaches are needed to evaluate the sensitivity of parameter estimates based on sparse data. We propose a Bayesian approach that uses weakly informative priors to quantify sensitivity of parameters to sparse data. The weakly informative prior is based on accumulated evidence regarding the expected magnitude of relationships using relative measures of disease association. We illustrate the use of weakly informative priors with an example of the association of lifetime alcohol consumption and head and neck cancer. When data are sparse and the observed information is weak, a weakly informative prior will shrink parameter estimates toward the prior mean. Additionally, the example shows that when data are not sparse and the observed information is not weak, a weakly informative prior is not influential. Advancements in implementation of Markov Chain Monte Carlo simulation make this sensitivity analysis easily accessible to the practicing epidemiologist.
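A minimal numerical illustration of the shrinkage behaviour described above, using a conjugate normal approximation for a log odds ratio; the prior width and estimates below are invented for illustration and are not taken from the cited study.

```python
# Illustration (invented numbers): a weakly informative normal prior on a log(OR)
# shrinks the estimate when data are sparse, and is negligible when data are rich.
import numpy as np

def posterior_normal(est, se, prior_mean=0.0, prior_sd=1.5):
    """Conjugate-normal posterior for a log odds ratio with known standard error."""
    w_data, w_prior = 1.0 / se**2, 1.0 / prior_sd**2
    post_mean = (w_data * est + w_prior * prior_mean) / (w_data + w_prior)
    post_sd = np.sqrt(1.0 / (w_data + w_prior))
    return post_mean, post_sd

# Sparse data: large standard error -> strong shrinkage toward the prior mean.
print(posterior_normal(est=2.0, se=1.2))   # posterior mean pulled well below 2.0
# Rich data: small standard error -> the weakly informative prior is not influential.
print(posterior_normal(est=2.0, se=0.1))   # posterior mean stays close to 2.0
```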
Sensitivity Analyses for Sparse-Data Problems—Using Weakly Informative Bayesian Priors
Hamra, Ghassan B.; MacLehose, Richard F.; Cole, Stephen R.
2013-01-01
Sparse-data problems are common, and approaches are needed to evaluate the sensitivity of parameter estimates based on sparse data. We propose a Bayesian approach that uses weakly informative priors to quantify sensitivity of parameters to sparse data. The weakly informative prior is based on accumulated evidence regarding the expected magnitude of relationships using relative measures of disease association. We illustrate the use of weakly informative priors with an example of the association of lifetime alcohol consumption and head and neck cancer. When data are sparse and the observed information is weak, a weakly informative prior will shrink parameter estimates toward the prior mean. Additionally, the example shows that when data are not sparse and the observed information is not weak, a weakly informative prior is not influential. Advancements in implementation of Markov Chain Monte Carlo simulation make this sensitivity analysis easily accessible to the practicing epidemiologist. PMID:23337241
NASA Astrophysics Data System (ADS)
Gu, Yueqing; Bourke, Vincent; Kim, Jae Gwan; Xia, Mengna; Constantinescu, Anca; Mason, Ralph P.; Liu, Hanli
2003-07-01
Three oxygen-sensitive parameters (arterial hemoglobin oxygen saturation SaO2, tumor vascular oxygenated hemoglobin concentration [HbO2], and tumor oxygen tension pO2) were measured simultaneously by three different optical techniques (pulse oximeter, near infrared spectroscopy, and FOXY) to evaluate dynamic responses of breast tumors to carbogen (5% CO2 and 95% O2) intervention. All three parameters displayed similar trends in dynamic response to carbogen challenge, but with different response times. These response times were quantified by the time constants of the exponential fitting curves, revealing the immediate and the fastest response from the arterial SaO2, followed by changes in global tumor vascular [HbO2], and delayed responses for pO2. The consistency of the three oxygen-sensitive parameters demonstrated the ability of NIRS to monitor therapeutic interventions for rat breast tumors in-vivo in real time.
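A small sketch of how such response time constants can be quantified by exponential curve fitting, using SciPy; the signal below is synthetic and merely stands in for the measured SaO2, [HbO2], or pO2 time courses.

```python
# Sketch: quantify a response time constant by fitting a mono-exponential rise
# to a dynamic signal (synthetic data standing in for a measured time course).
import numpy as np
from scipy.optimize import curve_fit

def exp_rise(t, amplitude, tau, baseline):
    """Mono-exponential response following the start of the gas challenge at t = 0."""
    return baseline + amplitude * (1.0 - np.exp(-t / tau))

t = np.linspace(0, 300, 200)                                  # seconds
true = exp_rise(t, amplitude=0.15, tau=45.0, baseline=0.60)
signal = true + 0.01 * np.random.default_rng(1).normal(size=t.size)

popt, pcov = curve_fit(exp_rise, t, signal, p0=(0.1, 30.0, 0.5))
amp, tau, base = popt
print(f"fitted time constant tau = {tau:.1f} s (+/- {np.sqrt(pcov[1, 1]):.1f} s)")
```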
Quantifying the sensitivity of post-glacial sea level change to laterally varying viscosity
NASA Astrophysics Data System (ADS)
Crawford, Ophelia; Al-Attar, David; Tromp, Jeroen; Mitrovica, Jerry X.; Austermann, Jacqueline; Lau, Harriet C. P.
2018-05-01
We present a method for calculating the derivatives of measurements of glacial isostatic adjustment (GIA) with respect to the viscosity structure of the Earth and the ice sheet history. These derivatives, or kernels, quantify the linearised sensitivity of measurements to the underlying model parameters. The adjoint method is used to enable efficient calculation of theoretically exact sensitivity kernels within laterally heterogeneous earth models that can have a range of linear or non-linear viscoelastic rheologies. We first present a new approach to calculate GIA in the time domain, which, in contrast to the more usual formulation in the Laplace domain, is well suited to continuously varying earth models and to the use of the adjoint method. Benchmarking results show excellent agreement between our formulation and previous methods. We illustrate the potential applications of the kernels calculated in this way through a range of numerical calculations relative to a spherically symmetric background model. The complex spatial patterns of the sensitivities are not intuitive, and this is the first time that such effects are quantified in an efficient and accurate manner.
Hill, Mary C.; Banta, E.R.; Harbaugh, A.W.; Anderman, E.R.
2000-01-01
This report documents the Observation, Sensitivity, and Parameter-Estimation Processes of the ground-water modeling computer program MODFLOW-2000. The Observation Process generates model-calculated values for comparison with measured, or observed, quantities. A variety of statistics is calculated to quantify this comparison, including a weighted least-squares objective function. In addition, a number of files are produced that can be used to compare the values graphically. The Sensitivity Process calculates the sensitivity of hydraulic heads throughout the model with respect to specified parameters using the accurate sensitivity-equation method. These are called grid sensitivities. If the Observation Process is active, it uses the grid sensitivities to calculate sensitivities for the simulated values associated with the observations. These are called observation sensitivities. Observation sensitivities are used to calculate a number of statistics that can be used (1) to diagnose inadequate data, (2) to identify parameters that probably cannot be estimated by regression using the available observations, and (3) to evaluate the utility of proposed new data. The Parameter-Estimation Process uses a modified Gauss-Newton method to adjust values of user-selected input parameters in an iterative procedure to minimize the value of the weighted least-squares objective function. Statistics produced by the Parameter-Estimation Process can be used to evaluate estimated parameter values; statistics produced by the Observation Process and post-processing program RESAN-2000 can be used to evaluate how accurately the model represents the actual processes; statistics produced by post-processing program YCINT-2000 can be used to quantify the uncertainty of model simulated values. Parameters are defined in the Ground-Water Flow Process input files and can be used to calculate most model inputs, such as: for explicitly defined model layers, horizontal hydraulic conductivity, horizontal anisotropy, vertical hydraulic conductivity or vertical anisotropy, specific storage, and specific yield; and, for implicitly represented layers, vertical hydraulic conductivity. In addition, parameters can be defined to calculate the hydraulic conductance of the River, General-Head Boundary, and Drain Packages; areal recharge rates of the Recharge Package; maximum evapotranspiration of the Evapotranspiration Package; pumpage or the rate of flow at defined-flux boundaries of the Well Package; and the hydraulic head at constant-head boundaries. The spatial variation of model inputs produced using defined parameters is very flexible, including interpolated distributions that require the summation of contributions from different parameters. Observations can include measured hydraulic heads or temporal changes in hydraulic heads, measured gains and losses along head-dependent boundaries (such as streams), flows through constant-head boundaries, and advective transport through the system, which generally would be inferred from measured concentrations. MODFLOW-2000 is intended for use on any computer operating system. The program consists of algorithms programmed in Fortran 90, which efficiently performs numerical calculations and is fully compatible with the newer Fortran 95. The code is easily modified to be compatible with FORTRAN 77. Coordination for multiple processors is accommodated using Message Passing Interface (MPI) commands. 
The program is designed in a modular fashion that is intended to support inclusion of new capabilities.
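For readers unfamiliar with the Gauss-Newton approach used by the Parameter-Estimation Process, a generic sketch follows; it is not MODFLOW code, it uses a finite-difference Jacobian in place of the sensitivity-equation method, and it operates on a toy two-parameter model.

```python
# Generic sketch (not MODFLOW code): Gauss-Newton minimization of a weighted
# least-squares objective between simulated and observed values.
import numpy as np

def gauss_newton(simulate, p0, obs, weights, n_iter=20):
    """Iteratively update parameters p to minimize sum(w * (obs - sim(p))**2)."""
    p = np.asarray(p0, dtype=float)
    W = np.diag(weights)
    for _ in range(n_iter):
        r = obs - simulate(p)                                # residuals
        # Finite-difference sensitivities (Jacobian) of simulated values to parameters.
        J = np.empty((obs.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = 1e-6 * max(abs(p[j]), 1.0)
            J[:, j] = (simulate(p + dp) - simulate(p - dp)) / (2 * dp[j])
        # Normal equations: (J^T W J) step = J^T W r
        step = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
        p = p + step
    return p

# Toy "model": simulated heads depend linearly on one parameter and log-linearly on another.
obs_x = np.linspace(0.0, 1.0, 10)
simulate = lambda p: p[0] * obs_x + np.log(p[1]) * (1.0 - obs_x)
obs = simulate(np.array([2.0, 5.0]))                          # synthetic observations
print(gauss_newton(simulate, p0=[1.0, 1.0], obs=obs, weights=np.ones(10)))  # -> approx [2, 5]
```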
NASA Astrophysics Data System (ADS)
Fremier, A. K.; Estrada Carmona, N.; Harper, E.; DeClerck, F.
2011-12-01
Appropriate application of complex models to estimate system behavior requires understanding the influence of model structure and parameter estimates on model output. To date, most researchers perform local sensitivity analyses, rather than global, because of computational time and quantity of data produced. Local sensitivity analyses are limited in quantifying the higher order interactions among parameters, which could lead to incomplete analysis of model behavior. To address this concern, we performed a GSA on a commonly applied equation for soil loss - the Revised Universal Soil Loss Equation. USLE is an empirical model built on plot-scale data from the USA and the Revised version (RUSLE) includes improved equations for wider conditions, with 25 parameters grouped into six factors to estimate long-term plot and watershed scale soil loss. Despite RUSLE's widespread application, a complete sensitivity analysis has yet to be performed. In this research, we applied a GSA to plot and watershed scale data from the US and Costa Rica to parameterize the RUSLE in an effort to understand the relative importance of model factors and parameters across wide environmental space. We analyzed the GSA results using Random Forest, a statistical approach to evaluate parameter importance accounting for the higher order interactions, and used Classification and Regression Trees to show the dominant trends in complex interactions. In all GSA calculations the management of cover crops (C factor) ranks the highest among factors (compared to rain-runoff erosivity, topography, support practices, and soil erodibility). This is counter to previous sensitivity analyses where the topographic factor was determined to be the most important. The GSA finding is consistent across multiple model runs, including data from the US, Costa Rica, and a synthetic dataset of the widest theoretical space. The three most important parameters were: Mass density of live and dead roots found in the upper inch of soil (C factor), slope angle (L and S factor), and percentage of land area covered by surface cover (C factor). Our findings give further support to the importance of vegetation as a vital ecosystem service provider - soil loss reduction. Concurrent, progress is already been made in Costa Rica, where dam managers are moving forward on a Payment for Ecosystem Services scheme to help keep private lands forested and to improve crop management through targeted investments. Use of complex watershed models, such as RUSLE can help managers quantify the effect of specific land use changes. Moreover, effective land management of vegetation has other important benefits, such as bundled ecosystem services (e.g. pollination, habitat connectivity, etc) and improvements of communities' livelihoods.
Janisse, Kevyn; Doucet, Stéphanie M.
2017-01-01
Perceptual models of animal vision have greatly contributed to our understanding of animal-animal and plant-animal communication. The receptor-noise model of color contrasts has been central to this research as it quantifies the difference between two colors for any visual system of interest. However, if the properties of the visual system are unknown, assumptions regarding parameter values must be made, generally with unknown consequences. In this study, we conduct a sensitivity analysis of the receptor-noise model using avian visual system parameters to systematically investigate the influence of variation in light environment, photoreceptor sensitivities, photoreceptor densities, and light transmission properties of the ocular media and the oil droplets. We calculated the chromatic contrast of 15 plumage patches to quantify a dichromatism score for 70 species of Galliformes, a group of birds that display a wide range of sexual dimorphism. We found that the photoreceptor densities and the wavelength of maximum sensitivity of the short-wavelength-sensitive photoreceptor 1 (SWS1) can change dichromatism scores by 50% to 100%. In contrast, the light environment, transmission properties of the oil droplets, transmission properties of the ocular media, and the peak sensitivities of the cone photoreceptors had a smaller impact on the scores. By investigating the effect of varying two or more parameters simultaneously, we further demonstrate that improper parameterization could lead to differences between calculated and actual contrasts of more than 650%. Our findings demonstrate that improper parameterization of tetrachromatic visual models can have very large effects on measures of dichromatism scores, potentially leading to erroneous inferences. We urge more complete characterization of avian retinal properties and recommend that researchers either determine whether their species of interest possess an ultraviolet or near-ultraviolet sensitive SWS1 photoreceptor, or present models for both. PMID:28076391
Mutel, Christopher L; de Baan, Laura; Hellweg, Stefanie
2013-06-04
Comprehensive sensitivity analysis is a significant tool to interpret and improve life cycle assessment (LCA) models, but is rarely performed. Sensitivity analysis will increase in importance as inventory databases become regionalized, increasing the number of system parameters, and parametrized, adding complexity through variables and nonlinear formulas. We propose and implement a new two-step approach to sensitivity analysis. First, we identify parameters with high global sensitivities for further examination and analysis with a screening step, the method of elementary effects. Second, the more computationally intensive contribution to variance test is used to quantify the relative importance of these parameters. The two-step sensitivity test is illustrated on a regionalized, nonlinear case study of the biodiversity impacts from land use of cocoa production, including a worldwide cocoa products trade model. Our simplified trade model can be used for transformable commodities where one is assessing market shares that vary over time. In the case study, the highly uncertain characterization factors for the Ivory Coast and Ghana contributed more than 50% of variance for almost all countries and years examined. The two-step sensitivity test allows for the interpretation, understanding, and improvement of large, complex, and nonlinear LCA systems.
Complex multifractal nature in Mycobacterium tuberculosis genome
Mandal, Saurav; Roychowdhury, Tanmoy; Chirom, Keilash; Bhattacharya, Alok; Brojen Singh, R. K.
2017-01-01
The multifractal and long-range correlation (C(r)) properties of strings, such as nucleotide sequences, can be useful parameters for identification of underlying patterns and variations. In this study, C(r) and the multifractal singularity function f(α) have been used to study variations in the genomes of the pathogenic bacterium Mycobacterium tuberculosis. Genomic sequences of M. tuberculosis isolates displayed significant variations in C(r) and f(α), reflecting inherent differences in sequences among isolates. M. tuberculosis isolates can be categorised into different subgroups based on sensitivity to drugs: DS (drug sensitive isolates), MDR (multi-drug resistant isolates) and XDR (extremely drug resistant isolates). C(r) follows significantly different scaling rules in different subgroups of isolates, but all the isolates follow a one-parameter scaling law. The richness in complexity of each subgroup can be quantified by multifractal parameters, which display a pattern in which XDR isolates have the highest values and drug sensitive isolates the lowest. Therefore, C(r) and multifractal functions can be useful parameters for the analysis of genomic sequences. PMID:28440326
Complex multifractal nature in Mycobacterium tuberculosis genome
NASA Astrophysics Data System (ADS)
Mandal, Saurav; Roychowdhury, Tanmoy; Chirom, Keilash; Bhattacharya, Alok; Brojen Singh, R. K.
2017-04-01
The multifractal and long-range correlation (C(r)) properties of strings, such as nucleotide sequences, can be useful parameters for identification of underlying patterns and variations. In this study, C(r) and the multifractal singularity function f(α) have been used to study variations in the genomes of the pathogenic bacterium Mycobacterium tuberculosis. Genomic sequences of M. tuberculosis isolates displayed significant variations in C(r) and f(α), reflecting inherent differences in sequences among isolates. M. tuberculosis isolates can be categorised into different subgroups based on sensitivity to drugs: DS (drug sensitive isolates), MDR (multi-drug resistant isolates) and XDR (extremely drug resistant isolates). C(r) follows significantly different scaling rules in different subgroups of isolates, but all the isolates follow a one-parameter scaling law. The richness in complexity of each subgroup can be quantified by multifractal parameters, which display a pattern in which XDR isolates have the highest values and drug sensitive isolates the lowest. Therefore, C(r) and multifractal functions can be useful parameters for the analysis of genomic sequences.
Quantifying Groundwater Model Uncertainty
NASA Astrophysics Data System (ADS)
Hill, M. C.; Poeter, E.; Foglia, L.
2007-12-01
Groundwater models are characterized by the (a) processes simulated, (b) boundary conditions, (c) initial conditions, (d) method of solving the equation, (e) parameterization, and (f) parameter values. Models are related to the system of concern using data, some of which form the basis of observations used most directly, through objective functions, to estimate parameter values. Here we consider situations in which parameter values are determined by minimizing an objective function. Other methods of model development are not considered because their ad hoc nature generally prohibits clear quantification of uncertainty. Quantifying prediction uncertainty ideally includes contributions from (a) to (f). The parameter values of (f) tend to be continuous with respect to both the simulated equivalents of the observations and the predictions, while many aspects of (a) through (e) are discrete. This fundamental difference means that there are options for evaluating the uncertainty related to parameter values that generally do not exist for other aspects of a model. While the methods available for (a) to (e) can be used for the parameter values (f), the inferential methods uniquely available for (f) generally are less computationally intensive and often can be used to considerable advantage. However, inferential approaches require calculation of sensitivities. Whether the numerical accuracy and stability of the model solution required for accurate sensitivities is more broadly important to other model uses is an issue that needs to be addressed. Alternative global methods can require 100 or even 1,000 times the number of runs needed by inferential methods, though methods of reducing the number of needed runs are being developed and tested. Here we present three approaches for quantifying model uncertainty and investigate their strengths and weaknesses. (1) Represent more aspects as parameters so that the computationally efficient methods can be broadly applied. This approach is attainable through universal model analysis software such as UCODE-2005, PEST, and joint use of these programs, which allow many aspects of a model to be defined as parameters. (2) Use highly parameterized models to quantify aspects of (e). While promising, this approach implicitly includes parameterizations that may be considered unreasonable if investigated explicitly, so that resulting measures of uncertainty may be too large. (3) Use a combination of inferential and global methods that can be facilitated using the new software MMA (Multi-Model Analysis), which is constructed using the JUPITER API. Here we consider issues related to the model discrimination criteria calculated by MMA.
Tang, Zhang-Chun; Zhenzhou, Lu; Zhiwen, Liu; Ningcong, Xiao
2015-01-01
There are various uncertain parameters in the techno-economic assessments (TEAs) of biodiesel production, including capital cost, interest rate, feedstock price, maintenance rate, biodiesel conversion efficiency, glycerol price and operating cost. However, few studies have focused on the influence of these parameters on TEAs. This paper investigated the effects of these parameters on the life cycle cost (LCC) and the unit cost (UC) in the TEAs of biodiesel production. The results show that LCC and UC exhibit variations when uncertain parameters are involved. Based on the uncertainty analysis, three global sensitivity analysis (GSA) methods are utilized to quantify the contribution of each individual uncertain parameter to LCC and UC. The GSA results reveal that the feedstock price and the interest rate produce considerable effects on the TEAs. These results can provide a useful guide for entrepreneurs when planning production plants. Copyright © 2014 Elsevier Ltd. All rights reserved.
Promising New Photon Detection Concepts for High-Resolution Clinical and Preclinical PET
Levin, Craig S.
2013-01-01
The ability of PET to visualize and quantify regions of low concentration of PET tracer representing subtle cellular and molecular signatures of disease depends on relatively complex biochemical, biologic, and physiologic factors that are challenging to control, as well as on instrumentation performance parameters that are, in principle, still possible to improve on. Thus, advances to the latter can somewhat offset barriers of the former. PET system performance parameters such as spatial resolution, contrast resolution, and photon sensitivity contribute significantly to PET’s ability to visualize and quantify lower concentrations of signal in the presence of background. In this report we present some technology innovations under investigation toward improving these PET system performance parameters. We focus particularly on a promising advance known as 3-dimensional position-sensitive detectors, which are detectors capable of distinguishing and measuring the position, energy, and arrival time of individual interactions of multi-interaction photon events in 3 dimensions. If successful, these new strategies enable enhancements such as the detection of fewer diseased cells in tissue or the ability to characterize lower-abundance molecular targets within cells. Translating these advanced capabilities to the clinic might allow expansion of PET’s roles in disease management, perhaps to earlier stages of disease. In preclinical research, such enhancements enable more sensitive and accurate studies of disease biology in living subjects. PMID:22302960
General methods for sensitivity analysis of equilibrium dynamics in patch occupancy models
Miller, David A.W.
2012-01-01
Sensitivity analysis is a useful tool for the study of ecological models that has many potential applications for patch occupancy modeling. Drawing from the rich foundation of existing methods for Markov chain models, I demonstrate new methods for sensitivity analysis of the equilibrium state dynamics of occupancy models. Estimates from three previous studies are used to illustrate the utility of the sensitivity calculations: a joint occupancy model for a prey species, its predators, and habitat used by both; occurrence dynamics from a well-known metapopulation study of three butterfly species; and Golden Eagle occupancy and reproductive dynamics. I show how to deal efficiently with multistate models and how to calculate sensitivities involving derived state variables and lower-level parameters. In addition, I extend methods to incorporate environmental variation by allowing for spatial and temporal variability in transition probabilities. The approach used here is concise and general and can fully account for environmental variability in transition parameters. The methods can be used to improve inferences in occupancy studies by quantifying the effects of underlying parameters, aiding prediction of future system states, and identifying priorities for sampling effort.
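As a much simpler stand-in for the multistate models treated above, the sketch below computes the equilibrium occupancy of a two-state (occupied/unoccupied) patch model and its analytical sensitivities to the colonization and extinction probabilities, checked by finite differences.

```python
# Equilibrium occupancy of a two-state patch model and its parameter sensitivities.
import numpy as np

def equilibrium_occupancy(c, e):
    """Stationary probability of 'occupied' for colonization c and extinction e."""
    return c / (c + e)

def sensitivities(c, e):
    d_dc = e / (c + e) ** 2                 # d(psi*)/dc
    d_de = -c / (c + e) ** 2                # d(psi*)/de
    return d_dc, d_de

c, e = 0.3, 0.1
print("psi* =", equilibrium_occupancy(c, e))
print("analytical sensitivities:", sensitivities(c, e))

# Finite-difference check of the sensitivity to colonization.
h = 1e-6
print((equilibrium_occupancy(c + h, e) - equilibrium_occupancy(c - h, e)) / (2 * h))
```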
Detecting influential observations in nonlinear regression modeling of groundwater flow
Yager, Richard M.
1998-01-01
Nonlinear regression is used to estimate optimal parameter values in models of groundwater flow to ensure that differences between predicted and observed heads and flows do not result from nonoptimal parameter values. Parameter estimates can be affected, however, by observations that disproportionately influence the regression, such as outliers that exert undue leverage on the objective function. Certain statistics developed for linear regression can be used to detect influential observations in nonlinear regression if the models are approximately linear. This paper discusses the application of Cook's D, which measures the effect of omitting a single observation on a set of estimated parameter values, and the statistical parameter DFBETAS, which quantifies the influence of an observation on each parameter. The influence statistics were used to (1) identify the influential observations in the calibration of a three-dimensional, groundwater flow model of a fractured-rock aquifer through nonlinear regression, and (2) quantify the effect of omitting influential observations on the set of estimated parameter values. Comparison of the spatial distribution of Cook's D with plots of model sensitivity shows that influential observations correspond to areas where the model heads are most sensitive to certain parameters, and where predicted groundwater flow rates are largest. Five of the six discharge observations were identified as influential, indicating that reliable measurements of groundwater flow rates are valuable data in model calibration. DFBETAS are computed and examined for an alternative model of the aquifer system to identify a parameterization error in the model design that resulted in overestimation of the effect of anisotropy on horizontal hydraulic conductivity.
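A linear-regression analogue of these influence diagnostics, computed with statsmodels on synthetic data; the groundwater flow model itself is not reproduced, and the planted outlier is purely illustrative.

```python
# Cook's D and DFBETAS for a simple linear regression with one planted outlier.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=30)
y = 2.0 + 0.5 * x + rng.normal(scale=0.5, size=30)
y[5] += 6.0                                   # plant one influential observation

fit = sm.OLS(y, sm.add_constant(x)).fit()
infl = fit.get_influence()

cooks_d, _ = infl.cooks_distance              # influence of each observation on all coefficients
dfbetas = infl.dfbetas                        # influence of each observation on each coefficient

worst = np.argmax(cooks_d)
print(f"most influential observation: index {worst}, Cook's D = {cooks_d[worst]:.2f}")
print("its DFBETAS (intercept, slope):", np.round(dfbetas[worst], 2))
```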
NASA Astrophysics Data System (ADS)
Gan, Yanjun; Liang, Xin-Zhong; Duan, Qingyun; Choi, Hyun Il; Dai, Yongjiu; Wu, Huan
2015-06-01
An uncertainty quantification framework was employed to examine the sensitivities of 24 model parameters from a newly developed Conjunctive Surface-Subsurface Process (CSSP) land surface model (LSM). The sensitivity analysis (SA) was performed over 18 representative watersheds in the contiguous United States to examine the influence of model parameters in the simulation of terrestrial hydrological processes. Two normalized metrics, relative bias (RB) and Nash-Sutcliffe efficiency (NSE), were adopted to assess the fit between simulated and observed streamflow discharge (SD) and evapotranspiration (ET) for a 14 year period. SA was conducted using a multiobjective two-stage approach, in which the first stage was a qualitative SA using the Latin Hypercube-based One-At-a-Time (LH-OAT) screening, and the second stage was a quantitative SA using the Multivariate Adaptive Regression Splines (MARS)-based Sobol' sensitivity indices. This approach combines the merits of qualitative and quantitative global SA methods, and is effective and efficient for understanding and simplifying large, complex system models. Ten of the 24 parameters were identified as important across different watersheds. The contribution of each parameter to the total response variance was then quantified by Sobol' sensitivity indices. Generally, parameter interactions contribute the most to the response variance of the CSSP, and only 5 out of 24 parameters dominate model behavior. Four photosynthetic and respiratory parameters are shown to be influential to ET, whereas reference depth for saturated hydraulic conductivity is the most influential parameter for SD in most watersheds. Parameter sensitivity patterns mainly depend on hydroclimatic regime, as well as vegetation type and soil texture. This article was corrected on 26 JUN 2015. See the end of the full text for details.
Sherwood, Carly A; Eastham, Ashley; Lee, Lik Wee; Risler, Jenni; Mirzaei, Hamid; Falkner, Jayson A; Martin, Daniel B
2009-07-01
Multiple reaction monitoring (MRM) is a highly sensitive method of targeted mass spectrometry (MS) that can be used to selectively detect and quantify peptides based on the screening of specified precursor peptide-to-fragment ion transitions. MRM-MS sensitivity depends critically on the tuning of instrument parameters, such as collision energy and cone voltage, for the generation of maximal product ion signal. Although generalized equations and values exist for such instrument parameters, there is no clear indication that optimal signal can be reliably produced for all types of MRM transitions using such an algorithmic approach. To address this issue, we have devised a workflow functional on both Waters Quattro Premier and ABI 4000 QTRAP triple quadrupole instruments that allows rapid determination of the optimal value of any programmable instrument parameter for each MRM transition. Here, we demonstrate the strategy for the optimizations of collision energy and cone voltage, but the method could be applied to other instrument parameters, such as declustering potential, as well. The workflow makes use of the incremental adjustment of the precursor and product m/z values at the hundredth decimal place to create a series of MRM targets at different collision energies that can be cycled through in rapid succession within a single run, avoiding any run-to-run variability in execution or comparison. Results are easily visualized and quantified using the MRM software package Mr. M to determine the optimal instrument parameters for each transition.
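A hedged sketch of the transition-list trick described above: each collision-energy candidate is encoded as a distinct MRM target by nudging the m/z values at the hundredth decimal place. All masses and energies below are hypothetical, and the export format of a real instrument method table is not reproduced.

```python
# Build a transition list that packs multiple collision-energy candidates for one
# peptide into a single run via hundredth-place m/z offsets (hypothetical values).
precursor_mz, product_mz = 523.28, 678.35
collision_energies = range(10, 41, 2)           # candidate CE values to scan, in eV

transitions = []
for step, ce in enumerate(collision_energies):
    offset = step * 0.01                        # hundredth-place increment keeps targets distinct
    transitions.append({
        "precursor_mz": round(precursor_mz + offset, 2),
        "product_mz": round(product_mz + offset, 2),
        "collision_energy": ce,
    })

for t in transitions[:3]:
    print(t)   # rows like these would populate the instrument's MRM method table
```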
Sherwood, Carly A.; Eastham, Ashley; Lee, Lik Wee; Risler, Jenni; Mirzaei, Hamid; Falkner, Jayson A.; Martin, Daniel B.
2009-01-01
Multiple reaction monitoring (MRM) is a highly sensitive method of targeted mass spectrometry (MS) that can be used to selectively detect and quantify peptides based on the screening of specified precursor peptide-to-fragment ion transitions. MRM-MS sensitivity depends critically on the tuning of instrument parameters, such as collision energy and cone voltage, for the generation of maximal product ion signal. Although generalized equations and values exist for such instrument parameters, there is no clear indication that optimal signal can be reliably produced for all types of MRM transitions using such an algorithmic approach. To address this issue, we have devised a workflow functional on both Waters Quattro Premier and ABI 4000 QTRAP triple quadrupole instruments that allows rapid determination of the optimal value of any programmable instrument parameter for each MRM transition. Here, we demonstrate the strategy for the optimizations of collision energy and cone voltage, but the method could be applied to other instrument parameters, such as declustering potential, as well. The workflow makes use of the incremental adjustment of the precursor and product m/z values at the hundredth decimal place to create a series of MRM targets at different collision energies that can be cycled through in rapid succession within a single run, avoiding any run-to-run variability in execution or comparison. Results are easily visualized and quantified using the MRM software package Mr. M to determine the optimal instrument parameters for each transition. PMID:19405522
Wu, Yiping; Liu, Shuguang; Huang, Zhihong; Yan, Wende
2014-01-01
Ecosystem models are useful tools for understanding ecological processes and for sustainable management of resources. In the biogeochemical field, numerical models have been widely used for investigating carbon dynamics under global changes from site to regional and global scales. However, it is still challenging to optimize parameters and estimate parameterization uncertainty for complex process-based models such as the Erosion Deposition Carbon Model (EDCM), a modified version of CENTURY, which considers the carbon, water, and nutrient cycles of ecosystems. This study was designed to conduct parameter identifiability, optimization, sensitivity, and uncertainty analysis of EDCM using our developed EDCM-Auto, which incorporates the comprehensive R package Flexible Modeling Framework (FME) and the Shuffled Complex Evolution (SCE) algorithm. Using a forest flux tower site as a case study, we implemented a comprehensive modeling analysis involving nine parameters and four target variables (carbon and water fluxes) with their corresponding measurements based on the eddy covariance technique. The local sensitivity analysis shows that the model cost function is most sensitive to the plant production-related parameters (e.g., PPDF1 and PRDX). Both SCE and FME are comparable and performed well in deriving the optimal parameter set with satisfactory simulations of the target variables. Global sensitivity and uncertainty analysis indicate that the parameter uncertainty and the resulting output uncertainty can be quantified, and that the magnitude of parameter-uncertainty effects depends on variables and seasons. This study also demonstrates that using cutting-edge R functions such as FME can be feasible and attractive for conducting comprehensive parameter analysis for ecosystem modeling.
Naujokaitis-Lewis, Ilona; Curtis, Janelle M R
2016-01-01
Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust. Our results underscore the importance of considering habitat attributes along with demographic parameters in sensitivity routines. GRIP 2.0 is an important decision-support tool that can be used to prioritize research, identify habitat-based thresholds and management intervention points to improve probability of species persistence, and evaluate trade-offs of alternative management options.
Curtis, Janelle M.R.
2016-01-01
Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust. Our results underscore the importance of considering habitat attributes along with demographic parameters in sensitivity routines. GRIP 2.0 is an important decision-support tool that can be used to prioritize research, identify habitat-based thresholds and management intervention points to improve probability of species persistence, and evaluate trade-offs of alternative management options. PMID:27547529
Wittmann, C; Heinzle, E
2001-04-01
Experimental design of (13)C-tracer studies for metabolic flux analysis with mass spectrometric determination of labeling patterns was performed for the central metabolism of Corynebacterium glutamicum comprising various flux scenarios. Ratio measurement of mass isotopomer pools of Corynebacterium products lysine, alanine, and trehalose is sufficient to quantify the flux partitioning ratios (i) between glycolysis and pentose phosphate pathways (Phi(PPP)), (ii) between the split pathways in the lysine biosynthesis (Phi(DH)), (iii) at the pyruvate node (Phi(PC)), and reversibilities of (iv) glucose 6-phosphate isomerase (zeta(PGI)), (v) at the pyruvate node (zeta(PC/PEPCK)), and (vi) of transaldolase and transketolases in the PPP. Weighted sensitivities for flux parameters were derived from partial derivatives to quantitatively evaluate experimental approaches and predict precision for estimated flux parameters. Deviation of intensity ratios from ideal values of 1 was used as weighting function. Weighted flux sensitivities can be used to identify optimal type and degree of tracer labeling or potential intensity ratios to be measured. Experimental design for lysine-producing strain C. glutamicum MH 20-22B (Marx et al., Biotechnol. Bioeng. 49, 111-129, 1996) and various potential mutants with different alterations in the flux pattern showed that specific tracer labelings are optimal to quantify a certain flux parameter uninfluenced by the overall flux situation. Identified substrates of choice are [1-(13)C]glucose for the estimation of Phi(PPP) and zeta(PGI) and a 1 : 1 mixture of [U-(12)C/U-(13)C]glucose for the determination of zeta(PC/PEPCK). Phi(PC) can be quantified by feeding [4-(13)C]glucose or [U-(12)C/U-(13)C]glucose (1 : 1), whereas Phi(DH) is accessible via [4-(13)C]glucose. The sensitivity for the quantification of a certain flux parameter can be influenced by superposition through other flux parameters in the network, but substrate and measured mass isotopomers of choice remain the same. In special cases, reduced labeling degree of the tracer substrate can increase the precision of flux analysis. Enhanced precision and flux information can be achieved via multiply labeled substrates. The presented approach can be applied for effective experimental design of (13)C tracer studies for metabolic flux analysis. Intensity ratios of other products such as glutamate, valine, phenylalanine, and riboflavin also sensitively reflect flux parameters, which underlines the great potential of mass spectrometry for flux analysis. Copyright 2001 Academic Press.
Computational Modelling and Optimal Control of Ebola Virus Disease with non-Linear Incidence Rate
NASA Astrophysics Data System (ADS)
Takaidza, I.; Makinde, O. D.; Okosun, O. K.
2017-03-01
The 2014 Ebola outbreak in West Africa has exposed the need to connect modellers and those with relevant data as pivotal to better understanding of how the disease spreads and quantifying the effects of possible interventions. In this paper, we model and analyse the Ebola virus disease with a non-linear incidence rate. The epidemic model created is used to describe how the Ebola virus could potentially evolve in a population. We perform an uncertainty analysis of the basic reproductive number R0 to quantify its sensitivity to other disease-related parameters. We also analyse the sensitivity of the final epidemic size to the time control interventions (education, vaccination, quarantine and safe handling) and provide the cost-effective combination of the interventions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Maoyi; Hou, Zhangshuan; Leung, Lai-Yung R.
2013-12-01
With the emergence of earth system models as important tools for understanding and predicting climate change and implications to mitigation and adaptation, it has become increasingly important to assess the fidelity of the land component within earth system models to capture realistic hydrological processes and their response to the changing climate and quantify the associated uncertainties. This study investigates the sensitivity of runoff simulations to major hydrologic parameters in version 4 of the Community Land Model (CLM4) by integrating CLM4 with a stochastic exploratory sensitivity analysis framework at 20 selected watersheds from the Model Parameter Estimation Experiment (MOPEX) spanning a wide range of climate and site conditions. We found that for runoff simulations, the most significant parameters are those related to the subsurface runoff parameterizations. Soil texture related parameters and surface runoff parameters are of secondary significance. Moreover, climate and soil conditions play important roles in the parameter sensitivity. In general, site conditions within water-limited hydrologic regimes and with finer soil texture result in stronger sensitivity of output variables, such as runoff and its surface and subsurface components, to the input parameters in CLM4. This study demonstrated the feasibility of parameter inversion for CLM4 using streamflow observations to improve runoff simulations. By ranking the significance of the input parameters, we showed that the parameter set dimensionality could be reduced for CLM4 parameter calibration under different hydrologic and climatic regimes so that the inverse problem is less ill-posed.
Spatial trends in Pearson Type III statistical parameters
Lichty, R.W.; Karlinger, M.R.
1995-01-01
Spatial trends in the statistical parameters (mean, standard deviation, and skewness coefficient) of a Pearson Type III distribution of the logarithms of annual flood peaks for small rural basins (less than 90 km²) are delineated using a climate factor CT (T = 2-, 25-, and 100-yr recurrence intervals), which quantifies the effects of long-term climatic data (rainfall and pan evaporation) on observed T-yr floods. Maps showing trends in average parameter values demonstrate the geographically varying influence of climate on the magnitude of Pearson Type III statistical parameters. The spatial trends in variability of the parameter values characterize the sensitivity of statistical parameters to the interaction of basin-runoff characteristics (hydrology) and climate. -from Authors
NASA Astrophysics Data System (ADS)
Lundquist, K. A.; Jensen, D. D.; Lucas, D. D.
2017-12-01
Atmospheric source reconstruction allows for the probabilistic estimation of the source characteristics of an atmospheric release using observations of the release. Performance of the inversion depends partially on the temporal frequency and spatial scale of the observations. The objective of this study is to quantify the sensitivity of the source reconstruction method to sparse spatial and temporal observations. To this end, simulations of atmospheric transport of noble gases are created for the 2006 nuclear test at the Punggye-ri nuclear test site. Synthetic observations are collected from the simulation and are taken as "ground truth". Data denial techniques are used to progressively coarsen the temporal and spatial resolution of the synthetic observations, while the source reconstruction model seeks to recover the true input parameters from the synthetic observations. The reconstructed parameters considered here are source location, source timing and source quantity. Reconstruction is achieved by running an ensemble of thousands of dispersion model runs that sample from a uniform distribution of the input parameters. Machine learning is used to train a computationally efficient surrogate model from the ensemble simulations. Monte Carlo sampling and Bayesian inversion are then used in conjunction with the surrogate model to quantify the posterior probability density functions of the source input parameters. This research seeks to inform decision makers of the tradeoffs between more expensive, high-frequency observations and less expensive, low-frequency observations.
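A minimal sketch of the surrogate-plus-Bayesian-inversion idea, with a toy analytic function standing in for the trained surrogate and a simple random-walk Metropolis sampler standing in for the full Monte Carlo machinery; all parameters, sensor locations, and noise levels are invented.

```python
# Surrogate-based Bayesian source inversion, reduced to a toy: recover a release
# time and amount from three synthetic "sensor" readings via Metropolis sampling.
import numpy as np

rng = np.random.default_rng(0)

def surrogate(params):
    """Toy stand-in for an emulator of the dispersion model: maps (release time,
    release amount) to predicted concentrations at three hypothetical sensors."""
    t0, q = params
    sensors = np.array([1.0, 2.0, 3.0])
    return q * np.exp(-0.5 * (sensors - t0) ** 2)

truth = np.array([1.8, 4.0])
sigma = 0.05
obs = surrogate(truth) + rng.normal(scale=sigma, size=3)      # synthetic observations

def log_posterior(params):
    t0, q = params
    if not (0.0 < t0 < 5.0 and 0.0 < q < 10.0):               # uniform priors
        return -np.inf
    resid = obs - surrogate(params)
    return -0.5 * np.sum((resid / sigma) ** 2)                # Gaussian likelihood

# Random-walk Metropolis over the source parameters.
chain, current = [], np.array([2.5, 5.0])
logp = log_posterior(current)
for _ in range(20_000):
    proposal = current + rng.normal(scale=[0.1, 0.2])
    logp_new = log_posterior(proposal)
    if np.log(rng.uniform()) < logp_new - logp:
        current, logp = proposal, logp_new
    chain.append(current)

chain = np.array(chain[5_000:])                               # drop burn-in
print("posterior means:", chain.mean(axis=0), "posterior stds:", chain.std(axis=0))
```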
NASA Astrophysics Data System (ADS)
Gan, Y.; Liang, X. Z.; Duan, Q.; Xu, J.; Zhao, P.; Hong, Y.
2017-12-01
The uncertainties associated with the parameters of a hydrological model need to be quantified and reduced for it to be useful for operational hydrological forecasting and decision support. An uncertainty quantification framework is presented to facilitate practical assessment and reduction of model parametric uncertainties. A case study, using the distributed hydrological model CREST for daily streamflow simulation during the period 2008-2010 over ten watersheds, was used to demonstrate the performance of this new framework. Model behaviors across watersheds were analyzed by a two-stage stepwise sensitivity analysis procedure, using the LH-OAT method for screening out insensitive parameters, followed by MARS-based Sobol' sensitivity indices for quantifying each parameter's contribution to the response variance due to its first-order and higher-order effects. Pareto optimal sets of the influential parameters were then found by an adaptive surrogate-based multi-objective optimization procedure, using a MARS model to approximate the parameter-response relationship and the SCE-UA algorithm to search for the optimal parameter sets of the adaptively updated surrogate model. The final optimal parameter sets were validated against the daily streamflow simulation of the same watersheds during the period 2011-2012. The stepwise sensitivity analysis procedure efficiently reduced the number of parameters that need to be calibrated from twelve to seven, which helps to limit the dimensionality of the calibration problem and serves to enhance the efficiency of parameter calibration. The adaptive MARS-based multi-objective calibration exercise provided satisfactory solutions for reproducing the observed streamflow for all watersheds. The final optimal solutions showed significant improvement when compared to the default solutions, with about 65-90% reduction in 1-NSE and 60-95% reduction in |RB|. The validation exercise indicated a large improvement in model performance, with about 40-85% reduction in 1-NSE and 35-90% reduction in |RB|. Overall, this uncertainty quantification framework is robust, effective and efficient for parametric uncertainty analysis, and its results provide useful information that helps to understand model behaviors and improve model simulations.
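For reference, the two skill metrics quoted above can be computed as follows, using their standard definitions on placeholder streamflow series (the CREST simulations are not reproduced).

```python
# Nash-Sutcliffe efficiency and relative bias on placeholder daily streamflow series.
import numpy as np

def nash_sutcliffe(sim, obs):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 is a perfect fit."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def relative_bias(sim, obs):
    """RB = (sum(sim) - sum(obs)) / sum(obs); 0 means no overall volume bias."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return (sim.sum() - obs.sum()) / obs.sum()

obs = np.array([3.0, 5.0, 9.0, 12.0, 7.0, 4.0])
sim = np.array([2.5, 5.5, 8.0, 13.0, 7.5, 4.5])
print(f"NSE = {nash_sutcliffe(sim, obs):.3f}, RB = {relative_bias(sim, obs):+.3f}")
```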
Analysis of automated quantification of motor activity in REM sleep behaviour disorder.
Frandsen, Rune; Nikolic, Miki; Zoetmulder, Marielle; Kempfner, Lykke; Jennum, Poul
2015-10-01
Rapid eye movement (REM) sleep behaviour disorder (RBD) is characterized by dream enactment and REM sleep without atonia. Atonia is evaluated on the basis of visual criteria, but there is a need for more objective, quantitative measurements. We aimed to define and optimize a method for establishing the baseline and all other parameters for automatically quantifying submental motor activity during REM sleep. We analysed the electromyographic activity of the submental muscle in polysomnographs of 29 patients with idiopathic RBD (iRBD), 29 controls and 43 Parkinson's disease (PD) patients. Six adjustable parameters for motor activity were defined. Motor activity was detected and quantified automatically. The optimal parameters for separating RBD patients from controls were investigated by identifying the greatest area under the receiver operating curve from a total of 648 possible combinations. The optimal parameters were validated on the PD patients. Automatic baseline estimation improved characterization of atonia during REM sleep, as it eliminates inter/intra-observer variability and can be standardized across diagnostic centres. We found an optimized method for quantifying motor activity during REM sleep. The method was stable and can be used to differentiate RBD patients from controls and to quantify motor activity during REM sleep in patients with neurodegeneration. No control had more than 30% of REM sleep with increased motor activity; patients with known RBD had activity as low as 4.5%. We developed and applied a sensitive, quantitative, automatic algorithm to evaluate loss of atonia in RBD patients. © 2015 European Sleep Research Society.
Uncertainty analysis in geospatial merit matrix–based hydropower resource assessment
Pasha, M. Fayzul K.; Yeasmin, Dilruba; Saetern, Sen; ...
2016-03-30
Hydraulic head and mean annual streamflow, two main input parameters in hydropower resource assessment, are not measured at every point along the stream. Translation and interpolation are used to derive these parameters, resulting in uncertainties. This study estimates the uncertainties and their effects on the model output parameters: the total potential power and the number of potential locations (stream-reaches). These parameters are quantified through Monte Carlo Simulation (MCS) linked with a geospatial merit matrix based hydropower resource assessment (GMM-HRA) model. The methodology is applied to flat, mild, and steep terrains. Results show that the uncertainty associated with the hydraulic head is within 20% for mild and steep terrains, and the uncertainty associated with streamflow is around 16% for all three terrains. Output uncertainty increases as input uncertainty increases. However, output uncertainty is around 10% to 20% of the input uncertainty, demonstrating the robustness of the GMM-HRA model. The output parameters are more sensitive to hydraulic head in steep terrain than in flat and mild terrains. Furthermore, the output parameters are more sensitive to mean annual streamflow in flat terrain.
Variance-based interaction index measuring heteroscedasticity
NASA Astrophysics Data System (ADS)
Ito, Keiichi; Couckuyt, Ivo; Poles, Silvia; Dhaene, Tom
2016-06-01
This work is motivated by the need to deal with models with high-dimensional input spaces of real variables. One way to tackle high-dimensional problems is to identify interaction or non-interaction among input parameters. We propose a new variance-based sensitivity interaction index that can detect and quantify interactions among the input variables of mathematical functions and computer simulations. The computation is very similar to that of the first-order sensitivity indices of Sobol'. The proposed interaction index can quantify the relative importance of input variables in interaction. Furthermore, detection of non-interaction for screening can be done with as few as 4n + 2 function evaluations, where n is the number of input variables. Using the interaction indices based on heteroscedasticity, the original function may be decomposed into a set of lower-dimensional functions which may then be analyzed separately.
First and Higher Order Effects on Zero Order Radiative Transfer Model
NASA Astrophysics Data System (ADS)
Neelam, M.; Mohanty, B.
2014-12-01
Microwave radiative transfer models are valuable tools for understanding complex land surface interactions. Past literature has largely focused on local sensitivity analysis for factor prioritization, ignoring the interactions between variables and the uncertainties around them. Since land surface interactions are largely nonlinear, uncertainties, heterogeneities and interactions always exist, and it is important to quantify them to draw accurate conclusions. In this effort, we used global sensitivity analysis (GSA) to address the issues of variable uncertainty, higher-order interactions, factor prioritization and factor fixing for the zero-order radiative transfer (ZRT) model. With the to-be-launched Soil Moisture Active Passive (SMAP) mission of NASA, it is very important to have a complete understanding of ZRT for soil moisture retrieval to direct future research and cal/val field campaigns. This is a first attempt to use a GSA technique to quantify first-order and higher-order effects on brightness temperature from the ZRT model. Our analyses reflect conditions observed during the growing agricultural season for corn and soybeans in two different regions: Iowa, USA, and Winnipeg, Canada. We found that for corn fields in Iowa, there exist significant second-order interactions between soil moisture, surface roughness parameters (RMS height and correlation length) and vegetation parameters (vegetation water content, structure and scattering albedo), whereas in Winnipeg, second-order interactions are mainly due to soil moisture and vegetation parameters. For soybean fields in both Iowa and Winnipeg, however, we found significant interactions to exist only between soil moisture and surface roughness parameters.
Yang, Ben; Qian, Yun; Berg, Larry K.; ...
2016-07-21
We evaluate the sensitivity of simulated turbine-height wind speeds to 26 parameters within the Mellor–Yamada–Nakanishi–Niino (MYNN) planetary boundary-layer scheme and MM5 surface-layer scheme of the Weather Research and Forecasting model over an area of complex terrain. An efficient sampling algorithm and generalized linear model are used to explore the multiple-dimensional parameter space and quantify the parametric sensitivity of simulated turbine-height wind speeds. The results indicate that most of the variability in the ensemble simulations is due to parameters related to the dissipation of turbulent kinetic energy (TKE), Prandtl number, turbulent length scales, surface roughness, and the von Kármán constant. The parameter associated with the TKE dissipation rate is found to be most important, and a larger dissipation rate produces larger hub-height wind speeds. A larger Prandtl number results in smaller nighttime wind speeds. Increasing surface roughness reduces the frequencies of both extremely weak and strong airflows, implying a reduction in the variability of wind speed. All of the above parameters significantly affect the vertical profiles of wind speed and the magnitude of wind shear. Lastly, the relative contributions of individual parameters are found to be dependent on both the terrain slope and atmospheric stability.
Sensitivity Analysis of the Bone Fracture Risk Model
NASA Technical Reports Server (NTRS)
Lewandowski, Beth; Myers, Jerry; Sibonga, Jean Diane
2017-01-01
Introduction: The probability of bone fracture during and after spaceflight is quantified to aid in mission planning, to determine required astronaut fitness standards and training requirements and to inform countermeasure research and design. Probability is quantified with a probabilistic modeling approach where distributions of model parameter values, instead of single deterministic values, capture the parameter variability within the astronaut population, and fracture predictions are probability distributions with a mean value and an associated uncertainty. Because of this uncertainty, the model in its current state cannot discern an effect of countermeasures on fracture probability, for example between use and non-use of bisphosphonates or between spaceflight exercise performed with the Advanced Resistive Exercise Device (ARED) or on devices prior to installation of ARED on the International Space Station. This is thought to be due to the inability to measure key contributors to bone strength, for example geometry and volumetric distributions of bone mass, with areal bone mineral density (BMD) measurement techniques. To further the applicability of the model, we performed a parameter sensitivity study aimed at identifying the parameter uncertainties that most affect the model forecasts, in order to determine which areas of the model need enhancement to reduce uncertainty. Methods: The bone fracture risk model (BFxRM), originally published in Nelson et al., is a probabilistic model that can assess the risk of astronaut bone fracture. This is accomplished by utilizing biomechanical models to assess the applied loads; utilizing models of spaceflight BMD loss in at-risk skeletal locations; quantifying bone strength through a relationship between areal BMD and bone failure load; and relating fracture risk index (FRI), the ratio of applied load to bone strength, to fracture probability. There are many factors associated with these calculations, including environmental factors, factors associated with the fall event, mass and anthropometric values of the astronaut, BMD characteristics, characteristics of the relationship between BMD and bone strength, and bone fracture characteristics. The uncertainty in these factors is captured through the use of parameter distributions, and the fracture predictions are probability distributions with a mean value and an associated uncertainty. To determine parameter sensitivity, a correlation coefficient is found between the sample set of each model parameter and the calculated fracture probabilities. Each parameter's contribution to the variance is found by squaring the correlation coefficients, dividing by the sum of the squared correlation coefficients, and multiplying by 100. Results: Sensitivity analyses of BFxRM simulations of preflight, 0 days post-flight and 365 days post-flight falls onto the hip revealed a subset of the twelve factors within the model which cause the most variation in the fracture predictions. These factors include the spring constant used in the hip biomechanical model, the midpoint FRI parameter within the equation used to convert FRI to fracture probability, and preflight BMD values. Future work: Plans are underway to update the BFxRM by incorporating bone strength information from finite element models (FEM) into the bone strength portion of the BFxRM. Also, FEM bone strength information, along with fracture outcome data, will be incorporated into the FRI-to-fracture-probability relationship.
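The contribution-to-variance calculation described in the Methods paragraph can be sketched directly. The parameter samples and the toy response below are hypothetical stand-ins for the BFxRM parameter draws and fracture probabilities; only the squared-correlation bookkeeping follows the recipe given above.

import numpy as np

rng = np.random.default_rng(2)
n = 5_000
# Hypothetical parameter samples (one array per model parameter)
params = {"spring_constant": rng.normal(0.0, 1.0, n),
          "midpoint_FRI": rng.normal(0.0, 1.0, n),
          "preflight_BMD": rng.normal(0.0, 1.0, n)}
# Toy stand-in for the calculated fracture probabilities
response = (2.0 * params["spring_constant"]
            - 1.5 * params["midpoint_FRI"]
            + 0.5 * params["preflight_BMD"]
            + rng.normal(0.0, 0.5, n))

# Square each correlation coefficient, divide by the sum of squares, multiply by 100
corr2 = {k: np.corrcoef(v, response)[0, 1] ** 2 for k, v in params.items()}
total = sum(corr2.values())
for k, c2 in corr2.items():
    print(f"{k}: {100 * c2 / total:.1f}% of variance")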
Oakley, Jeremy E.; Brennan, Alan; Breeze, Penny
2015-01-01
Health economic decision-analytic models are used to estimate the expected net benefits of competing decision options. The true values of the input parameters of such models are rarely known with certainty, and it is often useful to quantify the value to the decision maker of reducing uncertainty through collecting new data. In the context of a particular decision problem, the value of a proposed research design can be quantified by its expected value of sample information (EVSI). EVSI is commonly estimated via a 2-level Monte Carlo procedure in which plausible data sets are generated in an outer loop, and then, conditional on these, the parameters of the decision model are updated via Bayes rule and sampled in an inner loop. At each iteration of the inner loop, the decision model is evaluated. This is computationally demanding and may be difficult if the posterior distribution of the model parameters conditional on sampled data is hard to sample from. We describe a fast nonparametric regression-based method for estimating per-patient EVSI that requires only the probabilistic sensitivity analysis sample (i.e., the set of samples drawn from the joint distribution of the parameters and the corresponding net benefits). The method avoids the need to sample from the posterior distributions of the parameters and avoids the need to rerun the model. The only requirement is that sample data sets can be generated. The method is applicable with a model of any complexity and with any specification of model parameter distribution. We demonstrate in a case study the superior efficiency of the regression method over the 2-level Monte Carlo method. PMID:25810269
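A minimal sketch of the regression idea follows, using a toy two-option decision model, a single data summary statistic, and a simple polynomial smoother in place of the authors' nonparametric regression; all numbers, the decision model, and the proposed study size are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(3)
K = 10_000                                   # size of the probabilistic sensitivity analysis sample
theta = rng.normal(0.5, 0.2, K)              # hypothetical effectiveness parameter
nb0 = np.zeros(K)                            # net benefit of the comparator (baseline)
nb1 = 20_000 * theta - 10_000                # net benefit of the new option (toy model)

n_new = 50                                   # proposed study size
# Summary statistic of the plausible data set generated for each PSA draw
xbar = rng.normal(theta, 1.0 / np.sqrt(n_new))

# Regress incremental net benefit on the data summary to approximate E[INB | data]
coeffs = np.polyfit(xbar, nb1 - nb0, deg=3)
fitted_inb = np.polyval(coeffs, xbar)

# EVSI = E_data[max over options of E[NB | data]] - max over options of E[NB]
evsi = np.mean(np.maximum(fitted_inb, 0.0)) - max(np.mean(fitted_inb), 0.0)
print(f"per-patient EVSI (toy example): {evsi:.0f}")

Because only the already-available PSA sample and the regression fit are used, no inner-loop Bayesian updating or model reruns are needed, which is the computational saving described in the abstract.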
Mathieu, Amélie; Vidal, Tiphaine; Jullien, Alexandra; Wu, QiongLi; Chambon, Camille; Bayol, Benoit; Cournède, Paul-Henry
2018-06-19
Functional-structural plant models (FSPMs) describe explicitly the interactions between plants and their environment at organ to plant scale. However, the high level of description of the structure or model mechanisms makes this type of model very complex and hard to calibrate. A two-step methodology to facilitate the calibration process is proposed here. First, a global sensitivity analysis method was applied to the calibration loss function. It provided first-order and total-order sensitivity indexes that allow parameters to be ranked by importance in order to select the most influential ones. Second, the Akaike information criterion (AIC) was used to quantify the model's quality of fit after calibration with different combinations of selected parameters. The model with the lowest AIC gives the best combination of parameters to select. This methodology was validated by calibrating the model on an independent data set (same cultivar, another year) with the parameters selected in the second step. All the parameters were set to their nominal value; only the most influential ones were re-estimated. Sensitivity analysis applied to the calibration loss function is a relevant method to underline the most significant parameters in the estimation process. For the studied winter oilseed rape model, 11 out of 26 estimated parameters were selected. Then, the model could be recalibrated for a different data set by re-estimating only three parameters selected with the model selection method. Fitting only a small number of parameters dramatically increases the efficiency of recalibration, increases the robustness of the model and helps identify the principal sources of variation in varying environmental conditions. This innovative method still needs to be more widely validated but already gives interesting avenues to improve the calibration of FSPMs.
Sumner, T; Shephard, E; Bogle, I D L
2012-09-07
One of the main challenges in the development of mathematical and computational models of biological systems is the precise estimation of parameter values. Understanding the effects of uncertainties in parameter values on model behaviour is crucial to the successful use of these models. Global sensitivity analysis (SA) can be used to quantify the variability in model predictions resulting from the uncertainty in multiple parameters and to shed light on the biological mechanisms driving system behaviour. We present a new methodology for global SA in systems biology which is computationally efficient and can be used to identify the key parameters and their interactions which drive the dynamic behaviour of a complex biological model. The approach combines functional principal component analysis with established global SA techniques. The methodology is applied to a model of the insulin signalling pathway, defects of which are a major cause of type 2 diabetes and a number of key features of the system are identified.
A Workflow for Global Sensitivity Analysis of PBPK Models
McNally, Kevin; Cotton, Richard; Loizou, George D.
2011-01-01
Physiologically based pharmacokinetic (PBPK) models have a potentially significant role in the development of a reliable predictive toxicity testing strategy. The structure of a PBPK model is an ideal framework into which disparate in vitro and in vivo data can be integrated and utilized to translate information generated using alternatives to animal measures of toxicity, together with human biological monitoring data, into plausible corresponding exposures. However, these models invariably include descriptions of well-known non-linear biological processes, such as enzyme saturation, and interactions between parameters, such as organ mass and body mass. Therefore, an appropriate sensitivity analysis (SA) technique is required which can quantify the influences associated with individual parameters, interactions between parameters and any non-linear processes. In this report we have defined the elements of a workflow for SA of PBPK models that is computationally feasible, accounts for interactions between parameters, and can be displayed in the form of a bar chart and cumulative sum line (Lowry plot), which we believe is intuitive and appropriate for toxicologists, risk assessors, and regulators. PMID:21772819
Probabilistic structural analysis of a truss typical for space station
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.
1990-01-01
A three-bay, space, cantilever truss is probabilistically evaluated using the computer code NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) to identify and quantify the uncertainties and respective sensitivities associated with corresponding uncertainties in the primitive variables (structural, material, and loads parameters) that define the truss. The distribution of each of these primitive variables is described in terms of one of several available distributions, such as the Weibull, exponential, normal, and log-normal. The cumulative distribution functions (CDFs) for the response functions considered and the sensitivities associated with the primitive variables for a given response are investigated. These sensitivities help in determining the dominant primitive variables for that response.
Parameter Uncertainty Analysis Using Monte Carlo Simulations for a Regional-Scale Groundwater Model
NASA Astrophysics Data System (ADS)
Zhang, Y.; Pohlmann, K.
2016-12-01
Regional-scale grid-based groundwater models for flow and transport often contain multiple types of parameters that can intensify the challenge of parameter uncertainty analysis. We propose a Monte Carlo approach to systematically quantify the influence of various types of model parameters on groundwater flux and contaminant travel times. The Monte Carlo simulations were conducted based on the steady-state conversion of the original transient model, which was then combined with the PEST sensitivity analysis tool SENSAN and particle tracking software MODPATH. Results identified hydrogeologic units whose hydraulic conductivity can significantly affect groundwater flux, and thirteen out of 173 model parameters that can cause large variation in travel times for contaminant particles originating from given source zones.
Model-based POD study of manual ultrasound inspection and sensitivity analysis using metamodel
NASA Astrophysics Data System (ADS)
Ribay, Guillemette; Artusi, Xavier; Jenson, Frédéric; Reece, Christopher; Lhuillier, Pierre-Emile
2016-02-01
The reliability of NDE can be quantified by using the Probability of Detection (POD) approach. Previous studies have shown the potential of the model-assisted POD (MAPOD) approach to replace expensive experimental determination of POD curves. In this paper, we make use of CIVA software to determine POD curves for a manual ultrasonic inspection of a heavy component, for which a full experimental POD campaign was not available. The influential parameters were determined by expert analysis. The semi-analytical models used in CIVA for wave propagation and beam-defect interaction have been validated over the range of variation of the influential parameters by comparison with finite element modelling (Athena). The POD curves are computed for "hit/miss" and "â versus a" analyses. The validity of the Berens hypotheses is evaluated with statistical tools. A sensitivity study is performed to measure the relative influence of the parameters on the variance of the defect response amplitude, using the Sobol sensitivity index. A metamodel is also built to reduce computing cost and enhance the precision of the estimated index.
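For readers unfamiliar with hit/miss POD fitting, the sketch below shows one common way to fit a hit/miss POD curve, logistic regression on log defect size, and to read off an a90 value. The defect sizes, the true detection model, and the use of scikit-learn are illustrative assumptions and do not reproduce the CIVA/MAPOD procedure or the Berens "â versus a" analysis.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
a = rng.uniform(0.5, 5.0, 300)                          # hypothetical defect sizes [mm]
p_true = 1.0 / (1.0 + np.exp(-3.0 * (np.log(a) - np.log(1.5))))
hit = (rng.uniform(size=a.size) < p_true).astype(int)   # 1 = defect detected

model = LogisticRegression().fit(np.log(a).reshape(-1, 1), hit)
grid = np.linspace(0.5, 5.0, 200)
pod = model.predict_proba(np.log(grid).reshape(-1, 1))[:, 1]
a90 = grid[np.argmax(pod >= 0.90)]                      # smallest size with POD >= 90%
print(f"a90 (toy example): {a90:.2f} mm")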
Ultra-sensitive probe of spectral line structure and detection of isotopic oxygen
NASA Astrophysics Data System (ADS)
Garner, Richard M.; Dharamsi, A. N.; Khan, M. Amir
2018-01-01
We discuss a new method for investigating and obtaining the quantitative behavior of higher harmonic (> 2f) wavelength modulation spectroscopy (WMS) based on signal structure. It is shown that the spectral structure of higher harmonic WMS signals, quantified by the number of zero crossings and turning points, can have increased sensitivity to ambient conditions or line-broadening effects from changes in temperature, pressure, or optical depth. The structure of WMS signals, characterized by combinations of signal magnitude and the spectral locations of turning points and zero crossings, provides a unique scale that quantifies lineshape parameters and is thus useful in optimizing measurements obtained from multi-harmonic WMS signals. We demonstrate this by detecting weaker rotational-vibrational transitions of isotopic atmospheric oxygen (16O18O) in the near-infrared region, where higher harmonic WMS signals are more sensitive, contrary to what signal-to-noise ratio considerations alone would suggest. The proposed approach based on spectral structure provides the ability to investigate and quantify signals not only at line center but also in the wing region of the absorption profile. This formulation is particularly useful in tunable diode laser spectroscopy and ultra-precision laser-based sensors, where the absorption signal profile carries information on the quantities of interest, e.g., concentration, velocity, or gas collision dynamics.
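The idea of characterizing a harmonic WMS signal by counting its zero crossings can be illustrated with a toy calculation: a Lorentzian absorption line is wavelength-modulated, the n-th Fourier component is extracted as an idealized lock-in would do, and the sign changes across the scan are counted. The lineshape, modulation depth, and scan range are assumptions chosen only to show the structure metric, not the authors' experimental conditions.

import numpy as np

def lorentzian(nu, nu0=0.0, gamma=1.0):
    # Normalized absorption lineshape with half-width gamma
    return (gamma / np.pi) / ((nu - nu0) ** 2 + gamma ** 2)

def wms_harmonic(nu_center, n, mod_amp=1.2, gamma=1.0, n_t=4096):
    # n-th Fourier coefficient of the modulated absorption (ideal lock-in detection)
    theta = np.linspace(0.0, 2.0 * np.pi, n_t, endpoint=False)
    alpha = lorentzian(nu_center + mod_amp * np.cos(theta), gamma=gamma)
    return 2.0 * np.mean(alpha * np.cos(n * theta))

scan = np.linspace(-6.0, 6.0, 601)          # detuning from line center, in half-widths
for n in (2, 4, 6):
    sig = np.array([wms_harmonic(x, n) for x in scan])
    zero_crossings = np.count_nonzero(np.diff(np.sign(sig)) != 0)
    print(f"{n}f signal: {zero_crossings} zero crossings across the scan")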
Sun, Ying; Gu, Lianhong; Dickinson, Robert E; Pallardy, Stephen G; Baker, John; Cao, Yonghui; DaMatta, Fábio Murilo; Dong, Xuejun; Ellsworth, David; Van Goethem, Davina; Jensen, Anna M; Law, Beverly E; Loos, Rodolfo; Martins, Samuel C Vitor; Norby, Richard J; Warren, Jeffrey; Weston, David; Winter, Klaus
2014-04-01
Worldwide measurements of nearly 130 C3 species covering all major plant functional types are analysed in conjunction with model simulations to determine the effects of mesophyll conductance (g(m)) on photosynthetic parameters and their relationships estimated from A/Ci curves. We find that an assumption of infinite g(m) results in up to 75% underestimation for maximum carboxylation rate V(cmax), 60% for maximum electron transport rate J(max), and 40% for triose phosphate utilization rate T(u). V(cmax) is most sensitive, J(max) is less sensitive, and T(u) has the least sensitivity to the variation of g(m). Because of this asymmetrical effect of g(m), the ratios of J(max) to V(cmax), T(u) to V(cmax) and T(u) to J(max) are all overestimated. An infinite g(m) assumption also limits the freedom of variation of estimated parameters and artificially constrains parameter relationships to stronger shapes. These findings suggest the importance of quantifying g(m) for understanding in situ photosynthetic machinery functioning. We show that a nonzero resistance to CO2 movement in chloroplasts has small effects on estimated parameters. A non-linear function with g(m) as input is developed to convert the parameters estimated under an assumption of infinite g(m) to proper values. This function will facilitate g(m) representation in global carbon cycle models. © 2013 John Wiley & Sons Ltd.
Can nudging be used to quantify model sensitivities in precipitation and cloud forcing?
NASA Astrophysics Data System (ADS)
Lin, Guangxing; Wan, Hui; Zhang, Kai; Qian, Yun; Ghan, Steven J.
2016-09-01
Efficient simulation strategies are crucial for the development and evaluation of high-resolution climate models. This paper evaluates simulations with constrained meteorology for the quantification of parametric sensitivities in the Community Atmosphere Model version 5 (CAM5). Two parameters are perturbed as illustrating examples: the convection relaxation time scale (TAU), and the threshold relative humidity for the formation of low-level stratiform clouds (rhminl). Results suggest that the fidelity of the constrained simulations depends on the detailed implementation of nudging and the mechanism through which the perturbed parameter affects precipitation and cloud. The relative computational costs of nudged and free-running simulations are determined by the magnitude of internal variability in the physical quantities of interest, as well as the magnitude of the parameter perturbation. In the case of a strong perturbation in convection, temperature, and/or wind nudging with a 6 h relaxation time scale leads to nonnegligible side effects due to the distorted interactions between resolved dynamics and parameterized convection, while 1 year free-running simulations can satisfactorily capture the annual mean precipitation and cloud forcing sensitivities. In the case of a relatively weak perturbation in the large-scale condensation scheme, results from 1 year free-running simulations are strongly affected by natural noise, while nudging winds effectively reduces the noise, and reasonably reproduces the sensitivities. These results indicate that caution is needed when using nudged simulations to assess precipitation and cloud forcing sensitivities to parameter changes in general circulation models. We also demonstrate that ensembles of short simulations are useful for understanding the evolution of model sensitivities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peace, Gerald; Goering, Timothy James; Miller, Mark Laverne
2007-01-01
A probabilistic performance assessment has been conducted to evaluate the fate and transport of radionuclides (americium-241, cesium-137, cobalt-60, plutonium-238, plutonium-239, radium-226, radon-222, strontium-90, thorium-232, tritium, uranium-238), heavy metals (lead and cadmium), and volatile organic compounds (VOCs) at the Mixed Waste Landfill (MWL). Probabilistic analyses were performed to quantify uncertainties inherent in the system and models for a 1,000-year period, and sensitivity analyses were performed to identify parameters and processes that were most important to the simulated performance metrics. Comparisons between simulated results and measured values at the MWL were made to gain confidence in the models and perform calibrations when data were available. In addition, long-term monitoring requirements and triggers were recommended based on the results of the quantified uncertainty and sensitivity analyses.
Local sensitivity analysis for inverse problems solved by singular value decomposition
Hill, M.C.; Nolan, B.T.
2010-01-01
Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and(or) parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process-model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic, and the combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA's Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content. All 16 of the SVD parameters could be estimated by regression based on the range of singular values. Identifiability statistic results varied based on the number of SVD parameters included. Identifiability statistics calculated for four SVD parameters indicate the same three most important process-model parameters as CSS/PCC (WFC1, WFC2, and BD2), but the order differed. Additionally, the identifiability statistic showed that BD1 was almost as dominant as WFC1. The CSS/PCC analysis showed that this results from its high correlation with WFC1 (-0.94), and not from its individual sensitivity. Such distinctions, combined with analysis of how high correlations and(or) sensitivities result from the constructed model, can produce important insights into, for example, the use of sensitivity analysis to design monitoring networks. In conclusion, the statistics considered identified similar important parameters. They differ in that (1) use of CSS/PCC can be more awkward because sensitivity and interdependence are considered separately, and (2) the identifiability statistic requires deciding how many SVD parameters to include. A continuing challenge is to understand how these computationally efficient methods compare with computationally demanding global methods like Markov chain Monte Carlo, given common nonlinear processes and the often even more nonlinear models.
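The CSS and PCC statistics discussed above can be computed from a model Jacobian in a few lines. The sketch below uses a synthetic Jacobian, unit weights, and the standard textbook definitions (dimensionless scaled sensitivities and the weighted least-squares parameter covariance); it is not the PEST or UCODE implementation, and the near-collinear third column is deliberately constructed to show a PCC close to 1.

import numpy as np

rng = np.random.default_rng(4)
n_obs, n_par = 40, 3
J = rng.normal(size=(n_obs, n_par))                         # hypothetical Jacobian d(sim)/d(param)
J[:, 2] = 0.95 * J[:, 1] + 0.05 * rng.normal(size=n_obs)    # make parameters 2 and 3 nearly interdependent
b = np.array([1.0, 2.0, 0.5])                               # parameter values
w = np.ones(n_obs)                                          # observation weights

# Composite scaled sensitivities from dimensionless scaled sensitivities
dss = J * b * np.sqrt(w)[:, None]
css = np.sqrt(np.mean(dss ** 2, axis=0))
print("CSS:", np.round(css, 3))

# Parameter correlation coefficients from the weighted least-squares covariance matrix
cov = np.linalg.inv(J.T @ (w[:, None] * J))
d = np.sqrt(np.diag(cov))
pcc = cov / np.outer(d, d)
print("PCC:")
print(np.round(pcc, 2))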
NASA Astrophysics Data System (ADS)
Stockton, T. B.; Black, P. K.; Catlett, K. M.; Tauxe, J. D.
2002-05-01
Environmental modeling is an essential component in the evaluation of regulatory compliance of radioactive waste management sites (RWMSs) at the Nevada Test Site in southern Nevada, USA. For those sites that are currently operating, further goals are to support integrated decision analysis for the development of acceptance criteria for future wastes, as well as site maintenance, closure, and monitoring. At these RWMSs, the principal pathways for release of contamination to the environment are upward towards the ground surface rather than downwards towards the deep water table. Biotic processes, such as burrow excavation and plant uptake and turnover, dominate this upward transport. A combined multi-pathway contaminant transport and risk assessment model was constructed using the GoldSim modeling platform. This platform facilitates probabilistic analysis of environmental systems, and is especially well suited for assessments involving radionuclide decay chains. The model employs probabilistic definitions of key parameters governing contaminant transport, with the goals of quantifying cumulative uncertainty in the estimation of performance measures and providing information necessary to perform sensitivity analyses. This modeling differs from previous radiological performance assessments (PAs) in that the modeling parameters are intended to be representative of the current knowledge, and the uncertainty in that knowledge, of parameter values rather than reflective of a conservative assessment approach. While a conservative PA may be sufficient to demonstrate regulatory compliance, a parametrically honest PA can also be used for more general site decision-making. In particular, a parametrically honest probabilistic modeling approach allows both uncertainty and sensitivity analyses to be explicitly coupled to the decision framework using a single set of model realizations. For example, sensitivity analysis provides a guide for analyzing the value of collecting more information by quantifying the relative importance of each input parameter in predicting the model response. However, in these complex, high dimensional eco-system models, represented by the RWMS model, the dynamics of the systems can act in a non-linear manner. Quantitatively assessing the importance of input variables becomes more difficult as the dimensionality, the non-linearities, and the non-monotonicities of the model increase. Methods from data mining such as Multivariate Adaptive Regression Splines (MARS) and the Fourier Amplitude Sensitivity Test (FAST) provide tools that can be used in global sensitivity analysis in these high dimensional, non-linear situations. The enhanced interpretability of model output provided by the quantitative measures estimated by these global sensitivity analysis tools will be demonstrated using the RWMS model.
NASA Astrophysics Data System (ADS)
Lekmine, G.; Auradou, H.; Pessel, M.; Rayner, J. L.
2017-04-01
Cross-borehole ERT imaging was tested to quantify the average velocity and transport parameters of tracer plumes in saturated porous media. Seven tracer tests were performed at different flow rates and monitored by either a vertical or a horizontal dipole-dipole ERT sequence. These sequences were tested for their ability to reconstruct the shape and temporally follow the spread of the tracer plumes through a background regularization procedure. Data sets were inverted with the same inversion parameters, and 2D model sections of resistivity ratios were converted to tracer concentrations. Both array types provided an accurate estimation of the average pore velocity vz. The total mass Mtot recovered was always overestimated by the horizontal dipole-dipole and underestimated by the vertical dipole-dipole. The vertical dipole-dipole, however, was reliable for quantifying the longitudinal dispersivity λz, while the horizontal dipole-dipole returned better estimates of the transverse component λx. λ and Mtot were mainly influenced by the 2D distribution of the cumulated electrical sensitivity and the Shadow Effects induced by the third dimension. The size reduction of the edge of the plume was also related to the inability of the inversion process to reconstruct sharp resistivity contrasts at the interface. Smoothing was counterbalanced by a non-realistic rise of the ERT concentrations around the centre of mass, returning overpredicted total masses. A sensitivity analysis on the cementation factor m and the porosity ϕ demonstrated that a change of 8% in one of these parameters led to non-negligible variations of 30% and 40% in the dispersion coefficients and mass recovery.
Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen
2006-01-01
This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results, or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. Parameters are estimated using nonlinear regression: a weighted least-squares objective function is minimized with respect to the parameter values using a modified Gauss-Newton method or a double-dogleg technique. Sensitivities needed for the method can be read from files produced by process models that can calculate sensitivities, such as MODFLOW-2000, or can be calculated by UCODE_2005 using a more general, but less accurate, forward- or central-difference perturbation technique. Problems resulting from inaccurate sensitivities and solutions related to the perturbation techniques are discussed in the report. Statistics are calculated and printed for use in (1) diagnosing inadequate data and identifying parameters that probably cannot be estimated; (2) evaluating estimated parameter values; and (3) evaluating how well the model represents the simulated processes. Results from UCODE_2005 and codes RESIDUAL_ANALYSIS and RESIDUAL_ANALYSIS_ADV can be used to evaluate how accurately the model represents the processes it simulates. Results from LINEAR_UNCERTAINTY can be used to quantify the uncertainty of model simulated values if the model is sufficiently linear. Results from MODEL_LINEARITY and MODEL_LINEARITY_ADV can be used to evaluate model linearity and, thereby, the accuracy of the LINEAR_UNCERTAINTY results. UCODE_2005 can also be used to calculate nonlinear confidence and predictions intervals, which quantify the uncertainty of model simulated values when the model is not linear. 
CORFAC_PLUS can be used to produce factors that allow intervals to account for model intrinsic nonlinearity and small-scale variations in system characteristics that are not explicitly accounted for in the model or the observation weighting. The six post-processing programs are independent of UCODE_2005 and can use the results of other programs that produce the required data-exchange files. UCODE_2005 and the other six codes are intended for use on any computer operating system. The programs con
NASA Astrophysics Data System (ADS)
Bell, A.; Hioki, S.; Wang, Y.; Yang, P.; Di Girolamo, L.
2016-12-01
Previous studies found that including ice particle surface roughness in forward light scattering calculations significantly reduces the differences between observed and simulated polarimetric and radiometric observations. While it is suggested that some degree of roughness is desirable, the appropriate degree of surface roughness to assume in operational cloud property retrievals, and the sensitivity of retrieval products to this assumption, remain uncertain. In an effort to resolve this ambiguity, we will present a sensitivity analysis of space-borne multi-angle observations of reflectivity to varying degrees of surface roughness. This process is twofold. First, sampling information and statistics of Multi-angle Imaging SpectroRadiometer (MISR) sensor data aboard the Terra platform will be used to define the most common viewing geometries. Using these defined geometries, reflectivity will be simulated for multiple degrees of roughness using results from adding-doubling radiative transfer simulations. The sensitivity of simulated reflectivity to surface roughness can then be quantified, thus yielding a more robust retrieval system. Secondly, the sensitivity of the inverse problem will be analyzed. Spherical albedo values will be computed by feeding blocks of MISR data comprising cloudy pixels over ocean into the retrieval system, with assumed values of surface roughness. The sensitivity of spherical albedo to the inclusion of surface roughness can then be quantified, and the accuracy of retrieved parameters can be determined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Ben; Qian, Yun; Berg, Larry K.
We evaluate the sensitivity of simulated turbine-height winds to 26 parameters applied in a planetary boundary layer (PBL) scheme and a surface layer scheme of the Weather Research and Forecasting (WRF) model over an area of complex terrain during the Columbia Basin Wind Energy Study. An efficient sampling algorithm and a generalized linear model are used to explore the multiple-dimensional parameter space and quantify the parametric sensitivity of modeled turbine-height winds. The results indicate that most of the variability in the ensemble simulations is contributed by parameters related to the dissipation of the turbulence kinetic energy (TKE), Prandtl number, turbulence length scales, surface roughness, and the von Kármán constant. The relative contributions of individual parameters are found to be dependent on both the terrain slope and atmospheric stability. The parameter associated with the TKE dissipation rate is found to be the most important one, and a larger dissipation rate can produce larger hub-height winds. A larger Prandtl number results in weaker nighttime winds. Increasing surface roughness reduces the frequencies of both extremely weak and strong winds, implying a reduction in the variability of the wind speed. All of the above parameters can significantly affect the vertical profiles of wind speed, the altitude of the low-level jet and the magnitude of the wind shear strength. The wind direction is found to be modulated by the same subset of influential parameters.
Modeling the atmospheric chemistry of TICs
NASA Astrophysics Data System (ADS)
Henley, Michael V.; Burns, Douglas S.; Chynwat, Veeradej; Moore, William; Plitz, Angela; Rottmann, Shawn; Hearn, John
2009-05-01
An atmospheric chemistry model that describes the behavior and disposition of environmentally hazardous compounds discharged into the atmosphere was coupled with the transport and diffusion model, SCIPUFF. The atmospheric chemistry model was developed by reducing a detailed atmospheric chemistry mechanism to a simple empirical effective degradation rate term (keff) that is a function of important meteorological parameters such as solar flux, temperature, and cloud cover. Empirically derived keff functions that describe the degradation of target toxic industrial chemicals (TICs) were derived by statistically analyzing data generated from the detailed chemistry mechanism run over a wide range of (typical) atmospheric conditions. To assess and identify areas to improve the developed atmospheric chemistry model, sensitivity and uncertainty analyses were performed to (1) quantify the sensitivity of the model output (TIC concentrations) with respect to changes in the input parameters and (2) improve, where necessary, the quality of the input data based on sensitivity results. The model predictions were evaluated against experimental data. Chamber data were used to remove the complexities of dispersion in the atmosphere.
Development of MRM-based assays for the absolute quantitation of plasma proteins.
Kuzyk, Michael A; Parker, Carol E; Domanski, Dominik; Borchers, Christoph H
2013-01-01
Multiple reaction monitoring (MRM), sometimes called selected reaction monitoring (SRM), is a directed tandem mass spectrometric technique performed on triple quadrupole mass spectrometers. MRM assays can be used to sensitively and specifically quantify proteins based on peptides that are specific to the target protein. Stable-isotope-labeled standard peptide analogues (SIS peptides) of target peptides are added to enzymatic digests of samples and quantified along with the native peptides during MRM analysis. Monitoring of the intact peptide and a collision-induced fragment of this peptide (an ion pair) can be used to provide information on the absolute concentration of the peptide in the sample and, by inference, the concentration of the intact protein. This technique provides high specificity by selecting for biophysical parameters that are unique to the target peptides: (1) the molecular weight of the peptide, (2) the generation of a specific fragment from the peptide, and (3) the HPLC retention time during LC/MRM-MS analysis. MRM is a highly sensitive technique that has been shown to be capable of detecting attomole levels of target peptides in complex samples such as tryptic digests of human plasma. This chapter provides a detailed description of how to develop and use an MRM protein assay. It includes sections on the critical "first step" of selecting the target peptides, as well as optimization of MRM acquisition parameters for maximum sensitivity of the ion pairs that will be used in the final method, and characterization of the final MRM assay.
Issues in Turbulence Simulation for Experimental Comparison
NASA Astrophysics Data System (ADS)
Ross, D. W.; Bravenec, R. V.; Dorland, W.; Beer, M. A.; Hammett, G. W.
1999-11-01
Studies of the sensitivity of fluctuation spectra and transport fluxes to local plasma parameters and gradients (D. W. Ross et al., Bull. Am. Phys. Soc. 43, 1760 (1998); D. W. Ross et al., Transport Task Force Workshop, Portland, Oregon, 1999) are continued using nonlinear gyrofluid simulation (M. A. Beer et al., Phys. Plasmas 2, 2687 (1995)) on the T3E at NERSC. Parameters that are characteristic of discharges in DIII-D and Alcator C-Mod are employed. In the previous work, the gradients of Z_eff, n_e, and T_e were varied within the experimental uncertainty. Amplitudes and fluxes are quite sensitive to dZ_eff/dr. Here, these studies are continued and extended to variation of other parameters, including T_e/T_i and dT_i/dr, which are important for ion temperature gradient modes. The role of electric field shear is discussed. Implications for comparison with experiment, including transient perturbations, are discussed, with the goal of quantifying the accuracy of profile data needed to verify the turbulence theory.
Sánchez-Canales, M; López-Benito, A; Acuña, V; Ziv, G; Hamel, P; Chaplin-Kramer, R; Elorza, F J
2015-01-01
Climate change and land-use change are major factors influencing sediment dynamics. Models can be used to better understand sediment production and retention by the landscape, although their interpretation is limited by large uncertainties, including model parameter uncertainties. The uncertainties related to parameter selection may be significant and need to be quantified to improve model interpretation for watershed management. In this study, we performed a sensitivity analysis of the InVEST (Integrated Valuation of Environmental Services and Tradeoffs) sediment retention model in order to determine which model parameters had the greatest influence on model outputs, and therefore require special attention during calibration. The estimation of the sediment loads in this model is based on the Universal Soil Loss Equation (USLE). The sensitivity analysis was performed in the Llobregat basin (NE Iberian Peninsula) for exported and retained sediment, which support two different ecosystem service benefits (avoided reservoir sedimentation and improved water quality). Our analysis identified the model parameters related to the natural environment as the most influential for sediment export and retention. Accordingly, small changes in variables such as the magnitude and frequency of extreme rainfall events could cause major changes in sediment dynamics, demonstrating the sensitivity of these dynamics to climate change in Mediterranean basins. Parameters directly related to human activities and decisions (such as cover management factor, C) were also influential, especially for sediment exported. The importance of these human-related parameters in the sediment export process suggests that mitigation measures have the potential to at least partially ameliorate climate-change driven changes in sediment exportation. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Gao, C.; Lekic, V.
2016-12-01
When constraining the structure of the Earth's continental lithosphere, multiple seismic observables are often combined due to their complementary sensitivities. The transdimensional Bayesian (TB) approach in seismic inversion allows model parameter uncertainties and trade-offs to be quantified with few assumptions. TB sampling yields an adaptive parameterization that enables simultaneous inversion for different model parameters (Vp, Vs, density, radial anisotropy), without the need for strong prior information or regularization. We use a reversible jump Markov chain Monte Carlo (rjMcMC) algorithm to incorporate different seismic observables - surface wave dispersion (SWD), Rayleigh wave ellipticity (ZH ratio), and receiver functions - into the inversion for the profiles of shear velocity (Vs), compressional velocity (Vp), density (ρ), and radial anisotropy (ξ) beneath a seismic station. By analyzing all three data types individually and together, we show that TB sampling can eliminate the need for a fixed parameterization based on prior information, and reduce trade-offs in model estimates. We then explore the effect of different types of misfit functions for receiver function inversion, which is a highly non-unique problem. We compare the synthetic inversion results obtained using the L2 norm, a cross-correlation-type and an integral-type misfit function in terms of their convergence rates and retrieved seismic structures. In inversions in which only one type of model parameter is inverted (Vs in the case of SWD), assumed scaling relationships are often applied to account for sensitivity to other model parameters (e.g. Vp, ρ, ξ). Here we show that under a TB framework, we can eliminate scaling assumptions, while simultaneously constraining multiple model parameters to varying degrees. Furthermore, we compare the performance of TB inversion when different types of model parameters either share the same or use independent parameterizations. We show that different parameterizations can lead to differences in retrieved model parameters, consistent with limited data constraints. We then quantitatively examine the model parameter trade-offs and find that trade-offs between Vp and radial anisotropy might limit our ability to constrain shallow-layer radial anisotropy using current seismic observables.
Flassig, Robert J; Migal, Iryna; der Zalm, Esther van; Rihko-Struckmann, Liisa; Sundmacher, Kai
2015-01-16
Understanding the dynamics of biological processes can substantially be supported by computational models in the form of nonlinear ordinary differential equations (ODE). Typically, this model class contains many unknown parameters, which are estimated from inadequate and noisy data. Depending on the ODE structure, predictions based on unmeasured states and associated parameters are highly uncertain, even undetermined. For given data, profile likelihood analysis has been proven to be one of the most practically relevant approaches for analyzing the identifiability of an ODE structure, and thus model predictions. In case of highly uncertain or non-identifiable parameters, rational experimental design based on various approaches has shown to significantly reduce parameter uncertainties with minimal amount of effort. In this work we illustrate how to use profile likelihood samples for quantifying the individual contribution of parameter uncertainty to prediction uncertainty. For the uncertainty quantification we introduce the profile likelihood sensitivity (PLS) index. Additionally, for the case of several uncertain parameters, we introduce the PLS entropy to quantify individual contributions to the overall prediction uncertainty. We show how to use these two criteria as an experimental design objective for selecting new, informative readouts in combination with intervention site identification. The characteristics of the proposed multi-criterion objective are illustrated with an in silico example. We further illustrate how an existing practically non-identifiable model for the chlorophyll fluorescence induction in a photosynthetic organism, D. salina, can be rendered identifiable by additional experiments with new readouts. Having data and profile likelihood samples at hand, the here proposed uncertainty quantification based on prediction samples from the profile likelihood provides a simple way for determining individual contributions of parameter uncertainties to uncertainties in model predictions. The uncertainty quantification of specific model predictions allows identifying regions, where model predictions have to be considered with care. Such uncertain regions can be used for a rational experimental design to render initially highly uncertain model predictions into certainty. Finally, our uncertainty quantification directly accounts for parameter interdependencies and parameter sensitivities of the specific prediction.
Failure analysis of parameter-induced simulation crashes in climate models
NASA Astrophysics Data System (ADS)
Lucas, D. D.; Klein, R.; Tannahill, J.; Ivanova, D.; Brandon, S.; Domyancic, D.; Zhang, Y.
2013-01-01
Simulations using IPCC-class climate models are subject to fail or crash for a variety of reasons. Quantitative analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation crashes within the Parallel Ocean Program (POP2) component of the Community Climate System Model (CCSM4). About 8.5% of our CCSM4 simulations failed for numerical reasons at combinations of POP2 parameter values. We apply support vector machine (SVM) classification from machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. A committee of SVM classifiers readily predicts model failures in an independent validation ensemble, as assessed by the area under the receiver operating characteristic (ROC) curve metric (AUC > 0.96). The causes of the simulation failures are determined through a global sensitivity analysis. Combinations of 8 parameters related to ocean mixing and viscosity from three different POP2 parameterizations are the major sources of the failures. This information can be used to improve POP2 and CCSM4 by incorporating correlations across the relevant parameters. Our method can also be used to quantify, predict, and understand simulation crashes in other complex geoscientific models.
Failure analysis of parameter-induced simulation crashes in climate models
NASA Astrophysics Data System (ADS)
Lucas, D. D.; Klein, R.; Tannahill, J.; Ivanova, D.; Brandon, S.; Domyancic, D.; Zhang, Y.
2013-08-01
Simulations using IPCC (Intergovernmental Panel on Climate Change)-class climate models are subject to fail or crash for a variety of reasons. Quantitative analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation crashes within the Parallel Ocean Program (POP2) component of the Community Climate System Model (CCSM4). About 8.5% of our CCSM4 simulations failed for numerical reasons at combinations of POP2 parameter values. We applied support vector machine (SVM) classification from machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. A committee of SVM classifiers readily predicted model failures in an independent validation ensemble, as assessed by the area under the receiver operating characteristic (ROC) curve metric (AUC > 0.96). The causes of the simulation failures were determined through a global sensitivity analysis. Combinations of 8 parameters related to ocean mixing and viscosity from three different POP2 parameterizations were the major sources of the failures. This information can be used to improve POP2 and CCSM4 by incorporating correlations across the relevant parameters. Our method can also be used to quantify, predict, and understand simulation crashes in other complex geoscientific models.
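A minimal sketch of the SVM-classification idea described in the two versions of this abstract is given below. The parameter samples, the toy failure rule, and the use of scikit-learn are assumptions standing in for the CCSM4/POP2 ensemble; only the workflow (fit a classifier on parameter values labeled by crash/no-crash, then score it with ROC AUC on held-out runs) mirrors the study.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n, d = 2_000, 18                                   # ensemble members, ocean-model parameters
X = rng.uniform(0.0, 1.0, (n, d))                  # hypothetical normalized parameter values
# Toy failure rule: crashes become likely for joint extremes of two mixing/viscosity-like parameters
p_fail = 1.0 / (1.0 + np.exp(-(8.0 * X[:, 0] * X[:, 1] - 4.0)))
y = (rng.uniform(size=n) < p_fail).astype(int)     # 1 = simulation crashed

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"validation AUC (toy example): {auc:.2f}")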
NASA Astrophysics Data System (ADS)
Akinci, A.; Pace, B.
2017-12-01
In this study, we discuss the variability of the seismic hazard in terms of peak ground acceleration (PGA) at the 475-year return period in the Southern Apennines of Italy. The uncertainty and parametric sensitivity are presented to quantify the impact of several fault parameters on ground motion predictions for 10% exceedance in 50-year hazard. A time-independent PSHA model is constructed based on the long-term recurrence behavior of seismogenic faults, adopting the characteristic earthquake model for those sources capable of rupturing the entire fault segment with a single maximum magnitude. The fault-based source model uses the dimensions and slip rates of mapped faults to develop magnitude-frequency estimates for characteristic earthquakes. Variability of each selected fault parameter is described with a truncated normal random variable distribution defined by a standard deviation about a mean value. A Monte Carlo approach, based on random balanced sampling of a logic tree, is used in order to capture the uncertainty in the seismic hazard calculations. For generating both uncertainty and sensitivity maps, we perform 200 simulations for each of the fault parameters. The results are synthesized both in the frequency-magnitude distributions of the modeled faults and in different maps: the overall uncertainty maps provide a confidence interval for the PGA values, and the parameter uncertainty maps determine the sensitivity of the hazard assessment to the variability of every logic tree branch. These logic tree branches, analyzed through the Monte Carlo approach, are maximum magnitude, fault length, fault width, fault dip and slip rate. The overall variability of these parameters is determined by varying them simultaneously in the hazard calculations, while the sensitivity to each parameter is determined by varying that fault parameter while fixing the others. However, in this study we do not investigate the sensitivity of the mean hazard results to the choice of different GMPEs. The distribution of possible seismic hazard results is illustrated by a 95% confidence factor map, which indicates the dispersion about the mean value, and a coefficient of variation map, which shows the percent variability. The results of our study clearly illustrate the influence of active fault parameters on probabilistic seismic hazard maps.
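The one-parameter-at-a-time Monte Carlo sampling of a truncated normal branch can be sketched as follows. The fault dimensions, slip-rate statistics, and the moment-based recurrence proxy are illustrative assumptions, not values from the Southern Apennines source model; only the sampling pattern (200 draws of one truncated-normal parameter with the others fixed) follows the text.

import numpy as np
from scipy.stats import truncnorm

n_sim = 200                                        # simulations per fault parameter, as in the study

def sample_truncated(mean, sd, lo, hi, size, seed=0):
    a, b = (lo - mean) / sd, (hi - mean) / sd      # standardized truncation bounds
    return truncnorm.rvs(a, b, loc=mean, scale=sd, size=size, random_state=seed)

# Hypothetical fault: vary the slip rate only, hold the other branches at their mean values
slip_rate = sample_truncated(mean=0.8, sd=0.2, lo=0.3, hi=1.3, size=n_sim)   # mm/yr
length_km, width_km, m_max = 30.0, 12.0, 6.6
mu = 3.0e10                                        # shear modulus [Pa]
moment_rate = mu * (length_km * 1e3) * (width_km * 1e3) * (slip_rate * 1e-3)  # [N m / yr]
m0_char = 10 ** (1.5 * m_max + 9.05)               # Hanks-Kanamori characteristic moment [N m]
recurrence_yr = m0_char / moment_rate              # illustrative recurrence proxy
print("characteristic recurrence interval: "
      f"{recurrence_yr.mean():.0f} +/- {recurrence_yr.std():.0f} yr")

Repeating the same loop for each branch (length, width, dip, maximum magnitude) while fixing the others, and once more with all branches varied together, reproduces the sensitivity-versus-overall-variability comparison described above.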
Parameter sensitivity analysis of a 1-D cold region lake model for land-surface schemes
NASA Astrophysics Data System (ADS)
Guerrero, José-Luis; Pernica, Patricia; Wheater, Howard; Mackay, Murray; Spence, Chris
2017-12-01
Lakes might be sentinels of climate change, but the uncertainty in their main feedback to the atmosphere - heat-exchange fluxes - is often not considered within climate models. Additionally, these fluxes are seldom measured, hindering critical evaluation of model output. Analysis of the Canadian Small Lake Model (CSLM), a one-dimensional integral lake model, was performed to assess its ability to reproduce diurnal and seasonal variations in heat fluxes and the sensitivity of simulated fluxes to changes in model parameters, i.e., turbulent transport parameters and the light extinction coefficient (Kd). A C++ open-source software package, Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), was used to perform sensitivity analysis (SA) and identify the parameters that dominate model behavior. The generalized likelihood uncertainty estimation (GLUE) method was applied to quantify the fluxes' uncertainty, comparing daily-averaged eddy-covariance observations to the output of CSLM. Seven qualitative and two quantitative SA methods were tested, and the posterior likelihoods of the modeled parameters, obtained from the GLUE analysis, were used to determine the dominant parameters and the uncertainty in the modeled fluxes. Despite the ubiquity of the equifinality issue - different parameter-value combinations yielding equivalent results - the answer to the question of which parameter dominates was unequivocal: Kd, a measure of how much light penetrates the lake, dominates sensible and latent heat fluxes, and the uncertainty in their estimates is strongly related to the accuracy with which Kd is determined. This is important since accurate and continuous measurements of Kd could reduce modeling uncertainty.
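A minimal, self-contained sketch of a GLUE-style analysis is given below, using a stand-in surrogate in place of the CSLM and a Nash-Sutcliffe informal likelihood; the model, thresholds, and parameter ranges are assumptions for illustration only.

```python
# Sketch of a GLUE-style analysis: Monte Carlo parameter sampling, an informal
# likelihood based on model-observation fit, and likelihood-weighted bounds.
import numpy as np

def lake_model(kd, c_turb, t):
    # Hypothetical surrogate for a daily heat-flux simulation; not the CSLM.
    return 100.0 * np.exp(-0.5 * kd) + c_turb * np.sin(2 * np.pi * t / 365.0)

rng = np.random.default_rng(0)
t = np.arange(365)
obs = lake_model(0.6, 20.0, t) + rng.normal(0, 5, t.size)   # synthetic "observations"

n_samples = 5000
kd = rng.uniform(0.1, 2.0, n_samples)         # light extinction coefficient
c_turb = rng.uniform(5.0, 40.0, n_samples)    # turbulent-transport parameter

sims = np.array([lake_model(k, c, t) for k, c in zip(kd, c_turb)])
nse = 1.0 - np.sum((sims - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)

behavioral = nse > 0.5                         # informal acceptance threshold
weights = nse[behavioral] / nse[behavioral].sum()

# Likelihood-weighted 90% prediction bounds at each time step
beh = sims[behavioral]
lower, upper = np.empty(t.size), np.empty(t.size)
for j in range(t.size):
    order = np.argsort(beh[:, j])
    cw = np.cumsum(weights[order])
    lower[j] = np.interp(0.05, cw, beh[order, j])
    upper[j] = np.interp(0.95, cw, beh[order, j])

print("behavioral parameter sets:", int(behavioral.sum()))
print("mean width of 90% bounds:", float((upper - lower).mean()))
```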
Vulnerability of manned spacecraft to crew loss from orbital debris penetration
NASA Technical Reports Server (NTRS)
Williamsen, J. E.
1994-01-01
Orbital debris growth threatens the survival of spacecraft systems from impact-induced failures. Whereas the probability of debris impact and spacecraft penetration may currently be calculated, another parameter of great interest to safety engineers is the probability that debris penetration will cause actual spacecraft or crew loss. Quantifying the likelihood of crew loss following a penetration allows spacecraft designers to identify those design features and crew operational protocols that offer the highest improvement in crew safety for available resources. Within this study, a manned spacecraft crew survivability (MSCSurv) computer model is developed that quantifies the conditional probability of losing one or more crew members, P(sub loss/pen), following the remote likelihood of an orbital debris penetration into an eight module space station. Contributions to P(sub loss/pen) are quantified from three significant penetration-induced hazards: pressure wall rupture (explosive decompression), fragment-induced injury, and 'slow' depressurization. Sensitivity analyses are performed using alternate assumptions for hazard-generating functions, crew vulnerability thresholds, and selected spacecraft design and crew operations parameters. These results are then used to recommend modifications to the spacecraft design and expected crew operations that quantitatively increase crew safety from orbital debris impacts.
Assessment of parametric uncertainty for groundwater reactive transport modeling
Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun
2014-01-01
The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with the Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, the predictive performance of the formal generalized likelihood function is superior to that of the least squares regression and Bayesian methods with the Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive Metropolis (DREAM(ZS)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.
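The sensitivity of Bayesian parameter estimates to the assumed residual model can be illustrated with a toy example: the sketch below fits a one-parameter model to heavy-tailed synthetic data using a basic random-walk Metropolis sampler under a Gaussian and a Laplace likelihood. This is only a schematic stand-in; the study itself uses the formal generalized likelihood of Schoups and Vrugt (2010) and the DREAM(ZS) sampler.

```python
# Sketch: random-walk Metropolis for a one-parameter decay model under two
# residual assumptions (Gaussian vs. Laplace). Illustrative only; not DREAM(ZS)
# and not the generalized likelihood used in the study.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 50)
true_k = 0.3
y_obs = np.exp(-true_k * x) + 0.05 * rng.standard_t(df=3, size=x.size)  # heavy-tailed noise

def log_like(k, kind):
    r = y_obs - np.exp(-k * x)                  # residuals
    if kind == "gaussian":
        return -0.5 * np.sum((r / 0.05) ** 2)
    return -np.sum(np.abs(r) / 0.05)            # Laplace (double-exponential)

def metropolis(kind, n_iter=20000, step=0.02):
    k, ll = 0.5, log_like(0.5, kind)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        k_new = k + rng.normal(0, step)
        ll_new = log_like(k_new, kind) if k_new > 0 else -np.inf
        if np.log(rng.uniform()) < ll_new - ll:  # Metropolis acceptance
            k, ll = k_new, ll_new
        chain[i] = k
    return chain[n_iter // 2:]                   # discard burn-in

for kind in ("gaussian", "laplace"):
    c = metropolis(kind)
    print(kind, "posterior mean:", round(c.mean(), 4), "std:", round(c.std(), 4))
```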
Finding the bottom and using it
Sandoval, Ruben M.; Wang, Exing; Molitoris, Bruce A.
2014-01-01
Maximizing 2-photon parameters used in acquiring images for quantitative intravital microscopy, especially when high sensitivity is required, remains an open area of investigation. Here we present data on correctly setting the black level of the photomultiplier tube amplifier by adjusting the offset to allow for accurate quantitation of low intensity processes. When the black level is set too high, some low intensity pixel values become zero and a nonlinear degradation in sensitivity occurs, rendering otherwise quantifiable low intensity values virtually undetectable. Initial studies using a series of increasing offsets for a sequence of concentrations of fluorescent albumin in vitro revealed a loss of sensitivity for higher offsets at lower albumin concentrations. A similar decrease in sensitivity, and therefore the ability to correctly determine the glomerular permeability coefficient of albumin, occurred in vivo at higher offsets. Finding the offset that yields accurate and linear data is essential for quantitative analysis when high sensitivity is required. PMID:25313346
Multiscale contact mechanics model for RF-MEMS switches with quantified uncertainties
NASA Astrophysics Data System (ADS)
Kim, Hojin; Huda Shaik, Nurul; Xu, Xin; Raman, Arvind; Strachan, Alejandro
2013-12-01
We introduce a multiscale model for contact mechanics between rough surfaces and apply it to characterize the force-displacement relationship for a metal-dielectric contact relevant for radio frequency micro-electromechanical system (MEMS) switches. We propose a mesoscale model to describe the history-dependent force-displacement relationships in terms of the surface roughness, the long-range attractive interaction between the two surfaces, and the repulsive interaction between contacting asperities (including elastic and plastic deformation). The inputs to this model are the experimentally determined surface topography and the Hamaker constant as well as the mechanical response of individual asperities obtained from density functional theory calculations and large-scale molecular dynamics simulations. The model captures non-trivial processes including the hysteresis during loading and unloading due to plastic deformation, yet it is computationally efficient enough to enable extensive uncertainty quantification and sensitivity analysis. We quantify how uncertainties and variability in the input parameters, both experimental and theoretical, affect the force-displacement curves during approach and retraction. In addition, a sensitivity analysis quantifies the relative importance of the various input quantities for the prediction of force-displacement during contact closing and opening. The resulting force-displacement curves with quantified uncertainties can be directly used in device-level simulations of micro-switches and enable the incorporation of atomic and mesoscale phenomena in predictive device-scale simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harmon, S; Jeraj, R; Galavis, P
Purpose: Sensitivity of PET-derived texture features to reconstruction methods has been reported for features extracted from axial planes; however, studies often utilize three-dimensional techniques. This work aims to quantify the impact of multi-plane (3D) vs. single-plane (2D) feature extraction on radiomics-based analysis, including sensitivity to reconstruction parameters and potential loss of spatial information. Methods: Twenty-three patients with solid tumors underwent [18F]FDG PET/CT scans under identical protocols. PET data were reconstructed using five sets of reconstruction parameters. Tumors were segmented using an automatic, in-house algorithm robust to reconstruction variations. Fifty texture features were extracted using two methods: 2D patches along axial planes and 3D patches. For each method, sensitivity of features to reconstruction parameters was calculated as percent difference relative to the average value across reconstructions. Correlations between feature values were compared when using 2D and 3D extraction. Results: 21/50 features showed significantly different sensitivity to reconstruction parameters when extracted in 2D vs 3D (Wilcoxon α<0.05), assessed by overall range of variation, Range_var (%). Eleven showed greater sensitivity to reconstruction in 2D extraction, primarily first-order and co-occurrence features (average Range_var increase 83%). The remaining ten showed higher variation in 3D extraction (average Range_var increase 27%), mainly co-occurrence and grey-level run-length features. Correlation of feature value extracted in 2D and feature value extracted in 3D was poor (R<0.5) in 12/50 features, including eight co-occurrence features. Feature-to-feature correlations in 2D were marginally higher than 3D, ∣R∣>0.8 in 16% and 13% of all feature combinations, respectively. Larger sensitivity to reconstruction parameters was seen for inter-feature correlation in 2D (σ=6%) than 3D (σ<1%) extraction. Conclusion: Sensitivity and correlation of various texture features were shown to significantly differ between 2D and 3D extraction. Additionally, inter-feature correlations were more sensitive to reconstruction variation using single-plane extraction. This work highlights a need for standardized feature extraction/selection techniques in radiomics.
Calculating second derivatives of population growth rates for ecology and evolution
Shyu, Esther; Caswell, Hal
2014-01-01
1. Second derivatives of the population growth rate measure the curvature of its response to demographic, physiological or environmental parameters. The second derivatives quantify the response of sensitivity results to perturbations, provide a classification of types of selection and provide one way to calculate sensitivities of the stochastic growth rate. 2. Using matrix calculus, we derive the second derivatives of three population growth rate measures: the discrete-time growth rate λ, the continuous-time growth rate r = log λ and the net reproductive rate R0, which measures per-generation growth. 3. We present a suite of formulae for the second derivatives of each growth rate and show how to compute these derivatives with respect to projection matrix entries and to lower-level parameters affecting those matrix entries. 4. We also illustrate several ecological and evolutionary applications for these second derivative calculations with a case study for the tropical herb Calathea ovandensis. PMID:25793101
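For readers who want a quick numerical check of such derivatives, the sketch below approximates the first and second derivatives of the dominant eigenvalue λ of a projection matrix by central differences; the matrix entries are hypothetical, and the paper itself derives these quantities analytically with matrix calculus.

```python
# Sketch: numerical first and second derivatives of the dominant eigenvalue
# (lambda) of a projection matrix with respect to one matrix entry, using
# central differences. The matrix values are illustrative only.
import numpy as np

A = np.array([[0.0, 1.5, 2.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.6, 0.8]])            # hypothetical stage-structured matrix

def growth_rate(M):
    # Dominant (Perron) eigenvalue; real for a non-negative primitive matrix.
    return np.max(np.real(np.linalg.eigvals(M)))

def d_lambda(M, i, j, h=1e-4):
    Mp, Mm = M.copy(), M.copy()
    Mp[i, j] += h
    Mm[i, j] -= h
    first = (growth_rate(Mp) - growth_rate(Mm)) / (2 * h)
    second = (growth_rate(Mp) - 2 * growth_rate(M) + growth_rate(Mm)) / h**2
    return first, second

lam = growth_rate(A)
d1, d2 = d_lambda(A, 2, 2)                 # sensitivity and curvature w.r.t. a[3,3]
print("lambda:", lam)
print("d lambda / d a33:", d1, " d^2 lambda / d a33^2:", d2)
```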
NASA Astrophysics Data System (ADS)
Ebrahimian, Hamed; Astroza, Rodrigo; Conte, Joel P.; de Callafon, Raymond A.
2017-02-01
This paper presents a framework for structural health monitoring (SHM) and damage identification of civil structures. This framework integrates advanced mechanics-based nonlinear finite element (FE) modeling and analysis techniques with a batch Bayesian estimation approach to estimate time-invariant model parameters used in the FE model of the structure of interest. The framework uses input excitation and dynamic response of the structure and updates a nonlinear FE model of the structure to minimize the discrepancies between predicted and measured response time histories. The updated FE model can then be interrogated to detect, localize, classify, and quantify the state of damage and predict the remaining useful life of the structure. As opposed to recursive estimation methods, in the batch Bayesian estimation approach, the entire time history of the input excitation and output response of the structure are used as a batch of data to estimate the FE model parameters through a number of iterations. In the case of non-informative prior, the batch Bayesian method leads to an extended maximum likelihood (ML) estimation method to estimate jointly time-invariant model parameters and the measurement noise amplitude. The extended ML estimation problem is solved efficiently using a gradient-based interior-point optimization algorithm. Gradient-based optimization algorithms require the FE response sensitivities with respect to the model parameters to be identified. The FE response sensitivities are computed accurately and efficiently using the direct differentiation method (DDM). The estimation uncertainties are evaluated based on the Cramer-Rao lower bound (CRLB) theorem by computing the exact Fisher Information matrix using the FE response sensitivities with respect to the model parameters. The accuracy of the proposed uncertainty quantification approach is verified using a sampling approach based on the unscented transformation. Two validation studies, based on realistic structural FE models of a bridge pier and a moment resisting steel frame, are performed to validate the performance and accuracy of the presented nonlinear FE model updating approach and demonstrate its application to SHM. These validation studies show the excellent performance of the proposed framework for SHM and damage identification even in the presence of high measurement noise and/or way-out initial estimates of the model parameters. Furthermore, the detrimental effects of the input measurement noise on the performance of the proposed framework are illustrated and quantified through one of the validation studies.
Huang, Mugen; Luo, Jiaowan; Hu, Linchao; Zheng, Bo; Yu, Jianshe
2017-12-14
To suppress wild population of Aedes mosquitoes, the primary transmission vector of life-threatening diseases such as dengue, malaria, and Zika, an innovative strategy is to release male mosquitoes carrying the bacterium Wolbachia into natural areas to drive female sterility by cytoplasmic incompatibility. We develop a model of delay differential equations, incorporating the strong density restriction in the larval stage, to assess the delicate impact of life table parameters on suppression efficiency. Through mathematical analysis, we find the sufficient and necessary condition for global stability of the complete suppression state. This condition, combined with the experimental data for Aedes albopictus population in Guangzhou, helps us predict a large range of releasing intensities for suppression success. In particular, we find that if the number of released infected males is no less than four times the number of mosquitoes in wild areas, then the mosquito density in the peak season can be reduced by 95%. We introduce an index to quantify the dependence of suppression efficiency on parameters. The invariance of some quantitative properties of the index values under various perturbations of the same parameter justifies the applicability of this index, and the robustness of our modeling approach. The index yields a ranking of the sensitivity of all parameters, among which the adult mortality has the highest sensitivity and is considerably more sensitive than the natural larvae mortality. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Podgornova, O.; Leaney, S.; Liang, L.
2018-07-01
Extracting medium properties from seismic data faces some limitations due to the finite frequency content of the data and restricted spatial positions of the sources and receivers. Some distributions of the medium properties have little or no impact on the data. If these properties are used as the inversion parameters, then the inverse problem becomes overparametrized, leading to ambiguous results. We present an analysis of multiparameter resolution for the linearized inverse problem in the framework of elastic full-waveform inversion. We show that the spatial and multiparameter sensitivities are intertwined and non-sensitive properties are spatial distributions of some non-trivial combinations of the conventional elastic parameters. The analysis accounts for the Hessian information and frequency content of the data; it is semi-analytical (in some scenarios analytical), easy to interpret and enhances results of the widely used radiation pattern analysis. Single-type scattering is shown to have limited sensitivity, even for full-aperture data. Finite-frequency data lose multiparameter sensitivity at smooth and fine spatial scales. Also, we establish ways to quantify a spatial-multiparameter coupling and demonstrate that the theoretical predictions agree well with the numerical results.
Optical guidance vidicon test program
NASA Technical Reports Server (NTRS)
Eiseman, A. R.; Stanton, R. H.; Voge, C. C.
1976-01-01
A laboratory and field test program was conducted to quantify the optical navigation parameters of the Mariner vidicons. A scene simulator and a camera were designed and built for vidicon tests under a wide variety of conditions. Laboratory tests characterized error sources important to the optical navigation process and field tests verified star sensitivity and characterized comet optical guidance parameters. The equipment, tests and data reduction techniques used are described. Key test results are listed. A substantial increase in the understanding of the use of selenium vidicons as detectors for spacecraft optical guidance was achieved, indicating a reduction in residual offset errors by a factor of two to four to the single pixel level.
Integrated cosmological probes: concordance quantified
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nicola, Andrina; Amara, Adam; Refregier, Alexandre, E-mail: andrina.nicola@phys.ethz.ch, E-mail: adam.amara@phys.ethz.ch, E-mail: alexandre.refregier@phys.ethz.ch
2017-10-01
Assessing the consistency of parameter constraints derived from different cosmological probes is an important way to test the validity of the underlying cosmological model. In an earlier work [1], we computed constraints on cosmological parameters for ΛCDM from an integrated analysis of CMB temperature anisotropies and CMB lensing from Planck, galaxy clustering and weak lensing from SDSS, weak lensing from DES SV as well as Type Ia supernovae and Hubble parameter measurements. In this work, we extend this analysis and quantify the concordance between the derived constraints and those derived by the Planck Collaboration as well as WMAP9, SPT and ACT. As a measure for consistency, we use the Surprise statistic [2], which is based on the relative entropy. In the framework of a flat ΛCDM cosmological model, we find all data sets to be consistent with one another at a level of less than 1σ. We highlight that the relative entropy is sensitive to inconsistencies in the models that are used in different parts of the analysis. In particular, inconsistent assumptions for the neutrino mass break its invariance on the parameter choice. When consistent model assumptions are used, the data sets considered in this work all agree with each other and ΛCDM, without evidence for tensions.
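The relative entropy underlying the Surprise statistic reduces, for Gaussian parameter constraints, to a closed-form expression; the sketch below evaluates it for two hypothetical three-parameter constraints. The means and covariances are illustrative, not the actual Planck, WMAP9, SPT, or ACT posteriors, and the full Surprise statistic additionally compares the observed relative entropy with its expected value.

```python
# Sketch: relative entropy (KL divergence) between two Gaussian approximations
# of cosmological parameter posteriors. All numbers are illustrative.
import numpy as np

def kl_gaussian(mu1, cov1, mu2, cov2):
    """D_KL( N(mu1, cov1) || N(mu2, cov2) ) in nats."""
    k = len(mu1)
    cov2_inv = np.linalg.inv(cov2)
    dmu = mu2 - mu1
    term_trace = np.trace(cov2_inv @ cov1)
    term_mahal = dmu @ cov2_inv @ dmu
    term_logdet = np.log(np.linalg.det(cov2) / np.linalg.det(cov1))
    return 0.5 * (term_trace + term_mahal - k + term_logdet)

# Two hypothetical 3-parameter constraints (e.g., Omega_m, sigma_8, h)
mu_a  = np.array([0.31, 0.81, 0.674])
cov_a = np.diag([0.010, 0.015, 0.008]) ** 2
mu_b  = np.array([0.30, 0.83, 0.700])
cov_b = np.diag([0.015, 0.020, 0.012]) ** 2

print("relative entropy (nats):", kl_gaussian(mu_a, cov_a, mu_b, cov_b))
```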
NASA Astrophysics Data System (ADS)
Kouznetsova, I.; Gerhard, J. I.; Mao, X.; Barry, D. A.; Robinson, C.; Brovelli, A.; Harkness, M.; Fisher, A.; Mack, E. E.; Payne, J. A.; Dworatzek, S.; Roberts, J.
2008-12-01
A detailed model to simulate trichloroethene (TCE) dechlorination in anaerobic groundwater systems has been developed and implemented through PHAST, a robust and flexible geochemical modeling platform. The approach is comprehensive but retains flexibility such that models of varying complexity can be used to simulate TCE biodegradation in the vicinity of nonaqueous phase liquid (NAPL) source zones. The complete model considers a full suite of biological (e.g., dechlorination, fermentation, sulfate and iron reduction, electron donor competition, toxic inhibition, pH inhibition), physical (e.g., flow and mass transfer) and geochemical processes (e.g., pH modulation, gas formation, mineral interactions). Example simulations with the model demonstrated that the feedback between biological, physical, and geochemical processes is critical. Successful simulation of a thirty-two-month column experiment with site soil, complex groundwater chemistry, and exhibiting both anaerobic dechlorination and endogenous respiration, provided confidence in the modeling approach. A comprehensive suite of batch simulations was then conducted to estimate the sensitivity of predicted TCE degradation to the 36 model input parameters. A local sensitivity analysis was first employed to rank the importance of parameters, revealing that 5 parameters consistently dominated model predictions across a range of performance metrics. A global sensitivity analysis was then performed to evaluate the influence of a variety of full parameter data sets available in the literature. The modeling study was performed as part of the SABRE (Source Area BioREmediation) project, a public/private consortium whose charter is to determine if enhanced anaerobic bioremediation can result in effective and quantifiable treatment of chlorinated solvent DNAPL source areas. The modelling conducted has provided valuable insight into the complex interactions between processes in the evolving biogeochemical systems, particularly at the laboratory scale.
Spectral Induced Polarization approaches to characterize reactive transport parameters and processes
NASA Astrophysics Data System (ADS)
Schmutz, M.; Franceschi, M.; Revil, A.; Peruzzo, L.; Maury, T.; Vaudelet, P.; Ghorbani, A.; Hubbard, S. S.
2017-12-01
For almost a decade, geophysical methods have explored the potential for characterization of reactive transport parameters and processes relevant to hydrogeology, contaminant remediation, and oil and gas applications. Spectral Induced Polarization (SIP) methods show particular promise in this endeavour, given the sensitivity of the SIP signature to geological material electrical double layer properties and the critical role of the electrical double layer on reactive transport processes, such as adsorption. In this presentation, we discuss results from several recent studies that have been performed to quantify the value of SIP parameters for characterizing reactive transport parameters. The advances have been realized through performing experimental studies and interpreting their responses using theoretical and numerical approaches. We describe a series of controlled experimental studies that have been performed to quantify the SIP responses to variations in grain size and specific surface area, pore fluid geochemistry, and other factors. We also model chemical reactions at the fluid/matrix interface, linked to part of our experimental data set. For some examples, both geochemical modelling and measurements are integrated into a SIP physico-chemical based model. Our studies indicate both the potential of and the opportunity for using SIP to estimate reactive transport parameters. For samples with well-sorted granulometry, we find that the grain size (and, for some specific examples, the permeability) can be estimated using SIP. We show that SIP is sensitive to physico-chemical conditions at the fluid/mineral interface, including the different pore fluid dissolved ions (Na+, Cu2+, Zn2+, Pb2+) due to their different adsorption behavior. We also show the relevance of our approach to characterize the fluid/matrix interaction for various organic contents (wetting and non-wetting oils). We also discuss early efforts to jointly interpret SIP and other information for improved estimation, approaches to use SIP information to constrain mechanistic flow and transport models, and the potential to apply some of the approaches to field scale applications.
Zhang, Z. Fred; White, Signe K.; Bonneville, Alain; ...
2014-12-31
Numerical simulations have been used for estimating CO2 injectivity, CO2 plume extent, pressure distribution, and Area of Review (AoR), and for the design of CO2 injection operations and monitoring network for the FutureGen project. The simulation results are affected by uncertainties associated with numerous input parameters, the conceptual model, initial and boundary conditions, and factors related to injection operations. Furthermore, the uncertainties in the simulation results also vary in space and time. The key need is to identify those uncertainties that critically impact the simulation results and quantify their impacts. We introduce an approach to determine the local sensitivity coefficient (LSC), defined as the response of the output in percent, to rank the importance of model inputs on outputs. The uncertainty of an input with higher sensitivity has larger impacts on the output. The LSC is scalable by the error of an input parameter. The composite sensitivity of an output to a subset of inputs can be calculated by summing the individual LSC values. We propose a local sensitivity coefficient method and apply it to the FutureGen 2.0 Site in Morgan County, Illinois, USA, to investigate the sensitivity of input parameters and initial conditions. The conceptual model for the site consists of 31 layers, each of which has a unique set of input parameters. The sensitivity of 11 parameters for each layer and 7 inputs as initial conditions is then investigated. For CO2 injectivity and plume size, about half of the uncertainty is due to only 4 or 5 of the 348 inputs and 3/4 of the uncertainty is due to about 15 of the inputs. The initial conditions and the properties of the injection layer and its neighbour layers contribute to most of the sensitivity. Overall, the simulation outputs are very sensitive to only a small fraction of the inputs. However, the parameters that are important for controlling CO2 injectivity are not the same as those controlling the plume size. The three most sensitive inputs for injectivity were the horizontal permeability of Mt Simon 11 (the injection layer), the initial fracture-pressure gradient, and the residual aqueous saturation of Mt Simon 11, while those for the plume area were the initial salt concentration, the initial pressure, and the initial fracture-pressure gradient. The advantages of requiring only a single set of simulation results, scalability to the proper parameter errors, and easy calculation of the composite sensitivities make this approach very cost-effective for estimating AoR uncertainty and guiding cost-effective site characterization, injection well design, and monitoring network design for CO2 storage projects.
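A minimal sketch of a local-sensitivity-coefficient style calculation, under the assumption that the coefficient is the percent response of an output to a small fractional perturbation of each input, is given below; the surrogate model, parameter names, and values are placeholders, not the FutureGen simulator or its inputs.

```python
# Sketch: local sensitivity coefficients as percent response of a scalar output
# to a +1% perturbation of each input, plus a composite sensitivity by summing
# the absolute coefficients. The model and inputs are illustrative placeholders.
import numpy as np

def model(params):
    # Hypothetical surrogate for an output such as CO2 plume area.
    k_h, s_res, p0 = params["perm_h"], params["s_residual"], params["p_init"]
    return 1.0e3 * k_h ** 0.4 * (1.0 - s_res) / p0

base = {"perm_h": 1.0e-13, "s_residual": 0.25, "p_init": 15.0}
y0 = model(base)

lsc = {}
for name, value in base.items():
    perturbed = dict(base)
    perturbed[name] = value * 1.01                      # +1% perturbation
    lsc[name] = 100.0 * (model(perturbed) - y0) / y0    # percent response

composite = sum(abs(v) for v in lsc.values())
for name, v in sorted(lsc.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: LSC = {v:+.3f}%  ({100 * abs(v) / composite:.1f}% of composite)")
```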
NASA Astrophysics Data System (ADS)
Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.
2017-05-01
Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.
Perfetti, Christopher M.; Rearden, Bradley T.
2016-03-01
The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system that have been developed over the last decade have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization, reactor safety, and help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.
Fieberg, J.; Jenkins, Kurt J.
2005-01-01
Often landmark conservation decisions are made despite an incomplete knowledge of system behavior and inexact predictions of how complex ecosystems will respond to management actions. For example, predicting the feasibility and likely effects of restoring top-level carnivores such as the gray wolf (Canis lupus) to North American wilderness areas is hampered by incomplete knowledge of the predator-prey system processes and properties. In such cases, global sensitivity measures, such as Sobol' indices, allow one to quantify the effect of these uncertainties on model predictions. Sobol' indices are calculated by decomposing the variance in model predictions (due to parameter uncertainty) into main effects of model parameters and their higher order interactions. Model parameters with large sensitivity indices can then be identified for further study in order to improve predictive capabilities. Here, we illustrate the use of Sobol' sensitivity indices to examine the effect of parameter uncertainty on the predicted decline of elk (Cervus elaphus) population sizes following a hypothetical reintroduction of wolves to Olympic National Park, Washington, USA. The strength of density dependence acting on survival of adult elk and magnitude of predation were the most influential factors controlling elk population size following a simulated wolf reintroduction. In particular, the form of density dependence in natural survival rates and the per-capita predation rate together accounted for over 90% of variation in simulated elk population trends. Additional research on wolf predation rates on elk and natural compensations in prey populations is needed to reliably predict the outcome of predator-prey system behavior following wolf reintroductions.
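First-order and total-effect Sobol' indices can be estimated with a pick-freeze (Saltelli-style) scheme, sketched below for a toy two-parameter surrogate of elk decline; the response function and parameter ranges are invented for illustration and are not the Olympic National Park model.

```python
# Sketch: first-order (S1) and total-effect (ST) Sobol' indices by pick-freeze
# sampling for a toy two-parameter response. Function and ranges are invented.
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

def elk_decline(dd_strength, predation_rate):
    # Hypothetical surrogate: relative decline in elk numbers after reintroduction,
    # with an interaction between density dependence and predation.
    return 0.6 * predation_rate * (1.0 - 0.8 * dd_strength) + 0.05 * dd_strength

# Two independent sample matrices over the parameter ranges
A = rng.uniform([0.0, 0.0], [1.0, 0.3], size=(n, 2))
B = rng.uniform([0.0, 0.0], [1.0, 0.3], size=(n, 2))

yA = elk_decline(A[:, 0], A[:, 1])
yB = elk_decline(B[:, 0], B[:, 1])
var_y = np.concatenate([yA, yB]).var()

for i, name in enumerate(["density dependence", "predation rate"]):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                               # A with column i taken from B
    yABi = elk_decline(ABi[:, 0], ABi[:, 1])
    S1 = np.mean(yB * (yABi - yA)) / var_y            # first-order (Saltelli 2010)
    ST = 0.5 * np.mean((yA - yABi) ** 2) / var_y      # total effect (Jansen 1999)
    print(f"{name}: S1 = {S1:.3f}, ST = {ST:.3f}")
```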
NASA Astrophysics Data System (ADS)
Thomas Steven Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten
2016-11-01
Where high-resolution topographic data are available, modelers are faced with the decision of whether it is better to spend computational resource on resolving topography at finer resolutions or on running more simulations to account for various uncertain input factors (e.g., model parameters). In this paper we apply global sensitivity analysis to explore how influential the choice of spatial resolution is when compared to uncertainties in the Manning's friction coefficient parameters, the inflow hydrograph, and those stemming from the coarsening of topographic data used to produce Digital Elevation Models (DEMs). We apply the hydraulic model LISFLOOD-FP to produce several temporally and spatially variable model outputs that represent different aspects of flood inundation processes, including flood extent, water depth, and time of inundation. We find that the most influential input factor for flood extent predictions changes during the flood event, starting with the inflow hydrograph during the rising limb before switching to the channel friction parameter during peak flood inundation, and finally to the floodplain friction parameter during the drying phase of the flood event. Spatial resolution and uncertainty introduced by resampling topographic data to coarser resolutions are much more important for water depth predictions, which are also sensitive to different input factors spatially and temporally. Our findings indicate that the sensitivity of LISFLOOD-FP predictions is more complex than previously thought. Consequently, the input factors that modelers should prioritize will differ depending on the model output assessed, and the location and time of when and where this output is most relevant.
Sankaran, Sethuraman; Humphrey, Jay D.; Marsden, Alison L.
2013-01-01
Computational models for vascular growth and remodeling (G&R) are used to predict the long-term response of vessels to changes in pressure, flow, and other mechanical loading conditions. Accurate predictions of these responses are essential for understanding numerous disease processes. Such models require reliable inputs of numerous parameters, including material properties and growth rates, which are often experimentally derived, and inherently uncertain. While earlier methods have used a brute force approach, systematic uncertainty quantification in G&R models promises to provide much better information. In this work, we introduce an efficient framework for uncertainty quantification and optimal parameter selection, and illustrate it via several examples. First, an adaptive sparse grid stochastic collocation scheme is implemented in an established G&R solver to quantify parameter sensitivities, and near-linear scaling with the number of parameters is demonstrated. This non-intrusive and parallelizable algorithm is compared with standard sampling algorithms such as Monte-Carlo. Second, we determine optimal arterial wall material properties by applying robust optimization. We couple the G&R simulator with an adaptive sparse grid collocation approach and a derivative-free optimization algorithm. We show that an artery can achieve optimal homeostatic conditions over a range of alterations in pressure and flow; robustness of the solution is enforced by including uncertainty in loading conditions in the objective function. We then show that homeostatic intramural and wall shear stress is maintained for a wide range of material properties, though the time it takes to achieve this state varies. We also show that the intramural stress is robust and lies within 5% of its mean value for realistic variability of the material parameters. We observe that prestretch of elastin and collagen are most critical to maintaining homeostasis, while values of the material properties are most critical in determining response time. Finally, we outline several challenges to the G&R community for future work. We suggest that these tools provide the first systematic and efficient framework to quantify uncertainties and optimally identify G&R model parameters. PMID:23626380
Characterizing hydrophobicity at the nanoscale: a molecular dynamics simulation study.
Bandyopadhyay, Dibyendu; Choudhury, Niharendu
2012-06-14
We use molecular dynamics (MD) simulations of water near nanoscopic surfaces to characterize hydrophobic solute-water interfaces. By using nanoscopic paraffin-like plates as model solutes, MD simulations in the isothermal-isobaric ensemble have been employed to identify characteristic features of such an interface. Enhanced water correlation, density fluctuations, and position-dependent compressibility, apart from surface-specific hydrogen bond distribution and molecular orientations, have been identified as characteristic features of such interfaces. The tetrahedral order parameter, which quantifies the degree of tetrahedrality in the water structure, and an orientational order parameter, which quantifies the orientational preferences of the second solvation shell water around a central water molecule, have also been calculated as a function of distance from the plate surface. In the vicinity of the surface these two order parameters too show considerable sensitivity to the surface hydrophobicity. The potential of mean force (PMF) between water and the surface as a function of the distance from the surface has also been analyzed in terms of direct interaction and induced contribution, which shows an unusual effect of plate hydrophobicity on the solvent-induced PMF. In order to investigate the hydrophobic nature of these plates, we have also investigated interplate dewetting when two such plates are immersed in water.
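A compact sketch of the tetrahedral order parameter calculation (in the Errington-Debenedetti form assumed here) is given below for a central oxygen and its four nearest neighbours; in practice the neighbour vectors would come from an MD trajectory with periodic-boundary-aware distance searches.

```python
# Sketch: tetrahedral order parameter q for a central water oxygen and its four
# nearest oxygen neighbours. Coordinates below are illustrative.
import numpy as np
from itertools import combinations

def tetrahedral_order(center, neighbors):
    """q = 1 - 3/8 * sum_{j<k} (cos(psi_jk) + 1/3)^2 over the 4 nearest neighbours."""
    vecs = [n - center for n in neighbors]
    vecs = [v / np.linalg.norm(v) for v in vecs]
    s = 0.0
    for vj, vk in combinations(vecs, 2):
        s += (np.dot(vj, vk) + 1.0 / 3.0) ** 2
    return 1.0 - (3.0 / 8.0) * s

center = np.zeros(3)
# A perfect tetrahedron gives q = 1; random arrangements average to q = 0.
tetra = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], dtype=float)
print("perfect tetrahedron:", tetrahedral_order(center, tetra))

rng = np.random.default_rng(0)
random_q = [tetrahedral_order(center, rng.normal(size=(4, 3))) for _ in range(5000)]
print("random neighbours, mean q:", np.mean(random_q))
```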
Environmental Impact of Buildings--What Matters?
Heeren, Niko; Mutel, Christopher L; Steubing, Bernhard; Ostermeyer, York; Wallbaum, Holger; Hellweg, Stefanie
2015-08-18
The goal of this study was to identify drivers of environmental impact and quantify their influence on the environmental performance of wooden and massive residential and office buildings. We performed a life cycle assessment and used thermal simulation to quantify operational energy demand and to account for differences in thermal inertia of building mass. Twenty-eight input parameters, affecting operation, design, material, and exogenic building properties were sampled in a Monte Carlo analysis. To determine sensitivity, we calculated the correlation between each parameter and the resulting life cycle inventory and impact assessment scores. Parameters affecting operational energy demand and energy conversion are the most influential for the building's total environmental performance. For climate change, electricity mix, ventilation rate, heating system, and construction material rank the highest. Thermal inertia results in an average 2-6% difference in heat demand. Nonrenewable cumulative energy demand of wooden buildings is 18% lower, compared to a massive variant. Total cumulative energy demand is comparable. The median climate change impact is 25% lower, including end-of-life material credits and 22% lower, when credits are excluded. The findings are valid for small offices and residential buildings in Switzerland and regions with similar building culture, construction material production, and climate.
Good Models Gone Bad: Quantifying and Predicting Parameter-Induced Climate Model Simulation Failures
NASA Astrophysics Data System (ADS)
Lucas, D. D.; Klein, R.; Tannahill, J.; Brandon, S.; Covey, C. C.; Domyancic, D.; Ivanova, D. P.
2012-12-01
Simulations using IPCC-class climate models are subject to fail or crash for a variety of reasons. Statistical analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation failures of the Parallel Ocean Program (POP2). About 8.5% of our POP2 runs failed for numerical reasons at certain combinations of parameter values. We apply support vector machine (SVM) classification from the fields of pattern recognition and machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. The SVM classifiers readily predict POP2 failures in an independent validation ensemble, and are subsequently used to determine the causes of the failures via a global sensitivity analysis. Four parameters related to ocean mixing and viscosity are identified as the major sources of POP2 failures. Our method can be used to improve the robustness of complex scientific models to parameter perturbations and to better steer UQ ensembles. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was funded by the Uncertainty Quantification Strategic Initiative Laboratory Directed Research and Development Project at LLNL under project tracking code 10-SI-013 (UCRL LLNL-ABS-569112).
Appropriate use of the increment entropy for electrophysiological time series.
Liu, Xiaofeng; Wang, Xue; Zhou, Xu; Jiang, Aimin
2018-04-01
The increment entropy (IncrEn) is a new measure for quantifying the complexity of a time series. There are three critical parameters in the IncrEn calculation: N (length of the time series), m (dimensionality), and q (quantifying precision). However, the question of how to choose the most appropriate combination of IncrEn parameters for short datasets has not been extensively explored. The purpose of this research was to provide guidance on choosing suitable IncrEn parameters for short datasets by exploring the effects of varying the parameter values. We used simulated data, epileptic EEG data and cardiac interbeat (RR) data to investigate the effects of the parameters on the calculated IncrEn values. The results reveal that IncrEn is sensitive to changes in m, q and N for short datasets (N≤500). However, IncrEn reaches stability at a data length of N=1000 with m=2 and q=2, and for short datasets (N=100), it shows better relative consistency with 2≤m≤6 and 2≤q≤8. We suggest that the value of N should be no less than 100. To enable a clear distinction between different classes based on IncrEn, we recommend that m and q should take values between 2 and 4. With appropriate parameters, IncrEn enables the effective detection of complexity variations in physiological time series, suggesting that IncrEn should be useful for the analysis of physiological time series in clinical applications. Copyright © 2018 Elsevier Ltd. All rights reserved.
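A simplified sketch of an increment-entropy style calculation is given below; the magnitude-quantization rule (scaling by the standard deviation of the increments, capped at q) and the normalization by the word length are assumptions made for illustration, and the original IncrEn paper should be consulted for the exact definition.

```python
# Simplified sketch of an increment-entropy style measure: increments are coded
# as (sign, quantized magnitude) words of length m, and the Shannon entropy of
# the word distribution is normalized by m. The quantization rule here is an
# illustrative assumption, not necessarily the published IncrEn definition.
import numpy as np
from collections import Counter

def increment_entropy(x, m=2, q=2):
    v = np.diff(np.asarray(x, dtype=float))
    sd = v.std() or 1.0                                   # avoid division by zero
    signs = np.sign(v).astype(int)
    sizes = np.minimum(q, np.floor(np.abs(v) * q / sd)).astype(int)
    words = list(zip(signs, sizes))
    patterns = [tuple(words[i:i + m]) for i in range(len(words) - m + 1)]
    counts = Counter(patterns)
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p)) / m

rng = np.random.default_rng(1)
white_noise = rng.normal(size=1000)
sine = np.sin(np.linspace(0, 20 * np.pi, 1000))
print("IncrEn, noise:", increment_entropy(white_noise, m=2, q=2))
print("IncrEn, sine :", increment_entropy(sine, m=2, q=2))
```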
Quantification of uncertainties in the performance of smart composite structures
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1993-01-01
A composite wing with spars, bulkheads, and built-in control devices is evaluated using a method for the probabilistic assessment of smart composite structures. Structural responses (such as change in angle of attack, vertical displacements, and stresses in regular plies with traditional materials and in control plies with mixed traditional and actuation materials) are probabilistically assessed to quantify their respective scatter. Probabilistic sensitivity factors are computed to identify those parameters that have a significant influence on a specific structural response. Results show that the uncertainties in the responses of smart composite structures can be quantified. Responses such as structural deformation, ply stresses, frequencies, and buckling loads in the presence of defects can be reliably controlled to satisfy specified design requirements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Heng; Ye, Ming; Walker, Anthony P.
Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty but ignore the model uncertainty for process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating the model averaging methods into the framework of variance-based global sensitivity analysis, given that the model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is also simulated by two models of different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
Gailani, Joseph Z; Lackey, Tahirih C; King, David B; Bryant, Duncan; Kim, Sung-Chan; Shafer, Deborah J
2016-03-01
Model studies were conducted to investigate the potential coral reef sediment exposure from dredging associated with proposed development of a deepwater wharf in Apra Harbor, Guam. The Particle Tracking Model (PTM) was applied to quantify the exposure of coral reefs to material suspended by the dredging operations at two alternative sites. Key PTM features include the flexible capability of continuous multiple releases of sediment parcels, control of parcel/substrate interaction, and the ability to efficiently track vast numbers of parcels. This flexibility has facilitated simulating the combined effects of sediment released from clamshell dredging and chiseling within Apra Harbor. Because the rate of material released into the water column by some of the processes is not well understood or known a priori, the modeling approach was to bracket parameters within reasonable ranges to produce a suite of potential results from multiple model runs. Sensitivity analysis to model parameters is used to select the appropriate parameter values for bracketing. Data analysis results include mapping the time series and the maximum values of sedimentation, suspended sediment concentration, and deposition rate. Data were used to quantify various exposure processes that affect coral species in Apra Harbor. The goal of this research is to develop a robust methodology for quantifying and bracketing exposure mechanisms to coral (or other receptors) from dredging operations. These exposure values were utilized in an ecological assessment to predict effects (coral reef impacts) from various dredging scenarios. Copyright © 2015. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Alhossen, I.; Villeneuve-Faure, C.; Baudoin, F.; Bugarin, F.; Segonds, S.
2017-01-01
Previous studies have demonstrated that the electrostatic force distance curve (EFDC) is a relevant way of probing injected charge in 3D. However, the EFDC needs a thorough investigation to be accurately analyzed and to provide information about charge localization. Interpreting the EFDC in terms of charge distribution is not straightforward from an experimental point of view. In this paper, a sensitivity analysis of the EFDC is performed using buried electrodes as a first approximation. In particular, the influence of input factors such as the electrode width, depth and applied potential is investigated. To reach this goal, the EFDC is fitted to a law described by four parameters, called the logistic law, and the influence of the electrode parameters on the law parameters has been investigated. Then, two methods are applied, Sobol's method and the factorial design of experiments, to quantify the effect of each factor on each parameter of the logistic law. Complementary results are obtained from both methods, demonstrating that the EFDC is not the result of the superposition of the contribution of each electrode parameter, but that it exhibits a strong contribution from electrode parameter interaction. Furthermore, thanks to these results, a matricial model has been developed to predict EFDCs for any combination of electrode characteristics. A good correlation is observed with the experiments, and this is promising for charge investigation using an EFDC.
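Fitting a measured curve to a four-parameter logistic law can be sketched with standard least-squares tooling; the functional form, synthetic data, and starting values below are illustrative assumptions and may differ from the paper's exact parameterization.

```python
# Sketch: fitting a force-distance curve to a four-parameter logistic law.
# The functional form and synthetic data here are illustrative stand-ins.
import numpy as np
from scipy.optimize import curve_fit

def logistic4(z, a, b, c, d):
    """Four-parameter logistic: asymptotes a and d, slope b, midpoint c."""
    return d + (a - d) / (1.0 + (z / c) ** b)

rng = np.random.default_rng(0)
z = np.linspace(5, 200, 60)                    # tip-sample distance (arbitrary units)
true_params = (2.0, 1.5, 40.0, 0.1)            # hypothetical "true" values
force = logistic4(z, *true_params) + rng.normal(0, 0.02, z.size)

popt, pcov = curve_fit(logistic4, z, force, p0=(1.0, 1.0, 50.0, 0.0))
perr = np.sqrt(np.diag(pcov))
for name, val, err in zip("abcd", popt, perr):
    print(f"{name} = {val:.3f} +/- {err:.3f}")
```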
Pillai, Nikhil; Craig, Morgan; Dokoumetzidis, Aristeidis; Schwartz, Sorell L; Bies, Robert; Freedman, Immanuel
2018-06-19
In mathematical pharmacology, models are constructed to confer a robust method for optimizing treatment. The predictive capability of pharmacological models depends heavily on the ability to track the system and to accurately determine parameters with reference to the sensitivity in projected outcomes. To closely track chaotic systems, one may choose to apply chaos synchronization. An advantageous byproduct of this methodology is the ability to quantify model parameters. In this paper, we illustrate the use of chaos synchronization combined with Nelder-Mead search to estimate parameters of the well-known Kirschner-Panetta model of IL-2 immunotherapy from noisy data. Chaos synchronization with Nelder-Mead search is shown to provide more accurate and reliable estimates than Nelder-Mead search based on an extended least squares (ELS) objective function. Our results underline the strength of this approach to parameter estimation and provide a broader framework of parameter identification for nonlinear models in pharmacology. Copyright © 2018 Elsevier Ltd. All rights reserved.
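The synchronization-based estimation idea can be sketched with a generic chaotic system standing in for the Kirschner-Panetta model: a slave copy of the system is driven by noisy observations of one state variable, and Nelder-Mead minimizes the synchronization error over an unknown parameter. All system choices and settings below are illustrative.

```python
# Sketch: parameter estimation by chaos synchronization plus Nelder-Mead search,
# using the Lorenz system as a stand-in for the Kirschner-Panetta model.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

sigma, beta, rho_true = 10.0, 8.0 / 3.0, 28.0
t_eval = np.linspace(0, 10, 1000)

def lorenz(t, s, rho):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# Noisy observations of x(t) from the "true" system
sol = solve_ivp(lorenz, (0, 10), [1.0, 1.0, 1.0], args=(rho_true,),
                t_eval=t_eval, rtol=1e-8)
rng = np.random.default_rng(0)
x_obs = sol.y[0] + rng.normal(0, 0.1, sol.y[0].size)

def sync_error(params, coupling=5.0):
    rho = params[0]
    def slave(t, s):
        x, y, z = s
        x_drive = np.interp(t, t_eval, x_obs)          # drive with observed x
        return [sigma * (y - x) + coupling * (x_drive - x),
                x * (rho - z) - y,
                x * y - beta * z]
    s = solve_ivp(slave, (0, 10), [0.5, 0.5, 0.5], t_eval=t_eval, rtol=1e-6)
    return np.mean((s.y[0] - x_obs) ** 2)              # synchronization error

res = minimize(sync_error, x0=[20.0], method="Nelder-Mead")
print("estimated rho:", res.x[0], "(true value 28.0)")
```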
The value of compressed air energy storage in energy and reserve markets
Drury, Easan; Denholm, Paul; Sioshansi, Ramteen
2011-06-28
Storage devices can provide several grid services; however, it is challenging to quantify the value of providing several services and to optimally allocate storage resources to maximize value. We develop a co-optimized Compressed Air Energy Storage (CAES) dispatch model to characterize the value of providing operating reserves in addition to energy arbitrage in several U.S. markets. We use the model to: (1) quantify the added value of providing operating reserves in addition to energy arbitrage; (2) evaluate the dynamic nature of optimally allocating storage resources into energy and reserve markets; and (3) quantify the sensitivity of CAES net revenues to several design and performance parameters. We find that conventional CAES systems could earn an additional $23 ± 10/kW-yr by providing operating reserves, and adiabatic CAES systems could earn an additional $28 ± 13/kW-yr. We find that arbitrage-only revenues are unlikely to support a CAES investment in most market locations, but the addition of reserve revenues could support a conventional CAES investment in several markets. Adiabatic CAES revenues are not likely to support an investment in most regions studied. As a result, modifying CAES design and performance parameters primarily impacts arbitrage revenues, and optimizing CAES design will be nearly independent of dispatch strategy.
Protein-bound NAD(P)H Lifetime is Sensitive to Multiple Fates of Glucose Carbon.
Sharick, Joe T; Favreau, Peter F; Gillette, Amani A; Sdao, Sophia M; Merrins, Matthew J; Skala, Melissa C
2018-04-03
While NAD(P)H fluorescence lifetime imaging (FLIM) can detect changes in flux through the TCA cycle and electron transport chain (ETC), it remains unclear whether NAD(P)H FLIM is sensitive to other potential fates of glucose. Glucose carbon can be diverted from mitochondria by the pentose phosphate pathway (via glucose 6-phosphate dehydrogenase, G6PDH), lactate production (via lactate dehydrogenase, LDH), and rejection of carbon from the TCA cycle (via pyruvate dehydrogenase kinase, PDK), all of which can be upregulated in cancer cells. Here, we demonstrate that multiphoton NAD(P)H FLIM can be used to quantify the relative concentrations of recombinant LDH and malate dehydrogenase (MDH) in solution. In multiple epithelial cell lines, NAD(P)H FLIM was also sensitive to inhibition of LDH and PDK, as well as the directionality of LDH in cells forced to use pyruvate versus lactate as fuel sources. Among the parameters measurable by FLIM, only the lifetime of protein-bound NAD(P)H (τ2) was sensitive to these changes, in contrast to the optical redox ratio, mean NAD(P)H lifetime, free NAD(P)H lifetime, or the relative amount of free and protein-bound NAD(P)H. NAD(P)H τ2 offers the ability to non-invasively quantify diversions of carbon away from the TCA cycle/ETC, which may support mechanisms of drug resistance.
Calibrating Physical Parameters in House Models Using Aggregate AC Power Demand
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yannan; Stevens, Andrew J.; Lian, Jianming
For residential houses, air conditioning (AC) units are one of the major resources that can provide significant flexibility in energy use for demand response. To quantify this flexibility, the characteristics of the houses need to be estimated accurately, so that house models can predict indoor temperature dynamics and thermostat setpoints can be adjusted to provide demand response while maintaining the same comfort levels. In this paper, we propose an approach using the Reverse Monte Carlo modeling method and aggregate house models to calibrate the distribution parameters of the house models for a population of residential houses. Given the aggregate AC power demand for the population, the approach successfully estimates the distribution parameters of the physical parameters identified as sensitive in our previous uncertainty quantification study, such as the mean floor area of the houses.
Toward Scientific Numerical Modeling
NASA Technical Reports Server (NTRS)
Kleb, Bil
2007-01-01
Ultimately, scientific numerical models need quantified output uncertainties so that modeling can evolve to better match reality. Documenting model input uncertainties and verifying that numerical models are translated into code correctly, however, are necessary first steps toward that goal. Without known input parameter uncertainties, model sensitivities are all one can determine, and without code verification, output uncertainties are simply not reliable. To address these two shortcomings, two proposals are offered: (1) an unobtrusive mechanism to document input parameter uncertainties in situ and (2) an adaptation of the Scientific Method to numerical model development and deployment. Because these two steps require changes in the computational simulation community to bear fruit, they are presented in terms of the Beckhard-Harris-Gleicher change model.
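The abstract's first proposal is a mechanism for documenting input-parameter uncertainties in situ. One lightweight way to sketch that idea in code, purely as an illustration and not the paper's actual mechanism, is a small value-with-uncertainty record that travels with the input deck; all names and numbers below are invented.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class UncertainInput:
        """A model input documented in situ with its nominal value, an uncertainty
        estimate, units, and the provenance of that estimate."""
        name: str
        value: float
        std: float            # one-sigma uncertainty, same units as value
        units: str
        source: str           # where the uncertainty estimate comes from

        def interval(self, k: float = 2.0):
            return (self.value - k * self.std, self.value + k * self.std)

    # Example input-deck entries (values and uncertainties are illustrative only).
    wall_temp = UncertainInput("wall_temperature", 300.0, 5.0, "K", "facility calibration report")
    visc_exp = UncertainInput("viscosity_exponent", 0.7, 0.05, "-", "expert judgment")

    for p in (wall_temp, visc_exp):
        lo, hi = p.interval()
        print(f"{p.name}: {p.value} {p.units}  (~95% interval [{lo:.3g}, {hi:.3g}], source: {p.source})")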
NASA Technical Reports Server (NTRS)
Campbell, B. H.
1974-01-01
A study is described which was initiated to identify and quantify the interrelationships between and within the performance, safety, cost, and schedule parameters for unmanned, automated payload programs. The result of the investigation was a systems cost/performance model which was implemented as a digital computer program and could be used to perform initial program planning, cost/performance tradeoffs, and sensitivity analyses for mission model and advanced payload studies. Program objectives and results are described briefly.
The detection of He in tungsten following ion implantation by laser-induced breakdown spectroscopy
NASA Astrophysics Data System (ADS)
Shaw, G.; Bannister, M.; Biewer, T. M.; Martin, M. Z.; Meyer, F.; Wirth, B. D.
2018-01-01
Laser-induced breakdown spectroscopy (LIBS) results are presented that provide depth-resolved identification of He implanted in polycrystalline tungsten (PC-W) targets by a 200 keV He+ ion beam, with a surface temperature of approximately 900 °C and a peak fluence of 10²³ m⁻². He retention, and the influence of He on deuterium and tritium recycling, permeation, and retention in PC-W plasma facing components, are important questions for the divertor and plasma facing components in a fusion reactor, yet are difficult to quantify. The purpose of this work is to demonstrate the ability of LIBS to identify helium in tungsten; to investigate the influence of laser parameters, including laser energy and gate delay, that directly affect the sensitivity and depth resolution of LIBS; and to perform a proof-of-principle experiment using LIBS to measure relative He intensities as a function of depth. The results presented demonstrate the potential not only to identify helium but also to develop a methodology to quantify gaseous impurity concentration in PC-W as a function of depth.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yue, Qing; Kahn, Brian; Xiao, Heng
2013-08-16
Cloud top entrainment instability (CTEI) is a hypothesized positive feedback between entrainment mixing and evaporative cooling near the cloud top. Previous theoretical and numerical modeling studies have shown that the persistence or breakup of marine boundary layer (MBL) clouds may be sensitive to the CTEI parameter. Collocated thermodynamic profile and cloud observations obtained from the Atmospheric Infrared Sounder (AIRS) and Moderate Resolution Imaging Spectroradiometer (MODIS) instruments are used to quantify the relationship between the CTEI parameter and the cloud-topped MBL transition from stratocumulus to trade cumulus in the northeastern Pacific Ocean. Results derived from AIRS and MODIS are compared with numerical results from the UCLA large eddy simulation (LES) model for both well-mixed and decoupled MBLs. The satellite and model results both demonstrate a clear correlation between the CTEI parameter and MBL cloud fraction. Despite fundamental differences between LES steady state results and the instantaneous snapshot type of observations from satellites, significant correlations for both the instantaneous pixel-scale observations and the long-term averaged spatial patterns between the CTEI parameter and MBL cloud fraction are found from the satellite observations and are consistent with LES results. This suggests the potential of using AIRS and MODIS to quantify global and temporal characteristics of the cloud-topped MBL transition.
Sun, Y.; Tong, C.; Trainor-Guitten, W. J.; ...
2012-12-20
The risk of CO2 leakage from a deep storage reservoir into a shallow aquifer through a fault is assessed and studied using physics-specific computer models. The hypothetical CO2 geological sequestration system is composed of three subsystems: a deep storage reservoir, a fault in caprock, and a shallow aquifer, which are modeled respectively by considering sub-domain-specific physics. Supercritical CO2 is injected into the reservoir subsystem with uncertain permeabilities of reservoir, caprock, and aquifer, uncertain fault location, and injection rate (as a decision variable). The simulated pressure and CO2/brine saturation are connected to the fault-leakage model as a boundary condition. CO2 and brine fluxes from the fault-leakage model at the fault outlet are then imposed in the aquifer model as a source term. Moreover, uncertainties are propagated from the deep reservoir model, to the fault-leakage model, and eventually to the geochemical model in the shallow aquifer, thus contributing to risk profiles. To quantify the uncertainties and assess leakage-relevant risk, we propose a global sampling-based method to allocate sub-dimensions of uncertain parameters to sub-models. The risk profiles are defined and related to CO2 plume development for pH value and total dissolved solids (TDS) below the EPA's Maximum Contaminant Levels (MCL) for drinking water quality. A global sensitivity analysis is conducted to identify the parameters to which the risk profiles are most sensitive. The resulting uncertainty of pH- and TDS-defined aquifer volume, which is impacted by CO2 and brine leakage, mainly results from the uncertainty of fault permeability. Subsequently, high-resolution, reduced-order models of risk profiles are developed as functions of all the decision variables and uncertain parameters in all three subsystems.
Cornea nerve fiber quantification and construction of phenotypes in patients with fibromyalgia
Oudejans, Linda; He, Xuan; Niesters, Marieke; Dahan, Albert; Brines, Michael; van Velzen, Monique
2016-01-01
Cornea confocal microscopy (CCM) is a novel non-invasive method to detect small nerve fiber pathology. CCM generally correlates with outcomes of skin biopsies in patients with small fiber pathology. The aim of this study was to quantify the morphology of small nerve fibers of the cornea of patients with fibromyalgia in terms of density, length and branching, and to further phenotype these patients using standardized quantitative sensory testing (QST). Small fiber pathology was detected in the cornea of 51% of patients: nerve fiber length was significantly decreased in 44% of patients compared to age- and sex-matched reference values; nerve fiber density and branching were significantly decreased in 10% and 28% of patients. The combination of the CCM parameters and sensory tests for central sensitization (cold pain threshold, mechanical pain threshold, mechanical pain sensitivity, allodynia and/or windup) yielded four phenotypes of fibromyalgia patients in a subgroup analysis: one group with normal cornea morphology without and with signs of central sensitization, and a group with abnormal cornea morphology parameters without and with signs of central sensitization. In conclusion, half of the tested fibromyalgia population demonstrates signs of small fiber pathology as measured by CCM. The four distinct phenotypes suggest possible differences in disease mechanisms and may require different treatment approaches. PMID:27006259
Quantifying Drosophila food intake: comparative analysis of current methodology
Deshpande, Sonali A.; Carvalho, Gil B.; Amador, Ariadna; Phillips, Angela M.; Hoxha, Sany; Lizotte, Keith J.; Ja, William W.
2014-01-01
Food intake is a fundamental parameter in animal studies. Despite the prevalent use of Drosophila in laboratory research, precise measurements of food intake remain challenging in this model organism. Here, we compare several common Drosophila feeding assays: the Capillary Feeder (CAFE), food-labeling with a radioactive tracer or a colorimetric dye, and observations of proboscis extension (PE). We show that the CAFE and radioisotope-labeling provide the most consistent results, have the highest sensitivity, and can resolve differences in feeding that dye-labeling and PE fail to distinguish. We conclude that performing the radiolabeling and CAFE assays in parallel is currently the best approach for quantifying Drosophila food intake. Understanding the strengths and limitations of food intake methodology will greatly advance Drosophila studies of nutrition, behavior, and disease. PMID:24681694
Characterizing Graphene-modified Electrodes for Interfacing with Arduino®-based Devices.
Arris, Farrah Aida; Ithnin, Mohamad Hafiz; Salim, Wan Wardatul Amani Wan
2016-08-01
Portable low-cost platform and sensing systems for identification and quantitative measurement are in high demand for various environmental monitoring applications, especially in field work. Quantifying parameters in the field requires both minimal sample handling and a device capable of performing measurements with high sensitivity and stability. Furthermore, the one-device-fits-all concept is useful for continuous monitoring of multiple parameters. Miniaturization of devices can be achieved by introducing graphene as part of the transducer in an electrochemical sensor. In this project, we characterize graphene deposition methods on glassy-carbon electrodes (GCEs) with the goal of interfacing with an Arduino-based user-friendly microcontroller. We found that a galvanostatic electrochemical method yields the highest peak current of 10 mA, promising a highly sensitive electrochemical sensor. An Atlas Scientific™ printed circuit board (PCB) was connected to an Arduino® microcontroller using a multi-circuit connection that can be interfaced with graphene-based electrochemical sensors for environmental monitoring.
Li, Liang; Wang, Yiying; Xu, Jiting; Flora, Joseph R V; Hoque, Shamia; Berge, Nicole D
2018-08-01
Hydrothermal carbonization (HTC) is a wet, low-temperature thermal conversion process that continues to gain attention for the generation of hydrochar. The influence of specific process conditions and feedstock properties on hydrochar characteristics is not well understood. To evaluate this, linear and non-linear models were developed to describe hydrochar characteristics based on data collected from HTC-related literature. A Sobol analysis was subsequently conducted to identify parameters that most influence hydrochar characteristics. Results from this analysis indicate that for each investigated hydrochar property, the model fit and predictive capability associated with the random forest models are superior to both the linear and regression tree models. Based on results from the Sobol analysis, the feedstock properties and process conditions most influential on hydrochar yield, carbon content, and energy content were identified. In addition, a variational process parameter sensitivity analysis was conducted to determine how feedstock property importance changes with process conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
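The general workflow of fitting a random-forest model to tabulated data and then ranking inputs with a Sobol analysis can be sketched as below with scikit-learn and SALib; the feature names, bounds, and synthetic response are placeholders and do not correspond to the literature dataset used in the paper.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    rng = np.random.default_rng(0)

    # Placeholder training set: 3 inputs (temperature, time, feedstock C content) -> hydrochar yield.
    X = rng.uniform([180, 0.5, 30], [300, 24, 60], size=(400, 3))
    y = 80 - 0.15 * X[:, 0] - 0.4 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 1.5, 400)

    rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

    problem = {
        "num_vars": 3,
        "names": ["temperature_C", "time_h", "feedstock_C_pct"],
        "bounds": [[180, 300], [0.5, 24], [30, 60]],
    }
    X_s = saltelli.sample(problem, 1024)          # Saltelli sampling for Sobol indices
    Y_s = rf.predict(X_s)                         # evaluate the fitted surrogate
    Si = sobol.analyze(problem, Y_s)
    for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
        print(f"{name}: first-order = {s1:.2f}, total = {st:.2f}")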
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan; Cabrera, Marco
2000-01-01
An acute reduction in oxygen delivery to skeletal muscle is generally associated with profound derangements in substrate metabolism. Given the complexity of the human bioenergetic system and its components, it is difficult to quantify the interaction of cellular metabolic processes to maintain ATP homeostasis during stress (e.g., hypoxia, ischemia, and exercise). Of special interest is the determination of mechanisms relating tissue oxygenation to observed metabolic responses at the tissue, organ, and whole body levels and the quantification of how changes in oxygen availability affect the pathways of ATP synthesis and their regulation. In this study, we apply a previously developed mathematical model of human bioenergetics to study effects of ischemia during periods of increased ATP turnover (e.g., exercise). By using systematic sensitivity analysis the oxidative phosphorylation rate was found to be the most important rate parameter affecting lactate production during ischemia under resting conditions. Here we examine whether mild exercise under ischemic conditions alters the relative importance of pathways and parameters previously obtained.
A Bayesian Hierarchical Modeling Approach to Predicting Flow in Ungauged Basins
NASA Astrophysics Data System (ADS)
Gronewold, A.; Alameddine, I.; Anderson, R. M.
2009-12-01
Recent innovative approaches to identifying and applying regression-based relationships between land use patterns (such as increasing impervious surface area and decreasing vegetative cover) and rainfall-runoff model parameters represent novel and promising improvements to predicting flow from ungauged basins. In particular, these approaches allow for predicting flows under uncertain and potentially variable future conditions due to rapid land cover changes, variable climate conditions, and other factors. Despite the broad range of literature on estimating rainfall-runoff model parameters, however, the absence of a robust set of modeling tools for identifying and quantifying uncertainties in (and correlation between) rainfall-runoff model parameters represents a significant gap in current hydrological modeling research. Here, we build upon a series of recent publications promoting novel Bayesian and probabilistic modeling strategies for quantifying rainfall-runoff model parameter estimation uncertainty. Our approach applies alternative measures of rainfall-runoff model parameter joint likelihood (including Nash-Sutcliffe efficiency, among others) to simulate samples from the joint parameter posterior probability density function. We then use these correlated samples as response variables in a Bayesian hierarchical model with land use coverage data as predictor variables in order to develop a robust land use-based tool for forecasting flow in ungauged basins while accounting for, and explicitly acknowledging, parameter estimation uncertainty. We apply this modeling strategy to low-relief coastal watersheds of Eastern North Carolina, an area representative of coastal resource waters throughout the world because of its sensitive embayments and because of the abundant (but currently threatened) natural resources it hosts. Consequently, this area is the subject of several ongoing studies and large-scale planning initiatives, including those conducted through the United States Environmental Protection Agency (USEPA) total maximum daily load (TMDL) program, as well as those addressing coastal population dynamics and sea level rise. Our approach has several advantages, including the propagation of parameter uncertainty through a nonparametric probability distribution which avoids common pitfalls of fitting parameters and model error structure to a predetermined parametric distribution function. In addition, by explicitly acknowledging correlation between model parameters (and reflecting those correlations in our predictive model) our model yields relatively efficient prediction intervals (unlike those in the current literature which are often unnecessarily large, and may lead to overly-conservative management actions). Finally, our model helps improve understanding of the rainfall-runoff process by identifying model parameters (and associated catchment attributes) which are most sensitive to current and future land use change patterns. Disclaimer: Although this work was reviewed by EPA and approved for publication, it may not necessarily reflect official Agency policy.
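The abstract's core ingredient, turning a goodness-of-fit measure such as Nash-Sutcliffe efficiency into an informal likelihood for weighting sampled rainfall-runoff parameter sets, can be illustrated with the GLUE-style toy below. This is only a sketch of that one step, under assumed thresholds and an invented toy model; it does not reproduce the Bayesian hierarchical machinery or the land-use regression described above.

    import numpy as np

    rng = np.random.default_rng(3)

    def nse(sim, obs):
        """Nash-Sutcliffe efficiency: 1 is perfect, <=0 is no better than the mean."""
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    # Toy "rainfall-runoff" model: a single linear store with a runoff coefficient and a lag.
    rain = rng.gamma(2.0, 3.0, size=200)
    def simulate(coeff, lag):
        s, out = 0.0, []
        for r in rain:
            s += coeff * r
            q = s / lag
            s -= q
            out.append(q)
        return np.array(out)

    obs = simulate(0.35, 4.0) + rng.normal(0, 0.3, 200)     # synthetic observations

    # Monte Carlo sample the two parameters, score with NSE, keep "behavioural" sets.
    samples = np.column_stack([rng.uniform(0.1, 0.9, 5000), rng.uniform(1.0, 10.0, 5000)])
    scores = np.array([nse(simulate(c, l), obs) for c, l in samples])
    keep = scores > 0.6                                     # behavioural threshold (assumed)
    weights = scores[keep] / scores[keep].sum()             # informal likelihood weights
    post_mean = (samples[keep] * weights[:, None]).sum(axis=0)
    print("weighted posterior mean (coeff, lag):", post_mean.round(2))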
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Ronald J.; Reilly, Timothy J.; Lopez, Anthony
2015-09-15
Highlights: • A spreadsheet-based risk screening tool for groundwater affected by landfills is presented. • Domenico solute transport equations are used to estimate downgradient contaminant concentrations. • Landfills are categorized as presenting high, moderate or low risks. • Analysis of parameter sensitivity and examples of the method's application are given. • The method has value to regulators and those considering redeveloping closed landfills. - Abstract: A screening tool for quantifying levels of concern for contaminants detected in monitoring wells on or near landfills to down-gradient receptors (streams, wetlands and residential lots) was developed and evaluated. The tool uses Quick Domenico Multi-scenario (QDM), a spreadsheet implementation of Domenico-based solute transport, to estimate concentrations of contaminants reaching receptors under steady-state conditions from a constant-strength source. Unlike most other available Domenico-based model applications, QDM calculates the time for down-gradient contaminant concentrations to approach steady state and appropriate dispersivity values, and allows for up to fifty simulations on a single spreadsheet. Sensitivity of QDM solutions to critical model parameters was quantified. The screening tool uses QDM results to categorize landfills as having high, moderate and low levels of concern, based on contaminant concentrations reaching receptors relative to regulatory concentrations. The application of this tool was demonstrated by assessing levels of concern (as defined by the New Jersey Pinelands Commission) for thirty closed, uncapped landfills in the New Jersey Pinelands National Reserve, using historic water-quality data from monitoring wells on and near landfills and hydraulic parameters from regional flow models. Twelve of these landfills are categorized as having high levels of concern, indicating a need for further assessment. This tool is not a replacement for a conventional numerically-based transport model or other available Domenico-based applications, but is suitable for quickly assessing the level of concern posed by a landfill or other contaminant point source before expensive and lengthy monitoring or remediation measures are taken. In addition to quantifying the level of concern using historic groundwater-monitoring data, the tool allows for archiving model scenarios and adding refinements as new data become available.
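To make the Domenico-based screening idea concrete, the sketch below evaluates one widely used steady-state centerline form of the Domenico solution (as in BIOSCREEN-type tools) and compares the predicted receptor concentration with a regulatory limit. The exact form implemented in QDM may differ in its coefficients and options, and the example numbers and level-of-concern thresholds here are purely illustrative.

    import numpy as np
    from scipy.special import erf

    def domenico_centerline(x, C0, v, ax, ay, az, Y, Z, lam=0.0):
        """Steady-state centerline concentration at distance x from a constant planar
        source (width Y, depth Z), after Domenico (1987); one common screening form."""
        decay = np.exp((x / (2.0 * ax)) * (1.0 - np.sqrt(1.0 + 4.0 * lam * ax / v)))
        lateral = erf(Y / (4.0 * np.sqrt(ay * x)))
        vertical = erf(Z / (2.0 * np.sqrt(az * x)))
        return C0 * decay * lateral * vertical

    # Illustrative inputs: a dissolved plume evaluated at a receptor 150 m downgradient.
    C = domenico_centerline(x=150.0, C0=500.0, v=0.08,      # ug/L source, m/d seepage velocity
                            ax=15.0, ay=1.5, az=0.15,       # dispersivities, m
                            Y=30.0, Z=3.0, lam=0.001)       # source size (m), decay (1/d)
    limit = 1.0                                             # ug/L regulatory limit (example)
    level = "high" if C > 10 * limit else "moderate" if C > limit else "low"
    print(f"predicted concentration {C:.2f} ug/L -> level of concern: {level}")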
Xu, Mengchen; Lerner, Amy L; Funkenbusch, Paul D; Richhariya, Ashutosh; Yoon, Geunyoung
2018-02-01
The optical performance of the human cornea under intraocular pressure (IOP) is the result of complex material properties and their interactions. The measurement of the numerous material parameters that define this material behavior may be key in the refinement of patient-specific models. The goal of this study was to investigate the relative contribution of these parameters to the biomechanical and optical responses of human cornea predicted by a widely accepted anisotropic hyperelastic finite element model, with regional variations in the alignment of fibers. Design of experiments methods were used to quantify the relative importance of material properties including matrix stiffness, fiber stiffness, fiber nonlinearity and fiber dispersion under physiological IOP. Our sensitivity results showed that corneal apical displacement was influenced nearly evenly by matrix stiffness, fiber stiffness and nonlinearity. However, the variations in corneal optical aberrations (refractive power and spherical aberration) were primarily dependent on the value of the matrix stiffness. The optical aberrations predicted by variations in this material parameter were sufficiently large to predict clinically important changes in retinal image quality. Therefore, well-characterized individual variations in matrix stiffness could be critical in cornea modeling in order to reliably predict optical behavior under different IOPs or after corneal surgery.
NASA Astrophysics Data System (ADS)
Massanelli, J.; Meadows-McDonnell, M.; Konzelman, C.; Moon, J. B.; Kumar, A.; Thomas, J.; Pereira, A.; Naithani, K. J.
2016-12-01
Meeting agricultural water demands is becoming progressively difficult due to population growth and changes in climate. Breeding stress-resilient crops is a viable solution, as information about genetic variation and its role in stress tolerance is becoming available due to advances in technology. In this study we screened eight diverse rice genotypes for photosynthetic capacity under greenhouse conditions. These include the Asian rice (Oryza sativa) genotypes drought-sensitive Nipponbare and a transgenic line overexpressing the HYR gene in Nipponbare, together with six genotypes selected for varying levels of drought tolerance (Vandana, Bengal, Nagina-22, Glaberrima, Kaybonnet, Ai Chueh Ta Pai Ku), including the African rice O. glaberrima. We collected CO2 and light response curve data under well-watered and simulated drought conditions in the greenhouse. From these curves we estimated photosynthesis model parameters, such as the maximum carboxylation rate (Vcmax), the maximum electron transport rate (Jmax), the maximum gross photosynthesis rate (Pgmax), daytime respiration (Rd), and quantum yield (f). Our results suggest that O. glaberrima and Nipponbare were the most sensitive to drought because Vcmax and Pgmax declined under drought conditions; other drought-tolerant genotypes did not show significant changes in these model parameters. Our integrated approach, combining genetic information and photosynthesis modeling, shows promise to quantify drought response parameters and improve crop yield under drought stress conditions.
Schliesser, Joshua A; Gallimore, Gary; Kunjukunju, Nancy; Sabates, Nelson R; Koulen, Peter; Sabates, Felix N
2014-01-01
While identifying functional and structural parameters of the retina in central serous chorioretinopathy (CSCR) patients, this study investigated how an optical coherence tomography (OCT)-based diagnosis can be significantly supplemented with functional diagnostic tools and to what degree the determination of disease severity and therapy outcome can benefit from diagnostics complementary to OCT. CSCR patients were evaluated prospectively with microperimetry (MP) and spectral domain optical coherence tomography (SD-OCT) to determine retinal sensitivity function and retinal thickness as outcome measures along with measures of visual acuity (VA). Patients received clinical care that involved focal laser photocoagulation or pharmacotherapy targeting inflammation and neovascularization. Correlation of clinical parameters with a focus on functional parameters, VA, and mean retinal sensitivity, as well as on the structural parameter mean retinal thickness, showed that functional measures were similar in diagnostic power. A moderate correlation was found between OCT data and the standard functional assessment of VA; however, a strong correlation between OCT and MP data showed that diagnostic measures cannot always be used interchangeably, but that complementary use is of higher clinical value. The study indicates that integrating SD-OCT with MP provides a more complete diagnosis with high clinical relevance for complex, difficult to quantify diseases such as CSCR.
Eandi, Chiara M; Piccolino, Felice Cardillo; Alovisi, Camilla; Tridico, Federico; Giacomello, Daniela; Grignolo, Federico M
2015-04-01
To find possible correlations between the morphologic macular changes revealed by fundus autofluorescence (FAF) and the functional parameters such as visual acuity and retinal sensitivity in patients with chronic central serous chorioretinopathy (CSC). Prospective, cross-sectional study. Forty-six eyes (39 consecutive patients) with chronic CSC were studied with FAF and microperimetry (MP). Retinal sensitivity value maps were exactly superimposed over FAF images. The following microperimetric parameters were applied: central 10-degree visual field, 4-2-1 strategy, 61 stimulation spots, white monochromatic background, stimulation time 200 ms, stimulation spot size Goldmann III. A possible relationship between MP and FAF was investigated. Mean best-corrected visual acuity (BCVA) was 20/32 (median 20/25, range 20/20-20/200). BCVA was significantly correlated with FAF findings (Mann-Whitney test; P < .0001). A positive concordance between FAF and MP evaluation was also found (total concordance of 0.720 with a Cohen's kappa of 0.456). The hypo-autofluorescent areas showed decreased retinal sensitivity, while adjacent areas of increased FAF could be associated with both normal and decreased retinal sensitivity. Absolute scotoma, defined as 0 dB retinal sensitivity, corresponded with absence of autofluorescence. Altered FAF in chronic CSC patients has a functional correlation quantified by microperimetry. This study confirms the impact of FAF changes on retinal sensitivity and their value in reflecting the functional impairment in chronic CSC. Copyright © 2015 Elsevier Inc. All rights reserved.
PDF investigations of turbulent non-premixed jet flames with thin reaction zones
NASA Astrophysics Data System (ADS)
Wang, Haifeng; Pope, Stephen
2012-11-01
PDF (probability density function) modeling studies are carried out for the Sydney piloted jet flames. These Sydney flames feature much thinner reaction zones in the mixture fraction space compared to those in the well-studied Sandia piloted jet flames. The performance of the different turbulent combustion models in the Sydney flames with thin reaction zones has not been examined extensively before, and this work aims at evaluating the capability of the PDF method to represent the thin turbulent flame structures in the Sydney piloted flames. Parametric and sensitivity PDF studies are performed with respect to the different models and model parameters. A global error parameter is defined to quantify the departure of the simulation results from the experimental data, and is used to assess the performance of the different set of models and model parameters.
Interpreting atom probe data from chromium oxide scales.
La Fontaine, Alexandre; Gault, Baptiste; Breen, Andrew; Stephenson, Leigh; Ceguerra, Anna V; Yang, Limei; Nguyen, Thuan Dinh; Zhang, Jianqiang; Young, David J; Cairney, Julie M
2015-12-01
Picosecond-pulsed ultraviolet-laser (UV-355 nm) assisted atom probe tomography (APT) was used to analyze protective, thermally grown chromium oxides formed on stainless steel. The influence of analysis parameters on the thermal tail observed in the mass spectra and the chemical composition is investigated. A new parameter termed "laser sensitivity factor" is introduced in order to quantify the effect of laser energy on the extent of the thermal tail. This parameter is used to compare the effect of increasing laser energy on thermal tails in chromia and chromite samples. Also explored is the effect of increasing laser energy on the measured oxygen content and the effect of specimen base temperature and laser pulse frequency on the mass spectrum. Finally, we report a preliminary analysis of molecular ion dissociations in chromia. Copyright © 2015 Elsevier B.V. All rights reserved.
Recuerda, Maximilien; Périé, Delphine; Gilbert, Guillaume; Beaudoin, Gilles
2012-10-12
The treatment planning of spine pathologies requires information on the rigidity and permeability of the intervertebral discs (IVDs). Magnetic resonance imaging (MRI) offers great potential as a sensitive and non-invasive technique for describing the mechanical properties of IVDs. However, the literature has reported small correlation coefficients between mechanical properties and MRI parameters. Our hypothesis is that the compressive modulus and the permeability of the IVD can be predicted by a linear combination of MRI parameters. Sixty IVDs were harvested from bovine tails, and randomly separated into four groups (in-situ, digested-6h, digested-18h, digested-24h). Multi-parametric MRI acquisitions were used to quantify the relaxation times T1 and T2, the magnetization transfer ratio MTR, the apparent diffusion coefficient ADC and the fractional anisotropy FA. Unconfined compression, confined compression and direct permeability measurements were performed to quantify the compressive moduli and the hydraulic permeabilities. Differences between groups were evaluated from a one-way ANOVA. Multilinear regressions were performed between dependent mechanical properties and independent MRI parameters to verify our hypothesis. A principal component analysis was used to convert the set of possibly correlated variables into a set of linearly uncorrelated variables. Agglomerative hierarchical clustering was performed on the 3 principal components. Multilinear regressions showed that 45 to 80% of the Young's modulus E, the aggregate modulus in absence of deformation HA0, the radial permeability kr and the axial permeability in absence of deformation k0 can be explained by the MRI parameters within both the nucleus pulposus and the annulus fibrosus. The principal component analysis reduced our variables to two principal components with a cumulative variability of 52-65%, which increased to 70-82% when considering the third principal component. The dendrograms showed a natural division into four clusters for the nucleus pulposus and into three or four clusters for the annulus fibrosus. The compressive moduli and the permeabilities of isolated IVDs can be assessed mostly by MT and diffusion sequences. However, the relationships have to be improved with the inclusion of MRI parameters more sensitive to IVD degeneration. Before the use of this technique to quantify the mechanical properties of IVDs in vivo on patients suffering from various diseases, the relationships have to be defined for each degeneration state of the tissue that mimics the pathology. Our MRI protocol, combined with principal component analysis and agglomerative hierarchical clustering, provides promising tools to classify degenerated intervertebral discs and further find biomarkers and predictive factors of the evolution of the pathologies.
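The three statistical steps described above (multilinear regression of a mechanical property on MRI parameters, principal component analysis, and agglomerative hierarchical clustering on the component scores) can be sketched as below with scikit-learn and SciPy; synthetic values stand in for the disc measurements, so the numbers are illustrative only.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(42)

    # Synthetic stand-ins: 60 discs x 5 MRI parameters (T1, T2, MTR, ADC, FA).
    mri = rng.normal(size=(60, 5))
    E = 1.2 + 0.8 * mri[:, 2] - 0.5 * mri[:, 3] + rng.normal(0, 0.3, 60)   # "Young's modulus"

    # 1) Multilinear regression: how much of E do the MRI parameters explain?
    reg = LinearRegression().fit(mri, E)
    print("R^2 for E ~ MRI parameters:", round(reg.score(mri, E), 2))

    # 2) PCA on standardized MRI parameters.
    Z = StandardScaler().fit_transform(mri)
    pca = PCA(n_components=3).fit(Z)
    scores = pca.transform(Z)
    print("cumulative explained variance:", np.round(np.cumsum(pca.explained_variance_ratio_), 2))

    # 3) Agglomerative hierarchical clustering (Ward linkage) on the principal components.
    tree = linkage(scores, method="ward")
    clusters = fcluster(tree, t=4, criterion="maxclust")
    print("cluster sizes:", np.bincount(clusters)[1:])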
2012-01-01
Background: Artificial neural networks (ANNs) are widely studied for evaluating diseases. This paper discusses the use of an ANN in grading the diagnosis of liver fibrosis by duplex ultrasonography. Methods: 239 patients who were confirmed as having liver fibrosis or cirrhosis by ultrasound-guided liver biopsy were investigated in this study. We quantified ultrasonographic parameters as significant parameters using a data optimization procedure applied to an ANN. 179 patients were randomly assigned to the training group; the remaining 60 patients were enrolled as the validation group. Performance of the ANN was evaluated according to accuracy, sensitivity, specificity, Youden's index and receiver operating characteristic (ROC) analysis. Results: 5 ultrasonographic parameters, i.e., the liver parenchyma, thickness of the spleen, hepatic vein (HV) waveform, hepatic artery pulsatile index (HAPI) and HV damping index (HVDI), were enrolled as the input neurons in the ANN model. The sensitivity, specificity and accuracy of the ANN model for quantitative diagnosis of liver fibrosis were 95.0%, 85.0% and 88.3%, respectively. Youden's index (YI) was 0.80. Conclusions: The established ANN model had good sensitivity and specificity in the quantitative diagnosis of hepatic fibrosis or liver cirrhosis. Our study suggests that the ANN model based on duplex ultrasound may help non-invasive grading diagnosis of liver fibrosis in clinical practice. PMID:22716936
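The evaluation metrics named above (sensitivity, specificity, Youden's index, ROC analysis) can be computed for a small neural-network classifier with five inputs as in the sketch below; the data are synthetic stand-ins for the ultrasonographic measurements, and the network architecture is an assumption, not the paper's.

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import confusion_matrix, roc_auc_score

    rng = np.random.default_rng(7)

    # Synthetic stand-in for 239 patients x 5 ultrasonographic inputs.
    X = rng.normal(size=(239, 5))
    y = (X[:, 0] + 0.8 * X[:, 2] + rng.normal(0, 0.7, 239) > 0).astype(int)   # 1 = significant fibrosis

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=60, random_state=0, stratify=y)
    ann = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X_tr, y_tr)

    y_hat = ann.predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, y_hat).ravel()
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    auc = roc_auc_score(y_te, ann.predict_proba(X_te)[:, 1])
    print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, "
          f"Youden's index={sens + spec - 1:.2f}, ROC-AUC={auc:.2f}")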
Synchronous monitoring of muscle dynamics and electromyogram
NASA Astrophysics Data System (ADS)
Zakir Hossain, M.; Grill, Wolfgang
2011-04-01
A novel non-intrusive detection scheme has been implemented to detect the lateral muscle extension, the force of the skeletal muscle and the motor action potential (EMG) synchronously. This allows the comparison of muscle dynamics and EMG signals as a basis for modeling and further studies to determine which architectural parameters are most sensitive to changes in muscle activity. For this purpose, the transmission time of ultrasonic chirp signals in the frequency range of 100 kHz to 2.5 MHz passing through the muscle under observation and the respective motor action potentials are recorded synchronously to monitor and quantify biomechanical parameters related to muscle performance. Additionally, an ultrasonic force sensor has been employed for monitoring. Ultrasonic transducers are placed on the skin to monitor muscle expansion. Surface electrodes are placed suitably to pick up the activation potential of the monitored muscle. Isometric contraction of the monitored muscle is ensured by restricting the joint motion with the ultrasonic force sensor. Synchronous monitoring was initiated by a software-activated audio beep starting at zero time of the subsequent data acquisition interval. Computer-controlled electronics are used to generate and detect the ultrasonic signals and monitor the EMG signals. Custom-developed software is employed to analyze and quantify the monitored data. Reaction time, nerve conduction speed, latent period between the onset of EMG signals and muscle response, degree of muscle activation and muscle fatigue development, rate of energy expenditure and motor neuron recruitment rate in isometric contraction, and other relevant parameters relating to muscle performance have been quantified with high spatial and temporal resolution.
Ice phase in altocumulus clouds over Leipzig: remote sensing observations and detailed modeling
NASA Astrophysics Data System (ADS)
Simmel, M.; Bühl, J.; Ansmann, A.; Tegen, I.
2015-09-01
The present work combines remote sensing observations and detailed cloud modeling to investigate two altocumulus cloud cases observed over Leipzig, Germany. A suite of remote sensing instruments was able to detect primary ice at rather high temperatures of -6 °C. For comparison, a second mixed phase case at about -25 °C is introduced. To further look into the details of cloud microphysical processes, a simple dynamics model of the Asai-Kasahara (AK) type is combined with detailed spectral microphysics (SPECS) forming the model system AK-SPECS. Vertical velocities are prescribed to force the dynamics, as well as main cloud features, to be close to the observations. Subsequently, sensitivity studies with respect to ice microphysical parameters are carried out with the aim to quantify the most important sensitivities for the cases investigated. For the cases selected, the liquid phase is mainly determined by the model dynamics (location and strength of vertical velocity), whereas the ice phase is much more sensitive to the microphysical parameters (ice nucleating particle (INP) number, ice particle shape). The choice of ice particle shape may induce large uncertainties that are on the same order as those for the temperature-dependent INP number distribution.
Ice phase in altocumulus clouds over Leipzig: remote sensing observations and detailed modelling
NASA Astrophysics Data System (ADS)
Simmel, M.; Bühl, J.; Ansmann, A.; Tegen, I.
2015-01-01
The present work combines remote sensing observations and detailed cloud modeling to investigate two altocumulus cloud cases observed over Leipzig, Germany. A suite of remote sensing instruments was able to detect primary ice at rather warm temperatures of -6 °C. For comparison, a second mixed phase case at about -25 °C is introduced. To further look into the details of cloud microphysical processes, a simple dynamics model of the Asai-Kasahara type is combined with detailed spectral microphysics forming the model system AK-SPECS. Vertical velocities are prescribed to force the dynamics as well as main cloud features to be close to the observations. Subsequently, sensitivity studies with respect to ice microphysical parameters are carried out with the aim to quantify the most important sensitivities for the cases investigated. For the cases selected, the liquid phase is mainly determined by the model dynamics (location and strength of vertical velocity) whereas the ice phase is much more sensitive to the microphysical parameters (ice nuclei (IN) number, ice particle shape). The choice of ice particle shape may induce large uncertainties which are of the same order as those for the temperature-dependent IN number distribution.
NASA Astrophysics Data System (ADS)
Morency, Christina; Luo, Yang; Tromp, Jeroen
2011-05-01
The key issues in CO2 sequestration involve accurate monitoring, from the injection stage to the prediction and verification of CO2 movement over time, for environmental considerations. '4-D seismics' is a natural non-intrusive monitoring technique which involves 3-D time-lapse seismic surveys. Successful monitoring of CO2 movement requires a proper description of the physical properties of a porous reservoir. We investigate the importance of poroelasticity by contrasting poroelastic simulations with elastic and acoustic simulations. Discrepancies highlight a poroelastic signature that cannot be captured using an elastic or acoustic theory and that may play a role in accurately imaging and quantifying injected CO2. We focus on time-lapse crosswell imaging and model updating based on Fréchet derivatives, or finite-frequency sensitivity kernels, which define the sensitivity of an observable to the model parameters. We compare results of time-lapse migration imaging using acoustic, elastic (with and without the use of Gassmann's formulae) and poroelastic models. Our approach highlights the influence of using different physical theories for interpreting seismic data, and, more importantly, for extracting the CO2 signature from seismic waveforms. We further investigate the differences between imaging with the direct compressional wave, as is commonly done, versus using both direct compressional (P) and shear (S) waves. We conclude that, unlike direct P-wave traveltimes, a combination of direct P- and S-wave traveltimes constrains most parameters. Adding P- and S-wave amplitude information does not drastically improve parameter sensitivity, but it does improve spatial resolution of the injected CO2 zone. The main advantage of using a poroelastic theory lies in direct sensitivity to fluid properties. Simulations are performed using a spectral-element method, and finite-frequency sensitivity kernels are calculated using an adjoint method.
NASA Astrophysics Data System (ADS)
Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.
2015-07-01
Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
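The Sobol'-type decomposition used above can be illustrated with the Saltelli-style estimators for first-order and total indices, applied here to a toy stand-in for a snow model whose inputs are forcing biases. The toy output function, bias ranges, and sample size below are invented for illustration; the study itself uses the Utah Energy Balance model and far larger ensembles.

    import numpy as np

    rng = np.random.default_rng(11)

    def toy_model(x):
        """Stand-in for a snow-model output (e.g. peak SWE) driven by three forcing
        biases: precipitation (fraction), air temperature (K), shortwave radiation (W/m^2)."""
        p_bias, t_bias, sw_bias = x[:, 0], x[:, 1], x[:, 2]
        return 300 * (1 + p_bias) - 40 * t_bias - 0.2 * sw_bias + 5 * t_bias * sw_bias / 100

    d, n = 3, 20000
    lo = np.array([-0.3, -2.0, -50.0])
    hi = np.array([0.3, 2.0, 50.0])
    A = lo + (hi - lo) * rng.random((n, d))           # two independent sample matrices
    B = lo + (hi - lo) * rng.random((n, d))
    fA, fB = toy_model(A), toy_model(B)
    var = np.var(np.concatenate([fA, fB]), ddof=1)

    for i, name in enumerate(["precip bias", "temp bias", "shortwave bias"]):
        ABi = A.copy(); ABi[:, i] = B[:, i]           # A with column i taken from B
        fABi = toy_model(ABi)
        S1 = np.mean(fB * (fABi - fA)) / var          # first-order index (Saltelli 2010)
        ST = 0.5 * np.mean((fA - fABi) ** 2) / var    # total-effect index (Jansen estimator)
        print(f"{name}: S1={S1:.2f}, ST={ST:.2f}")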
Scott, Sarah Nicole; Templeton, Jeremy Alan; Hough, Patricia Diane; ...
2014-01-01
This study details a methodology for quantification of errors and uncertainties of a finite element heat transfer model applied to a Ruggedized Instrumentation Package (RIP). The proposed verification and validation (V&V) process includes solution verification to examine errors associated with the code's solution techniques, and model validation to assess the model's predictive capability for quantities of interest. The model was subjected to mesh resolution and numerical parameter sensitivity studies to determine reasonable parameter values and to understand how they change the overall model response and performance criteria. To facilitate quantification of the uncertainty associated with the mesh, automatic meshing and mesh refining/coarsening algorithms were created and implemented on the complex geometry of the RIP. Automated software to vary model inputs was also developed to determine the solution's sensitivity to numerical and physical parameters. The model was compared with an experiment to demonstrate its accuracy and determine the importance of both modelled and unmodelled physics in quantifying the results' uncertainty. An emphasis is placed on automating the V&V process to enable uncertainty quantification within tight development schedules.
NASA Astrophysics Data System (ADS)
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.
2018-03-01
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
Barreiros, Willian; Teodoro, George; Kurc, Tahsin; Kong, Jun; Melo, Alba C. M. A.; Saltz, Joel
2017-01-01
We investigate efficient sensitivity analysis (SA) of algorithms that segment and classify image features in a large dataset of high-resolution images. Algorithm SA is the process of evaluating variations of methods and parameter values to quantify differences in the output. An SA can be very compute-demanding because it requires re-processing the input dataset several times with different parameters to assess variations in output. In this work, we introduce strategies to efficiently speed up SA via runtime optimizations targeting distributed hybrid systems and reuse of computations from runs with different parameters. We evaluate our approach using a cancer image analysis workflow on a hybrid cluster with 256 nodes, each with an Intel Phi and a dual socket CPU. The SA attained a parallel efficiency of over 90% on 256 nodes. The cooperative execution using the CPUs and the Phi available in each node with smart task assignment strategies resulted in an additional speedup of about 2×. Finally, multi-level computation reuse led to an additional speedup of up to 2.46× on the parallel version. The level of performance attained with the proposed optimizations will allow the use of SA in large-scale studies. PMID:29081725
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamp, F.; Brueningk, S.C.; Wilkens, J.J.
Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation and on the dose per fraction. The needed biological parameters as well as their dependency on ion species and ion energy typically are subject to large (relative) uncertainties of up to 20–40% or even more. Therefore it is necessary to estimate the resulting uncertainties in, e.g., RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10⁴ to 10⁶ times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, the only influential contribution) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result and the input parameters for which an uncertainty reduction is the most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment of uncertainties. Supported by DFG grant WI 3745/1-1 and DFG cluster of excellence: Munich-Centre for Advanced Photonics.
NASA Astrophysics Data System (ADS)
Reyes, J. J.; Adam, J. C.; Tague, C.
2016-12-01
Grasslands play an important role in agricultural production as forage for livestock; they also provide a diverse set of ecosystem services including soil carbon (C) storage. The partitioning of C between above and belowground plant compartments (i.e. allocation) is influenced by both plant characteristics and environmental conditions. The objectives of this study are to 1) develop and evaluate a hybrid C allocation strategy suitable for grasslands, and 2) apply this strategy to examine the importance of various parameters related to biogeochemical cycling, photosynthesis, allocation, and soil water drainage on above and belowground biomass. We include allocation as an important process in quantifying the model parameter uncertainty, which identifies the most influential parameters and what processes may require further refinement. For this, we use the Regional Hydro-ecologic Simulation System, a mechanistic model that simulates coupled water and biogeochemical processes. A Latin hypercube sampling scheme was used to develop parameter sets for calibration and evaluation of allocation strategies, as well as parameter uncertainty analysis. We developed the hybrid allocation strategy to integrate both growth-based and resource-limited allocation mechanisms. When evaluating the new strategy simultaneously for above and belowground biomass, it produced a larger number of less biased parameter sets: 16% more compared to resource-limited and 9% more compared to growth-based. This also demonstrates its flexible application across diverse plant types and environmental conditions. We found that higher parameter importance corresponded to sub- or supra-optimal resource availability (i.e. water, nutrients) and temperature ranges (i.e. too hot or cold). For example, photosynthesis-related parameters were more important at sites warmer than the theoretical optimal growth temperature. Therefore, larger values of parameter importance indicate greater relative sensitivity in adequately representing the relevant process to capture limiting resources or manage atypical environmental conditions. These results may inform future experimental work by focusing efforts on quantifying specific parameters under various environmental conditions or across diverse plant functional types.
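The Latin hypercube sampling step mentioned above can be sketched with scipy.stats.qmc as below; the parameter names and bounds are invented placeholders, not RHESSys' actual parameters.

    from scipy.stats import qmc

    # Illustrative parameter ranges (not the model's actual parameter names or bounds).
    names = ["max_photosynthesis_rate", "allocation_fraction_root", "soil_drainage_coeff"]
    lower = [5.0, 0.2, 0.01]
    upper = [25.0, 0.8, 0.50]

    sampler = qmc.LatinHypercube(d=len(names), seed=0)
    unit = sampler.random(n=200)                       # 200 stratified samples in [0, 1)^3
    params = qmc.scale(unit, lower, upper)             # rescale to the parameter bounds

    for row in params[:3]:                             # first few calibration parameter sets
        print(dict(zip(names, row.round(3))))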
Sensitivity analysis of urban flood flows to hydraulic controls
NASA Astrophysics Data System (ADS)
Chen, Shangzhi; Garambois, Pierre-André; Finaud-Guyot, Pascal; Dellinger, Guilhem; Terfous, Abdelali; Ghenaim, Abdallah
2017-04-01
Flooding represents one of the most significant natural hazards on every continent, particularly in highly populated areas. Improving the accuracy and robustness of prediction systems has become a priority. However, in situ measurements of floods remain difficult, while a better understanding of the spatiotemporal dynamics of flood flows, along with datasets for model validation, appears essential. The present contribution is based on a unique experimental device at a 1:200 scale, able to produce urban flooding with flood flows corresponding to frequent to rare return periods. The influence of 1D Saint-Venant and 2D shallow water model input parameters on simulated flows is assessed using global sensitivity analysis (GSA). The tested parameters are: global and local boundary conditions (water heights and discharge), and spatially uniform or distributed friction coefficients and/or porosity, respectively, tested in various ranges centered around their nominal values, calibrated thanks to accurate experimental data and related uncertainties. For various experimental configurations a variance decomposition method (ANOVA) is used to calculate spatially distributed Sobol' sensitivity indices (Si's). The sensitivity of water depth to input parameters on two main streets of the experimental device is presented here. Results show that the closer to the downstream boundary condition on water height, the higher the Sobol' index, as predicted by hydraulic theory for subcritical flow, while interestingly the sensitivity to friction decreases. The sensitivity indices of all lateral inflows, representing crossroads in 1D, are also quantified in this study along with their asymptotic trends along flow distance. The relationship between lateral discharge magnitude and the resulting sensitivity index of water depth is investigated. Concerning simulations with distributed friction coefficients, crossroad friction is shown to have a much higher influence on the upstream water depth profile than street friction coefficients. This methodology could be applied to any urban flood configuration in order to better understand flow dynamics and repartition, but also to guide model calibration in light of flow controls.
Johnson, T S; Andriacchi, T P; Erdman, A G
2004-01-01
Various uses of the screw or helical axis have previously been reported in the literature in an attempt to quantify the complex displacements and coupled rotations of in vivo human knee kinematics. Multiple methods have been used by previous authors to calculate the axis parameters, and it has been theorized that the mathematical stability and accuracy of the finite helical axis (FHA) are highly dependent on experimental variability and rotation increment spacing between axis calculations. Previous research has not addressed the sensitivity of the FHA for true in vivo data collection, as required for gait laboratory analysis. This research presents a controlled series of experiments simulating continuous data collection as utilized in gait analysis to investigate the sensitivity of the three-dimensional finite screw axis parameters of rotation, displacement, orientation and location with regard to time step increment spacing, utilizing two different methods for spatial location. Six-degree-of-freedom motion parameters are measured for an idealized rigid body knee model that is constrained to a planar motion profile for the purposes of error analysis. The kinematic data are collected using a multicamera optoelectronic system combined with an error minimization algorithm known as the point cluster method. Rotation about the screw axis is seen to be repeatable, accurate and insensitive to time step increment. Displacement along the axis is highly dependent on time step increment sizing, with smaller rotation angles between calculations producing greater accuracy. Orientation of the axis in space is accurate, with only a slight filtering effect noticed during motion reversal. Locating the screw axis by a point projected onto the screw axis from the mid-point of the finite displacement is found to be less sensitive to motion reversal than finding the intersection of the axis with a reference plane. A filtering effect of the spatial location parameters was noted for larger time step increments during periods of little or no rotation.
Can histogram analysis of MR images predict aggressiveness in pancreatic neuroendocrine tumors?
De Robertis, Riccardo; Maris, Bogdan; Cardobi, Nicolò; Tinazzi Martini, Paolo; Gobbo, Stefano; Capelli, Paola; Ortolani, Silvia; Cingarlini, Sara; Paiella, Salvatore; Landoni, Luca; Butturini, Giovanni; Regi, Paolo; Scarpa, Aldo; Tortora, Giampaolo; D'Onofrio, Mirko
2018-06-01
To evaluate MRI-derived whole-tumour histogram analysis parameters in predicting pancreatic neuroendocrine neoplasm (panNEN) grade and aggressiveness. Pre-operative MR examinations of 42 consecutive patients with panNENs >1 cm were retrospectively analysed. T1-/T2-weighted images and ADC maps were analysed. Histogram-derived parameters were compared to histopathological features using the Mann-Whitney U test. Diagnostic accuracy was assessed by ROC-AUC analysis; sensitivity and specificity were assessed for each histogram parameter. ADC entropy was significantly higher in G2-3 tumours, with ROC-AUC 0.757; sensitivity and specificity were 83.3% (95% CI: 61.2-94.5) and 61.1% (95% CI: 36.1-81.7). ADC kurtosis was higher in panNENs with vascular involvement, nodal and hepatic metastases (p = .008, .021 and .008; ROC-AUC = 0.820, 0.709 and 0.820); sensitivity and specificity were 85.7/74.3% (95% CI: 42-99.2/56.4-86.9), 36.8/96.5% (95% CI: 17.2-61.4/76-99.8) and 100/62.8% (95% CI: 56.1-100/44.9-78.1). No significant differences between groups were found for other histogram-derived parameters (p > .05). Whole-tumour histogram analysis of ADC maps may be helpful in predicting tumour grade, vascular involvement, nodal and liver metastases in panNENs. ADC entropy and ADC kurtosis are the most accurate parameters for identification of panNENs with malignant behaviour. • Whole-tumour ADC histogram analysis can predict aggressiveness in pancreatic neuroendocrine neoplasms. • ADC entropy and kurtosis are higher in aggressive tumours. • ADC histogram analysis can quantify tumour diffusion heterogeneity. • Non-invasive quantification of tumour heterogeneity can provide adjunctive information for prognostication.
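A minimal sketch of the kind of whole-tumour histogram analysis described above: entropy and kurtosis are computed from voxel values inside a tumour region and their discriminative power is summarized with ROC-AUC. The arrays and grade labels below are synthetic placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.metrics import roc_auc_score

def histogram_entropy(values, bins=64):
    """Shannon entropy (bits) of the value histogram inside a tumour ROI."""
    counts, _ = np.histogram(values, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(1)
# Synthetic ADC values for 20 low-grade and 20 higher-grade tumours (placeholders).
tumours = [rng.normal(1.3e-3, 0.10e-3, 500) for _ in range(20)] + \
          [rng.normal(1.2e-3, 0.25e-3, 500) for _ in range(20)]
labels = np.array([0] * 20 + [1] * 20)  # 0 = G1, 1 = G2-3

entropy_vals = np.array([histogram_entropy(t) for t in tumours])
kurtosis_vals = np.array([kurtosis(t) for t in tumours])

print("AUC (ADC entropy): %.3f" % roc_auc_score(labels, entropy_vals))
print("AUC (ADC kurtosis): %.3f" % roc_auc_score(labels, kurtosis_vals))
```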
Bearing damage assessment using Jensen-Rényi Divergence based on EEMD
NASA Astrophysics Data System (ADS)
Singh, Jaskaran; Darpe, A. K.; Singh, S. P.
2017-03-01
An Ensemble Empirical Mode Decomposition (EEMD) and Jensen Rényi divergence (JRD) based methodology is proposed for the degradation assessment of rolling element bearings using vibration data. The EEMD decomposes vibration signals into a set of intrinsic mode functions (IMFs). A systematic methodology to select IMFs that are sensitive and closely related to the fault is proposed in the paper. The change in probability distribution of the energies of the sensitive IMFs is measured through JRD which acts as a damage identification parameter. Evaluation of JRD with sensitive IMFs makes it largely unaffected by change/fluctuations in operating conditions. Further, an algorithm based on Chebyshev's inequality is applied to JRD to identify exact points of change in bearing health and remove outliers. The identified change points are investigated for fault classification as possible locations where specific defect initiation could have taken place. For fault classification, two new parameters are proposed: 'α value' and Probable Fault Index, which together classify the fault. To standardize the degradation process, a Confidence Value parameter is proposed to quantify the bearing degradation value in a range of zero to unity. A simulation study is first carried out to demonstrate the robustness of the proposed JRD parameter under variable operating conditions of load and speed. The proposed methodology is then validated on experimental data (seeded defect data and accelerated bearing life test data). The first validation on two different vibration datasets (inner/outer) obtained from seeded defect experiments demonstrate the effectiveness of JRD parameter in detecting a change in health state as the severity of fault changes. The second validation is on two accelerated life tests. The results demonstrate the proposed approach as a potential tool for bearing performance degradation assessment.
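For orientation, the sketch below implements the textbook Jensen-Rényi divergence between probability distributions of IMF energies; the order α, the equal weights and the toy energy vectors are illustrative assumptions and may differ from the settings used in the paper.

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy of a discrete distribution p (alpha != 1)."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def jensen_renyi_divergence(dists, weights=None, alpha=0.5):
    """JRD = H_alpha(sum_i w_i p_i) - sum_i w_i H_alpha(p_i)."""
    dists = [np.asarray(d, dtype=float) / np.sum(d) for d in dists]
    if weights is None:
        weights = np.full(len(dists), 1.0 / len(dists))
    mixture = np.sum([w * p for w, p in zip(weights, dists)], axis=0)
    return renyi_entropy(mixture, alpha) - np.sum(
        [w * renyi_entropy(p, alpha) for w, p in zip(weights, dists)])

# Toy example: normalized energies of the sensitive IMFs for a healthy reference
# segment and for a degraded segment (placeholder numbers).
healthy = [0.40, 0.30, 0.20, 0.10]
degraded = [0.15, 0.20, 0.30, 0.35]
print("JRD(healthy, healthy)  =", jensen_renyi_divergence([healthy, healthy]))
print("JRD(healthy, degraded) =", jensen_renyi_divergence([healthy, degraded]))
```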
SCALE Continuous-Energy Eigenvalue Sensitivity Coefficient Calculations
Perfetti, Christopher M.; Rearden, Bradley T.; Martin, William R.
2016-02-25
Sensitivity coefficients describe the fractional change in a system response that is induced by changes to system parameters and nuclear data. The Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, including quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications have motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Tracklength importance CHaracterization (CLUTCH) and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE-KENO framework of the SCALE code system to enable TSUNAMI-3D to perform eigenvalue sensitivity calculations using continuous-energy Monte Carlo methods. This work provides a detailed description of the theory behind the CLUTCH method and describes its implementation in detail. This work also explores the improvements in eigenvalue sensitivity coefficient accuracy that can be gained through the use of continuous-energy sensitivity methods and compares several sensitivity methods in terms of computational efficiency and memory requirements.
Genetically encoded ratiometric fluorescent thermometer with wide range and rapid response
Nakano, Masahiro; Arai, Yoshiyuki; Kotera, Ippei; Okabe, Kohki; Kamei, Yasuhiro; Nagai, Takeharu
2017-01-01
Temperature is a fundamental physical parameter that plays an important role in biological reactions and events. Although previously developed thermometers have been used to investigate several important phenomena, such as heterogeneous temperature distribution in a single living cell and heat generation in mitochondria, a thermometer that combines sensitivity over a wide temperature range with a rapid response is still desired to quantify temperature changes not only in homeotherms but also in poikilotherms, from the cellular level to in vivo. To overcome the weaknesses of conventional thermometers, namely the limited range of applicable species and the low temporal resolution that stem from their narrow temperature range of sensitivity and their thermometry methods, we developed a genetically encoded ratiometric fluorescent temperature indicator, gTEMP, using two fluorescent proteins with different temperature sensitivities. Our thermometric method enabled fast tracking of temperature change with a time resolution of 50 ms. We used this method to observe the spatiotemporal temperature change between the cytoplasm and nucleus in cells, and quantified thermogenesis from the mitochondrial matrix in a single living cell after stimulation with carbonyl cyanide 4-(trifluoromethoxy)phenylhydrazone, an uncoupler of oxidative phosphorylation. Moreover, exploiting the wide temperature range of sensitivity of gTEMP, from 5°C to 50°C, we monitored the temperature in a living medaka embryo for 15 hours and demonstrated the feasibility of in vivo thermometry in various living species. PMID:28212432
Myers, Casey A.; Laz, Peter J.; Shelburne, Kevin B.; Davidson, Bradley S.
2015-01-01
Uncertainty that arises from measurement error and parameter estimation can significantly affect the interpretation of musculoskeletal simulations; however, these effects are rarely addressed. The objective of this study was to develop an open-source probabilistic musculoskeletal modeling framework to assess how measurement error and parameter uncertainty propagate through a gait simulation. A baseline gait simulation was performed for a male subject using OpenSim for three stages: inverse kinematics, inverse dynamics, and muscle force prediction. A series of Monte Carlo simulations were performed that considered intrarater variability in marker placement, movement artifacts in each phase of gait, variability in body segment parameters, and variability in muscle parameters calculated from cadaveric investigations. Propagation of uncertainty was performed by also using the output distributions from one stage as input distributions to subsequent stages. Confidence bounds (5–95%) and sensitivity of outputs to model input parameters were calculated throughout the gait cycle. The combined impact of uncertainty resulted in mean bounds that ranged from 2.7° to 6.4° in joint kinematics, 2.7 to 8.1 N m in joint moments, and 35.8 to 130.8 N in muscle forces. The impact of movement artifact was 1.8 times larger than any other propagated source. Sensitivity to specific body segment parameters and muscle parameters were linked to where in the gait cycle they were calculated. We anticipate that through the increased use of probabilistic tools, researchers will better understand the strengths and limitations of their musculoskeletal simulations and more effectively use simulations to evaluate hypotheses and inform clinical decisions. PMID:25404535
Masci, Ilaria; Vannozzi, Giuseppe; Bergamini, Elena; Pesce, Caterina; Getchell, Nancy; Cappozzo, Aurelio
2013-04-01
Objective quantitative evaluation of motor skill development is of increasing importance to carefully drive physical exercise programs in childhood. Running is a fundamental motor skill humans adopt to accomplish locomotion, and it is linked to physical activity levels, although its assessment is traditionally carried out using qualitative evaluation tests. The present study aimed at investigating the feasibility of using inertial sensors to quantify developmental differences in the running pattern of young children. Qualitative and quantitative assessment tools were adopted to identify a skill-sensitive set of biomechanical parameters for running and to further our understanding of the factors that determine progression to skilled running performance. Running performances of 54 children between the ages of 2 and 12 years were submitted to both qualitative and quantitative analysis, the former using sequences of developmental level, the latter estimating temporal and kinematic parameters from inertial sensor measurements. Discriminant analysis with running developmental level as the dependent variable made it possible to identify a set of temporal and kinematic parameters, among those obtained with the sensor, that best classified children into the qualitative developmental levels (accuracy higher than 67%). Multivariate analysis of variance with the quantitative parameters as dependent variables made it possible to identify whether and which specific parameters or parameter subsets were differentially sensitive to specific transitions between contiguous developmental levels. The findings showed that different sets of temporal and kinematic parameters are able to tap all steps of the transitional process in running skill described through qualitative observation and can be prospectively used for applied diagnostic and sport training purposes. Copyright © 2012 Elsevier B.V. All rights reserved.
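As a schematic of the classification step, the snippet below runs a cross-validated linear discriminant analysis mapping sensor-derived temporal and kinematic parameters to developmental levels; the feature matrix and level labels are synthetic stand-ins, not the study's measurements.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Synthetic example: 54 children, 4 sensor-derived parameters (e.g. step frequency,
# flight-to-contact ratio, trunk acceleration RMS, step symmetry), with running
# developmental level 1-4 as the class label (placeholder data).
n_children, n_features = 54, 4
levels = rng.integers(1, 5, size=n_children)
X = rng.normal(size=(n_children, n_features)) + 0.8 * levels[:, None]

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, levels, cv=5)
print("Cross-validated classification accuracy: %.2f +/- %.2f"
      % (scores.mean(), scores.std()))
```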
Nguyen, Richard; Perfetto, Stephen; Mahnke, Yolanda D; Chattopadhyay, Pratip; Roederer, Mario
2013-03-01
After compensation, the measurement errors arising from multiple fluorescences spilling into each detector become evident by the spreading of nominally negative distributions. Depending on the instrument configuration and performance, and reagents used, this "spillover spreading" (SS) affects sensitivity in any given parameter. The degree of SS had been predicted theoretically to increase with measurement error, i.e., by the square root of fluorescence intensity, as well as directly related to the spectral overlap matrix coefficients. We devised a metric to quantify SS between any pair of detectors. This metric is intrinsic, as it is independent of fluorescence intensity. The combination of all such values for one instrument can be represented as a spillover spreading matrix (SSM). Single-stained controls were used to determine the SSM on multiple instruments over time, and under various conditions of signal quality. SSM values reveal fluorescence spectrum interactions that can limit the sensitivity of a reagent in the presence of brightly-stained cells on a different color. The SSM was found to be highly reproducible; its non-trivial values show a CV of less than 30% across a 2-month time frame. In addition, the SSM is comparable between similarly-configured instruments; instrument-specific differences in the SSM reveal underperforming detectors. Quantifying and monitoring the SSM can be a useful tool in instrument quality control to ensure consistent sensitivity and performance. In addition, the SSM is a key element for predicting the performance of multicolor immunofluorescence panels, which will aid in the optimization and development of new panels. We propose that the SSM is a critical component of QA/QC in evaluation of flow cytometer performance. Published 2013 Wiley Periodicals, Inc.
23rd Annual National Test and Evaluation Conference
2007-03-15
[Slide excerpt; only fragments are recoverable.] Robust design: adjust the parameters (i.e., means) of the input variables to minimize dpm within the specification limits (LSL, USL). Sensitivity analysis here means quantifying the sensitivity of the output (y) dpm to changes in the input parameters. Impedance example: Z = R1·R2/(R1 + R2) with R2 ~ N(100, 2^2), LSL = 31, USL = 35; which input has the greater impact on the dpm of Z (impedance)?
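The surviving fragments describe estimating how the dpm (defects per million) of an output responds to shifts in the means of the input variables. A minimal Monte Carlo sketch of that idea for the parallel-resistor impedance example follows; the slide only gives R2 ~ N(100, 2^2) and the limits LSL = 31 and USL = 35, so the nominal value and spread assumed for R1 below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
LSL, USL = 31.0, 35.0
N = 200_000

def dpm(mu_r1, sigma_r1=2.0, mu_r2=100.0, sigma_r2=2.0):
    """Defects per million for Z = R1*R2/(R1 + R2) falling outside [LSL, USL]."""
    r1 = rng.normal(mu_r1, sigma_r1, N)
    r2 = rng.normal(mu_r2, sigma_r2, N)
    z = r1 * r2 / (r1 + r2)
    return 1e6 * np.mean((z < LSL) | (z > USL))

# Sensitivity of dpm to a shift in the mean of R1 (R1 values assumed, not from the slide).
for mu in (47.0, 49.3, 51.5):
    print(f"mean(R1) = {mu:5.1f}  ->  dpm = {dpm(mu):9.0f}")
```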
Uncertainty quantification for PZT bimorph actuators
NASA Astrophysics Data System (ADS)
Bravo, Nikolas; Smith, Ralph C.; Crews, John
2018-03-01
In this paper, we discuss the development of a high-fidelity model for a PZT bimorph actuator used in micro-air vehicles such as the Robobee. We develop the actuator model within the homogenized energy model (HEM) framework, which quantifies the nonlinear, hysteretic, and rate-dependent behavior inherent to PZT in dynamic operating regimes. We then discuss an inverse problem for the model and include local and global sensitivity analyses of the parameters in the high-fidelity model. Finally, we discuss the results of Bayesian inference and uncertainty quantification on the HEM.
NASA Astrophysics Data System (ADS)
Kolbe, T.; Abbott, B. W.; Marçais, J.; Thomas, Z.; Aquilina, L.; Labasque, T.; Pinay, G.; De Dreuzy, J. R.
2016-12-01
Groundwater transit time and flow path are key factors controlling nitrogen retention and removal capacity at the catchment scale (Abbott et al., 2016), but the relative importance of hydrogeological and topographical factors in determining these parameters remains uncertain (Kolbe et al., 2016). To address this unknown, we used numerical modelling techniques calibrated with CFC groundwater age data to quantify transit time and flow path in an unconfined aquifer in Brittany, France. We assessed the relative importance of parameters (aquifer depth, porosity, arrangement of geological layers, and permeability profile), hydrology (recharge rate), and topography in determining characteristic flow distances (Leray et al., 2016). We found that groundwater flow was highly local (mean travel distance of 350 m) but also relatively old (mean CFC age of 40 years). Sensitivity analysis revealed that groundwater travel distances were not sensitive to geological parameters within the constraints of the CFC age data. However, circulation was sensitive to topography in lowland areas where the groundwater table was close to the land surface, and to recharge rate in upland areas where water input modulated the free surface of the aquifer. We quantified these differences with a local groundwater ratio (rGW-LOCAL), defined as the mean groundwater travel distance divided by the equivalent surface distance water would have traveled along the land surface. Lowland rGW-LOCAL was near 1, indicating primarily topographic controls. Upland rGW-LOCAL was 1.6, meaning the groundwater recharge area was substantially larger than the topographically defined catchment. This ratio was applied to other catchments in Brittany to test its relevance for comparing controls on groundwater circulation within and among catchments. References: Abbott et al., 2016, Using multi-tracer inference to move beyond single-catchment ecohydrology, Earth-Science Reviews. Kolbe et al., 2016, Coupling 3D groundwater modeling with CFC-based age dating to classify local groundwater circulation in an unconfined crystalline aquifer, J. Hydrol. Leray et al., 2016, Residence time distributions for hydrologic systems: Mechanistic foundations and steady-state analytical solutions, J. Hydrol.
Analyzing the quality robustness of chemotherapy plans with respect to model uncertainties.
Hoffmann, Anna; Scherrer, Alexander; Küfer, Karl-Heinz
2015-01-01
Mathematical models of chemotherapy planning problems contain various biomedical parameters whose values are difficult to quantify and thus subject to some uncertainty. This uncertainty propagates into the therapy plans computed on these models, which raises the question of how robust the expected therapy quality is. This work introduces a combined approach for analyzing the quality robustness of plans, in terms of dosing levels, with respect to model uncertainties in chemotherapy planning. It uses concepts from multi-criteria decision making for studying parameters related to the balancing of the different therapy goals, and concepts from sensitivity analysis for the examination of parameters describing the underlying biomedical processes and their interplay. This approach allows for a thorough assessment of how stable a therapy plan's quality is with respect to parametric changes in the underlying mathematical model. Copyright © 2014 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avonto, Cristina; Chittiboyina, Amar G.; Rua, Diego
2015-12-01
Skin sensitization is an important toxicological end-point in the risk assessment of chemical allergens. Because of the complexity of the biological mechanisms associated with skin sensitization, integrated approaches combining different chemical, biological and in silico methods are recommended to replace conventional animal tests. Chemical methods are intended to characterize the potential of a sensitizer to induce earlier molecular initiating events. The presence of an electrophilic mechanistic domain is considered one of the essential chemical features to covalently bind to the biological target and induce further haptenation processes. Current in chemico assays rely on the quantification of unreacted model nucleophiles after incubation with the candidate sensitizer. In the current study, a new fluorescence-based method, 'HTS-DCYA assay', is proposed. The assay aims at the identification of reactive electrophiles based on their chemical reactivity toward a model fluorescent thiol. The reaction workflow enabled the development of a High Throughput Screening (HTS) method to directly quantify the reaction adducts. The reaction conditions have been optimized to minimize solubility issues, oxidative side reactions and increase the throughput of the assay while minimizing the reaction time, which are common issues with existing methods. Thirty-six chemicals previously classified with LLNA, DPRA or KeratinoSens™ were tested as a proof of concept. Preliminary results gave an estimated 82% accuracy, 78% sensitivity, 90% specificity, comparable to other in chemico methods such as Cys-DPRA. In addition to validated chemicals, six natural products were analyzed and a prediction of their sensitization potential is presented for the first time. - Highlights: • A novel fluorescence-based method to detect electrophilic sensitizers is proposed. • A model fluorescent thiol was used to directly quantify the reaction products. • A discussion of the reaction workflow and critical parameters is presented. • The method could provide a useful tool to complement existing chemical assays.
NASA Astrophysics Data System (ADS)
Kar, Supratik; Roy, Juganta K.; Leszczynski, Jerzy
2017-06-01
Advances in solar cell technology require the design of new organic dye sensitizers for dye-sensitized solar cells with high power conversion efficiency to circumvent the disadvantages of silicon-based solar cells. In silico studies, including quantitative structure-property relationship analysis combined with quantum chemical analysis, were employed to understand the primary electron transfer mechanism and photo-physical properties of 273 arylamine organic dyes from 11 diverse chemical families specific to the iodine electrolyte. The direct quantitative structure-property relationship models enable identification of the essential electronic and structural attributes necessary for quantifying the molecular prerequisites of 11 classes of arylamine organic dyes responsible for high power conversion efficiency of dye-sensitized solar cells. Tetrahydroquinoline, N,N'-dialkylaniline and indoline are among the least explored classes of arylamine organic dyes for dye-sensitized solar cells. Therefore, the properties identified from the corresponding quantitative structure-property relationship models of these classes were employed in the design of "lead dyes". Subsequently, a series of electrochemical and photo-physical parameters were computed for the designed dyes to verify the variables required for electron flow in dye-sensitized solar cells. The combined computational techniques yielded seven promising lead dyes for each of the three chemical classes considered. Significant increments (130, 183, and 46%) in predicted power conversion efficiency were observed, relative to the existing dyes with the highest experimental power conversion efficiency values, for tetrahydroquinoline, N,N'-dialkylaniline and indoline, respectively, while maintaining the required electrochemical parameters.
Two statistics for evaluating parameter identifiability and error reduction
Doherty, John; Hunt, Randall J.
2009-01-01
Two statistics are presented that can be used to rank input parameters utilized by a model in terms of their relative identifiability based on a given or possible future calibration dataset. Identifiability is defined here as the capability of model calibration to constrain parameters used by a model. Both statistics require that the sensitivity of each model parameter be calculated for each model output for which there are actual or presumed field measurements. Singular value decomposition (SVD) of the weighted sensitivity matrix is then undertaken to quantify the relation between the parameters and observations that, in turn, allows selection of calibration solution and null spaces spanned by unit orthogonal vectors. The first statistic presented, "parameter identifiability", is quantitatively defined as the direction cosine between a parameter and its projection onto the calibration solution space. This varies between zero and one, with zero indicating complete non-identifiability and one indicating complete identifiability. The second statistic, "relative error reduction", indicates the extent to which the calibration process reduces error in estimation of a parameter from its pre-calibration level where its value must be assigned purely on the basis of prior expert knowledge. This is more sophisticated than identifiability, in that it takes greater account of the noise associated with the calibration dataset. Like identifiability, it has a maximum value of one (which can only be achieved if there is no measurement noise). Conceptually it can fall to zero; and even below zero if a calibration problem is poorly posed. An example, based on a coupled groundwater/surface-water model, is included that demonstrates the utility of the statistics. © 2009 Elsevier B.V.
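A compact numerical sketch of the first statistic: identifiability is computed as the norm of each parameter's projection onto the calibration solution space obtained from an SVD of the weighted sensitivity (Jacobian) matrix. The Jacobian, weights and truncation level below are made-up placeholders, and the companion "relative error reduction" statistic is not reproduced here.

```python
import numpy as np

def parameter_identifiability(jacobian, obs_weights, n_solution_vectors):
    """
    Identifiability of each parameter: direction cosine between the parameter
    axis and its projection onto the calibration solution space (spanned by the
    first n_solution_vectors right singular vectors of the weighted Jacobian).
    """
    Q = np.diag(np.sqrt(obs_weights))           # square-root observation weights
    _, _, Vt = np.linalg.svd(Q @ jacobian, full_matrices=False)
    V_sol = Vt[:n_solution_vectors, :]          # rows span the solution space
    return np.sqrt(np.sum(V_sol ** 2, axis=0))  # one value in [0, 1] per parameter

# Placeholder example: 6 observations, 4 parameters.
rng = np.random.default_rng(4)
J = rng.normal(size=(6, 4))
J[:, 3] *= 1e-3                                 # a nearly insensitive parameter
w = np.ones(6)                                  # equal observation weights
print(parameter_identifiability(J, w, n_solution_vectors=2))
```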
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reeve, Samuel Temple; Strachan, Alejandro, E-mail: strachan@purdue.edu
We use functional (Fréchet) derivatives to quantify how thermodynamic outputs of a molecular dynamics (MD) simulation depend on the potential used to compute atomic interactions. Our approach quantifies the sensitivity of the quantities of interest with respect to the input functions, as opposed to their parameters as is done in typical uncertainty quantification methods. We show that the functional sensitivity of the average potential energy and pressure in isothermal, isochoric MD simulations using Lennard–Jones two-body interactions can be used to accurately predict those properties for other interatomic potentials (with different functional forms) without re-running the simulations. This is demonstrated under three different thermodynamic conditions, namely a crystal at room temperature, a liquid at ambient pressure, and a high pressure liquid. The method provides accurate predictions as long as the change in potential can be reasonably described to first order and does not significantly affect the region in phase space explored by the simulation. The functional uncertainty quantification approach can be used to estimate the uncertainties associated with constitutive models used in the simulation and to correct predictions if a more accurate representation becomes available.
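As a schematic of how a functional (Fréchet) sensitivity can be used, the snippet below applies a first-order correction: a discretized functional derivative of an output with respect to the pair potential is contracted with the change in the potential to predict the output for a modified potential without re-running the simulation. The sensitivity kernel, radial grid and potentials are analytic toys, not results from an MD code.

```python
import numpy as np

# Radial grid on which the pair potential and the sensitivity kernel are tabulated.
r = np.linspace(0.9, 3.0, 400)
dr = r[1] - r[0]

def lennard_jones(r, eps=1.0, sigma=1.0):
    return 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

# Toy functional derivative dA/dphi(r) of a thermodynamic output A with respect
# to the potential, peaked near the first-neighbour distance (placeholder shape;
# in practice it would be estimated from the baseline simulation).
dA_dphi = np.exp(-((r - 1.12) / 0.15) ** 2)

phi_baseline = lennard_jones(r)
phi_modified = lennard_jones(r, eps=1.05, sigma=1.01)  # a "different" potential

# First-order (Frechet) prediction of the change in the output A.
delta_A = np.sum(dA_dphi * (phi_modified - phi_baseline)) * dr
print(f"Predicted first-order change in output: {delta_A:+.4f}")
```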
The detection of He in tungsten following ion implantation by laser-induced breakdown spectroscopy
Shaw, Guinevere C.; Bannister, Mark E.; Biewer, Theodore M.; ...
2017-09-08
Laser-induced breakdown spectroscopy (LIBS) results are presented that provide depth-resolved identification of He implanted in polycrystalline tungsten (PC-W) targets by a 200 keV He+ ion beam, with a surface temperature of approximately 900 °C and a peak fluence of 10^23 m^-2. He retention, and the influence of He on deuterium and tritium recycling, permeation, and retention in PC-W plasma facing components, are important questions for the divertor and plasma facing components in a fusion reactor, yet they are difficult to quantify. The purpose of this work is to demonstrate the ability of LIBS to identify helium in tungsten; to investigate the sensitivity of laser parameters, including laser energy and gate delay, that directly influence the sensitivity and depth resolution of LIBS; and to perform a proof-of-principle experiment using LIBS to measure relative He intensities as a function of depth. In conclusion, the results presented demonstrate the potential not only to identify helium but also to develop a methodology to quantify gaseous impurity concentration in PC-W as a function of depth.
Dos Muchangos, Leticia Sarmento; Tokai, Akihiro; Hanashima, Atsuko
2017-01-01
Material flow analysis can effectively trace and quantify the flows and stocks of materials such as solid wastes in urban environments. However, the integrity of material flow analysis results is compromised by data uncertainties, an occurrence that is particularly acute in low- and middle-income study contexts. This article investigates the uncertainties in the input data and their effects in a material flow analysis study of municipal solid waste management in Maputo City, the capital of Mozambique. The analysis is based on data collected in 2007 and 2014. Initially, the uncertainties and their ranges were identified by the data classification model of Hedbrant and Sörme, followed by the application of sensitivity analysis. The average lower and upper bounds were 29% and 71%, respectively, in 2007, increasing to 41% and 96%, respectively, in 2014. This indicates higher data quality in 2007 than in 2014. Results also show not only that data are partially missing for established flows such as waste generation to final disposal, but also that they are limited and inconsistent for emerging flows and processes such as waste generation to material recovery (hence the wider variation in the 2014 parameters). The sensitivity analysis further clarified the most influential parameter, the degree of influence of each parameter on the waste flows, and the interrelations among the parameters. The findings highlight the need for an integrated municipal solid waste management approach to avoid transferring or worsening the negative impacts among the parameters and flows.
Structural reliability methods: Code development status
NASA Astrophysics Data System (ADS)
Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.
1991-05-01
The Probabilistic Structures Analysis Method (PSAM) program integrates state of the art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module NESSUS/FEM is used to model the structure and obtain structural sensitivities. Some of the capabilities of NESSUS/FEM are shown. A Fast Probability Integration module NESSUS/FPI estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI. NESSUS, version 5.0, addresses component reliability, resistance, and risk.
van den Noort, Josien C; Sloot, Lizeth H; Bruijn, Sjoerd M; Harlaar, Jaap
2017-08-16
Knee instability is a major problem in patients with anterior cruciate ligament injury or knee osteoarthritis. A valid and clinically meaningful measure for functional knee instability is lacking. The concept of the gait sensitivity norm, the normalized perturbation response of a walking system to external perturbations, could be a sensible way to quantify knee instability. The aim of this study is to explore the feasibility of this concept for measurement of knee responses, using controlled external perturbations during walking in healthy subjects. Nine young healthy participants walked on a treadmill, while three dimensional kinematics were measured. Sudden lateral translations of the treadmill were applied at five different intensities during stance. Right knee kinematic responses and spatio-temporal parameters were tracked for the perturbed stride and following four cycles, to calculate perturbation response and gait sensitivity norm values (i.e. response/perturbation) in various ways. The perturbation response values in terms of knee flexion and abduction increased with perturbation intensity and decreased with an increased number of steps after perturbation. For flexion and ab/adduction during midswing, the gait sensitivity norm values were shown to be constant over perturbation intensities, demonstrating the potential of the gait sensitivity norm as a robust measure of knee responses to perturbations. These results show the feasibility of using the gait sensitivity norm concept for certain gait indicators based on kinematics of the knee, as a measure of responses during perturbed gait. The current findings in healthy subjects could serve as reference-data to quantify pathological knee instability. Copyright © 2017 Elsevier Ltd. All rights reserved.
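A minimal sketch of the gait sensitivity norm computation: the response of a gait indicator (here, midswing knee flexion) over the strides following a perturbation is normalized by the perturbation magnitude. The indicator values and perturbation size are invented placeholders, and published definitions of the norm differ in how the response is aggregated.

```python
import numpy as np

def gait_sensitivity_norm(indicator_after, unperturbed_mean, perturbation):
    """||response|| / |perturbation| for one gait indicator over several strides."""
    response = np.asarray(indicator_after, dtype=float) - unperturbed_mean
    return np.linalg.norm(response) / abs(perturbation)

# Placeholder data: midswing knee flexion (deg) over the 5 strides after a
# 0.05 m lateral treadmill translation, compared with the unperturbed mean.
knee_flexion_after = [64.0, 61.5, 60.8, 60.2, 60.1]
print(gait_sensitivity_norm(knee_flexion_after, unperturbed_mean=60.0,
                            perturbation=0.05))
```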
NASA Astrophysics Data System (ADS)
Vagos, Márcia R.; Arevalo, Hermenegild; de Oliveira, Bernardo Lino; Sundnes, Joakim; Maleckar, Mary M.
2017-09-01
Models of cardiac cell electrophysiology are complex non-linear systems which can be used to gain insight into mechanisms of cardiac dynamics in both healthy and pathological conditions. However, the complexity of cardiac models can make mechanistic insight difficult. Moreover, these are typically fitted to averaged experimental data which do not incorporate the variability in observations. Recently, building populations of models to incorporate inter- and intra-subject variability in simulations has been combined with sensitivity analysis (SA) to uncover novel ionic mechanisms and potentially clarify arrhythmogenic behaviors. We used the Koivumäki human atrial cell model to create two populations, representing normal Sinus Rhythm (nSR) and chronic Atrial Fibrillation (cAF), by varying 22 key model parameters. In each population, 14 biomarkers related to the action potential and dynamic restitution were extracted. Populations were calibrated based on distributions of biomarkers to obtain reasonable physiological behavior, and subjected to SA to quantify correlations between model parameters and pro-arrhythmia markers. The two populations showed distinct behaviors under steady state and dynamic pacing. The nSR population revealed greater variability, and more unstable dynamic restitution, as compared to the cAF population, suggesting that simulated cAF remodeling rendered cells more stable to parameter variation and rate adaptation. SA revealed that the biomarkers depended mainly on five ionic currents, with noted differences in sensitivities to these between nSR and cAF. Also, parameters could be selected to produce a model variant with no alternans and unaltered action potential morphology, highlighting that unstable dynamical behavior may be driven by specific cell parameter settings. These results ultimately suggest that arrhythmia maintenance in cAF may not be due to instability in cell membrane excitability, but rather due to tissue-level effects which promote initiation and maintenance of reentrant arrhythmia.
Surface topography analysis and performance on post-CMP images (Conference Presentation)
NASA Astrophysics Data System (ADS)
Lee, Jusang; Bello, Abner F.; Kakita, Shinichiro; Pieniazek, Nicholas; Johnson, Timothy A.
2017-03-01
Surface topography after post-CMP processing can be measured with white light interference microscopy to determine planarity. Results are used to avoid under- or over-polishing and to decrease dishing. The numerical output of the surface topography is the RMS (root-mean-square) of the height. Beyond RMS, the topography image is visually examined and not further quantified. Subjective comparisons of the height maps are used to determine optimum CMP process conditions. While visual comparison of height maps can detect excursions, it relies on manual inspection of the images. In this work we describe methods of quantifying post-CMP surface topography characteristics that are used in other technical fields such as geography and facial recognition. The topography image is divided into small surface patches of 7x7 pixels. Each surface patch is fitted to an analytic surface equation, in this case a third-order polynomial, from which the gradient, directional derivatives, and other characteristics are calculated. Based on these characteristics, the surface patch is labeled as peak, ridge, flat, saddle, ravine, pit or hillside. The count of each label, and thus the associated histogram, is then used as a quantified characteristic of the surface topography and could be used as a parameter for SPC (statistical process control) charting. In addition, the gradient for each surface patch is calculated, so the average, maximum, and other characteristics of the gradient distribution can be used for SPC. Repeatability measurements indicate high confidence, with individual label counts showing less than 2% relative standard deviation. When the histogram is considered, an associated chi-squared value can be defined against which to compare other measurements. The chi-squared value of the histogram is a very sensitive and quantifiable parameter for determining within-wafer and wafer-to-wafer topography non-uniformity. For the gradient histogram distribution, a chi-squared value can again be calculated and used as yet another quantifiable parameter for SPC. In this work we measured the post-Cu-CMP topography of a die designed for 14 nm technology. A region of interest (ROI) known to be indicative of the CMP processing was chosen for the topography analysis. The ROI, of size 1800 x 2500 pixels where each pixel represents 2 um, was measured repeatedly. We show the sensitivity based on these measurements and the comparison between center- and edge-die measurements. The topography measurements and surface patch analysis were applied to hundreds of images representing the periodic process qualification runs required to control and verify CMP performance and tool matching. The analysis is shown to be sensitive to process conditions that vary in polishing time, type of slurry, CMP tool manufacturer, and CMP pad lifetime. Keywords: CMP, Topography, Image Processing, Metrology, Interference microscopy, surface processing
Fluorescence lifetime as a new parameter in analytical cytology measurements
NASA Astrophysics Data System (ADS)
Steinkamp, John A.; Deka, Chiranjit; Lehnert, Bruce E.; Crissman, Harry A.
1996-05-01
A phase-sensitive flow cytometer has been developed to quantify fluorescence decay lifetimes on fluorochrome-labeled cells/particles. This instrument combines flow cytometry (FCM) and frequency-domain fluorescence spectroscopy measurement principles to provide unique capabilities for making phase-resolved lifetime measurements, while preserving conventional FCM capabilities. Cells are analyzed as they intersect a high-frequency, intensity-modulated (sine wave) laser excitation beam. Fluorescence signals are processed by conventional and phase-sensitive signal detection electronics and displayed as frequency distribution histograms. In this study we describe results of fluorescence intensity and lifetime measurements on fluorescently labeled particles, cells, and chromosomes. Examples of measurements on intrinsic cellular autofluorescence, cells labeled with immunofluorescence markers for cell- surface antigens, mitochondria stains, and on cellular DNA and protein binding fluorochromes will be presented to illustrate unique differences in measured lifetimes and changes caused by fluorescence quenching. This innovative technology will be used to probe fluorochrome/molecular interactions in the microenvironment of cells/chromosomes as a new parameter and thus expand the researchers' understanding of biochemical processes and structural features at the cellular and molecular level.
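For orientation, the snippet below shows the standard frequency-domain relations behind phase-resolved lifetime measurements: the phase lifetime tau_phi = tan(delta_phi)/omega and the modulation lifetime tau_m = sqrt(1/m^2 - 1)/omega. The modulation frequency, phase shift and demodulation values are illustrative numbers, not data from this instrument.

```python
import numpy as np

def phase_lifetime(phase_shift_deg, mod_freq_hz):
    """Fluorescence lifetime from the phase shift of the modulated emission."""
    omega = 2.0 * np.pi * mod_freq_hz
    return np.tan(np.radians(phase_shift_deg)) / omega

def modulation_lifetime(demodulation, mod_freq_hz):
    """Fluorescence lifetime from the demodulation ratio m (0 < m < 1)."""
    omega = 2.0 * np.pi * mod_freq_hz
    return np.sqrt(1.0 / demodulation ** 2 - 1.0) / omega

# Illustrative values: 30 MHz modulation, 20.5 deg phase shift, m = 0.93.
f = 30e6
print("tau_phase = %.2f ns" % (phase_lifetime(20.5, f) * 1e9))
print("tau_mod   = %.2f ns" % (modulation_lifetime(0.93, f) * 1e9))
```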
Why morphology matters in birds and UAV's: How scale affects attitude wind sensitivity
NASA Astrophysics Data System (ADS)
Gamble, L. L.; Inman, D. J.
2017-11-01
Although natural fliers have been shown to morph their geometry to adapt to unfavorable wind loading, there exists heavy skepticism within the aviation community regarding the benefits and necessity of morphing aircraft technology. Here, we develop a vector derivation that characterizes how high winds affect the overall flight velocity and sideslip for both natural and manmade fliers. This derivation is formulated in such a way that only a single non-dimensional velocity parameter is needed to quantify the response. We show mathematically that in high winds, low-altitude fliers are more prone to substantial changes in the sideslip angle, struggle to maintain gliding velocity, and experience five times the peak sideslip sensitivity when compared to high-altitude fliers. In order to counteract these adverse changes, low-altitude fliers require a high degree of controllability which can be achieved through extreme morphological changes. The results presented here highlight the importance of integrating morphing concepts into future low-altitude aircraft designs and provide a formulation to help designers decide whether or not to pursue adaptive morphing technology based on a single readily determinable parameter.
In vivo assessment of peripheral nerve regeneration by diffusion tensor imaging.
Morisaki, Shinsuke; Kawai, Yuko; Umeda, Masahiro; Nishi, Mayumi; Oda, Ryo; Fujiwara, Hiroyoshi; Yamada, Kei; Higuchi, Toshihiro; Tanaka, Chuzo; Kawata, Mitsuhiro; Kubo, Toshikazu
2011-03-01
To evaluate the sensitivity of diffusion tensor imaging (DTI) in assessing peripheral nerve regeneration in vivo. We assessed the changes in the DTI parameters and histological analyses after nerve injury to examine degeneration and regeneration in the rat sciatic nerves. For magnetic resonance imaging (MRI), 16 rats were randomly divided into two groups: group P (permanently crushed; n = 7) and group T (temporarily crushed; n = 9). Serial MRI of the right leg was performed before the operation and then at 1, 2, 3, and 4 weeks after the crush injury. The changes in fractional anisotropy (FA), axial diffusivity (λ∥), and radial diffusivity (λ⊥) were quantified. For histological analyses, the number of axons and the myelinated axon areas were quantified. Decreased FA and increased λ⊥ were observed in the degenerative phase, and increased FA and decreased λ⊥ were observed in the regenerative phase. The changes in FA and λ⊥ were strongly correlated with histological changes, including axonal and myelin regeneration. DTI parameters, especially λ⊥, can be good indicators of peripheral nerve regeneration and can be applied as noninvasive diagnostic tools for a variety of neurological diseases. Copyright © 2011 Wiley-Liss, Inc.
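The DTI parameters reported above follow directly from the eigenvalues of the diffusion tensor; a short sketch of those standard definitions is given below, using made-up eigenvalues rather than values from the rat sciatic nerve data.

```python
import numpy as np

def dti_parameters(eigenvalues):
    """FA, axial diffusivity (lambda_par) and radial diffusivity (lambda_perp)
    from the eigenvalues of the diffusion tensor."""
    l1, l2, l3 = np.sort(eigenvalues)[::-1]       # l1 >= l2 >= l3
    md = (l1 + l2 + l3) / 3.0
    fa = np.sqrt(1.5 * ((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
                 / (l1 ** 2 + l2 ** 2 + l3 ** 2))
    return fa, l1, (l2 + l3) / 2.0

# Illustrative eigenvalues in mm^2/s (placeholder, not measured data).
fa, axial, radial = dti_parameters([1.5e-3, 0.4e-3, 0.3e-3])
print(f"FA = {fa:.2f}, lambda_par = {axial:.2e}, lambda_perp = {radial:.2e}")
```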
Real-time monitoring of thermodynamic microenvironment in a pan coater.
Pandey, Preetanshu; Bindra, Dilbir S
2013-02-01
The current study demonstrates the use of tablet-size data logging devices (PyroButtons) to quantify the microenvironment experienced by tablets during pan coating process. PyroButtons were fixed at the inlet and exhaust plenums, and were also placed to freely move with the tablets. The effects of process parameters (spray rate and inlet-air humidity) on the thermodynamic conditions inside the pan coater were studied. It was shown that the same exhaust temperature (a parameter most commonly monitored and controlled during film coating) can be attained with very different tablet-bed conditions. The tablet-bed conditions were found to be more sensitive to the changes in spray rate as compared with the inlet-air humidity. Both spray rate and inlet-air humidity were shown to have an effect on the number of tablet defects (loss of logo definition), and a good correlation between number of tablet defects and tablet-bed humidity was observed. The ability to quantify the thermodynamic microenvironment experienced by the tablets during coating and be able to correlate that to macroscopic tablet defects can be an invaluable tool that can help to establish a process design space during product development. Copyright © 2012 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Steinschneider, S.; Wi, S.; Brown, C. M.
2013-12-01
Flood risk management performance is investigated within the context of integrated climate and hydrologic modeling uncertainty to explore system robustness. The research question investigated is whether structural and hydrologic parameterization uncertainties are significant relative to other uncertainties such as climate change when considering water resources system performance. Two hydrologic models are considered, a conceptual, lumped parameter model that preserves the water balance and a physically-based model that preserves both water and energy balances. In the conceptual model, parameter and structural uncertainties are quantified and propagated through the analysis using a Bayesian modeling framework with an innovative error model. Mean climate changes and internal climate variability are explored using an ensemble of simulations from a stochastic weather generator. The approach presented can be used to quantify the sensitivity of flood protection adequacy to different sources of uncertainty in the climate and hydrologic system, enabling the identification of robust projects that maintain adequate performance despite the uncertainties. The method is demonstrated in a case study for the Coralville Reservoir on the Iowa River, where increased flooding over the past several decades has raised questions about potential impacts of climate change on flood protection adequacy.
The Influence of Boundary Layer Parameters on Interior Noise
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.; Rocha, Joana
2012-01-01
Predictions of the wall pressure in the turbulent boundary of an aerospace vehicle can differ substantially from measurement due to phenomena that are not well understood. Characterizing the phenomena will require additional testing at considerable cost. Before expending scarce resources, it is desired to quantify the effect of the uncertainty in wall pressure predictions and measurements on structural response and acoustic radiation. A sensitivity analysis is performed on four parameters of the Corcos cross spectrum model: power spectrum, streamwise and cross stream coherence lengths and Mach number. It is found that at lower frequencies where high power levels and long coherence lengths exist, the radiated sound power prediction has up to 7 dB of uncertainty in power spectrum levels with streamwise and cross stream coherence lengths contributing equally to the total.
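A small sketch of the Corcos-type cross-spectrum underlying the four parameters varied in the analysis is given below; the exponential-coherence form and the particular decay coefficients are a common textbook parameterization and are assumptions here, not the exact settings used in the study.

```python
import numpy as np

def corcos_cross_spectrum(phi_pp, omega, xi_x, xi_y, u_c, alpha_x=0.1, alpha_y=0.77):
    """
    Corcos-type wall-pressure cross-spectrum between two points separated by xi_x
    (streamwise) and xi_y (cross-stream) at angular frequency omega. phi_pp is the
    single-point wall-pressure power spectrum and u_c the convection velocity
    (typically a fraction of the free-stream speed, hence the Mach-number dependence);
    alpha_x and alpha_y set the streamwise and cross-stream coherence lengths.
    """
    decay = np.exp(-alpha_x * np.abs(omega * xi_x) / u_c
                   - alpha_y * np.abs(omega * xi_y) / u_c)
    return phi_pp * decay * np.exp(-1j * omega * xi_x / u_c)

# Illustrative numbers: 500 Hz, 0.05 m streamwise separation, u_c = 0.7 * 240 m/s.
omega = 2.0 * np.pi * 500.0
print(corcos_cross_spectrum(1.0, omega, xi_x=0.05, xi_y=0.0, u_c=0.7 * 240.0))
```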
Thakran, S; Gupta, P K; Kabra, V; Saha, I; Jain, P; Gupta, R K; Singh, A
2018-06-14
The objective of this study was to quantify hemodynamic parameters using first-pass analysis of T1-perfusion magnetic resonance imaging (MRI) data of the human breast and to compare these parameters with the existing tracer kinetic parameters and with semi-quantitative and qualitative T1-perfusion analysis in terms of lesion characterization. MRI of the breast was performed in 50 women (mean age, 44 ± 11 [SD] years; range: 26-75 years) with a total of 15 benign and 35 malignant breast lesions. After pre-processing, T1-perfusion MRI data were analyzed using a qualitative approach by two radiologists (visual classification of the kinetic curve into types I, II or III), a semi-quantitative approach (characterization of kinetic curve types using empirical parameters), a generalized tracer kinetic model (tracer kinetic parameters) and first-pass analysis (hemodynamic parameters). The chi-squared test, t-test, one-way analysis of variance (ANOVA) with Bonferroni post-hoc test and receiver operating characteristic (ROC) curves were used for statistical analysis. All quantitative parameters except leakage volume (Ve), as well as qualitative (type I and III) and semi-quantitative curves (type I and III), provided significant differences (P < 0.05) between benign and malignant lesions. Kinetic parameters, particularly the volume transfer coefficient (Ktrans), provided a significant difference (P < 0.05) between all grades except grade II vs III. The hemodynamic parameter (relative leakage-corrected breast blood volume [rBBVcorr]) provided a statistically significant difference (P < 0.05) between all grades. It also provided the highest sensitivity and specificity among all parameters in differentiating between different grades of malignant breast lesions. Quantitative parameters, particularly rBBVcorr and Ktrans, provided similar sensitivity and specificity in differentiating benign from malignant breast lesions for this cohort. Moreover, rBBVcorr provided the best differentiation between different grades of malignant breast lesions among all the parameters. Copyright © 2018. Published by Elsevier Masson SAS.
NASA Astrophysics Data System (ADS)
Simmel, Martin; Bühl, Johannes; Ansmann, Albert; Tegen, Ina
2015-04-01
The present work combines remote sensing observations and detailed microphysics cloud modeling to investigate two altocumulus cloud cases observed over Leipzig, Germany. A suite of remote sensing instruments was able to detect primary ice at rather warm temperatures of -6°C. For comparison, a second mixed phase case at about -25°C is introduced. To further look into the details of cloud microphysical processes a simple dynamics model of the Asai-Kasahara type is combined with detailed spectral microphysics forming the model system AK-SPECS. Temperature and humidity profiles are taken either from observation (radiosonde) or GDAS reanalysis. Vertical velocities are prescribed to force the dynamics as well as main cloud features to be close to the observations. Subsequently, sensitivity studies with respect to dynamical as well as ice microphysical parameters are carried out with the aim to quantify the most important sensitivities for the cases investigated. For the cases selected, the liquid phase is mainly determined by the model dynamics (location and strength of vertical velocity) whereas the ice phase is much more sensitive to the microphysical parameters (ice nuclei (IN) number, ice particle shape). The choice of ice particle shape may induce large uncertainties which are in the same order as those for the temperature-dependent IN number distribution.
Probabilistic Modeling of Ceramic Matrix Composite Strength
NASA Technical Reports Server (NTRS)
Shan, Ashwin R.; Murthy, Pappu L. N.; Mital, Subodh K.; Bhatt, Ramakrishna T.
1998-01-01
Uncertainties associated with the primitive random variables such as manufacturing process (processing temperature, fiber volume ratio, void volume ratio), constituent properties (fiber, matrix and interface), and geometric parameters (ply thickness, interphase thickness) have been simulated to quantify the scatter in the first matrix cracking strength (FMCS) and the ultimate tensile strength of SCS-6/RBSN (SiC fiber (SCS-6) reinforced reaction-bonded silicon nitride composite) ceramic matrix composite laminates at room temperature. Cumulative probability distribution functions for the FMCS and ultimate tensile strength at room temperature (RT) of (0)_8, (0_2/90_2)_S, and (±45_2)_S laminates have been simulated, and the sensitivity of primitive variables to the respective strengths has been quantified. The computationally predicted scatter of the strengths for a uniaxial laminate has been compared with limited experimental data, and the experimental procedure used in the tests is described briefly. Results show a very good agreement between the computational simulation and the experimental data. Dominating failure modes in (0)_8, (0/90)_S and (±45)_S laminates have been identified. Results indicate that the first matrix cracking strength for the (0)_8 and (0/90)_S laminates is sensitive to the thermal properties, modulus and strengths of both the fiber and matrix, whereas the ultimate tensile strength is sensitive to the fiber strength and the fiber volume ratio. In the case of a (±45)_S laminate, both the FMCS and the ultimate tensile strength have a small scatter range and are sensitive to the fiber tensile strength as well as the fiber volume ratio.
Gama-Arachchige, N. S.; Baskin, J. M.; Geneve, R. L.; Baskin, C. C.
2013-01-01
Background and Aims Physical dormancy (PY)-break in some annual plant species is a two-step process controlled by two different temperature and/or moisture regimes. The thermal time model has been used to quantify PY-break in several species of Fabaceae, but not to describe stepwise PY-break. The primary aims of this study were to quantify the thermal requirement for sensitivity induction by developing a thermal time model and to propose a mechanism for stepwise PY-breaking in the winter annual Geranium carolinianum. Methods Seeds of G. carolinianum were stored under dry conditions at different constant and alternating temperatures to induce sensitivity (step I). Sensitivity induction was analysed based on the thermal time approach using the Gompertz function. The effect of temperature on step II was studied by incubating sensitive seeds at low temperatures. Scanning electron microscopy, penetrometer techniques, and different humidity levels and temperatures were used to explain the mechanism of stepwise PY-break. Key Results The base temperature (Tb) for sensitivity induction was 17·2 °C and constant for all seed fractions of the population. Thermal time for sensitivity induction during step I in the PY-breaking process agreed with the three-parameter Gompertz model. Step II (PY-break) did not agree with the thermal time concept. Q10 values for the rate of sensitivity induction and PY-break were between 2·0 and 3·5 and between 0·02 and 0·1, respectively. The force required to separate the water gap palisade layer from the sub-palisade layer was significantly reduced after sensitivity induction. Conclusions Step I and step II in PY-breaking of G. carolinianum are controlled by chemical and physical processes, respectively. This study indicates the feasibility of applying the developed thermal time model to predict or manipulate sensitivity induction in seeds with two-step PY-breaking processes. The model is the first and most detailed one yet developed for sensitivity induction in PY-break. PMID:23456728
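The quantities used in the thermal time analysis can be summarized in a short sketch: thermal time accumulates only above the base temperature Tb, the fraction of sensitive seeds is modelled with a three-parameter Gompertz function of that thermal time, and Q10 compares rates measured 10 °C apart. The Gompertz coefficients and the example storage temperature below are illustrative, not the fitted values from the study.

```python
import numpy as np

T_BASE = 17.2  # base temperature (deg C) for sensitivity induction, from the study

def thermal_time(temps_c, dt_days=1.0, t_base=T_BASE):
    """Accumulated thermal time (degree-days) above the base temperature."""
    temps_c = np.asarray(temps_c, dtype=float)
    return float(np.sum(np.maximum(temps_c - t_base, 0.0) * dt_days))

def gompertz(theta, a, b, c):
    """Three-parameter Gompertz curve for the fraction of sensitive seeds."""
    return a * np.exp(-np.exp(b - c * theta))

def q10(rate1, rate2, t1, t2):
    """Q10 temperature coefficient between rates measured at t1 and t2 (deg C)."""
    return (rate2 / rate1) ** (10.0 / (t2 - t1))

# Example: 30 days of dry storage at a constant 25 deg C (illustrative).
theta = thermal_time(np.full(30, 25.0))
print("thermal time = %.1f degree-days" % theta)
print("sensitive fraction = %.2f" % gompertz(theta, a=0.95, b=2.0, c=0.02))
print("Q10 = %.1f" % q10(0.01, 0.025, 20.0, 30.0))
```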
Pedestrians' vulnerability in floodwaters: sensitivity to gender and age
NASA Astrophysics Data System (ADS)
Arrighi, Chiara; Castelli, Fabio
2017-04-01
Among the causes of fatalities during floods, loss of stability is an aspect that has usually been investigated with conceptual models and laboratory experiments. The human body geometry has often been simplified to derive mechanical equilibrium conditions for toppling and sliding due to weight and hydrodynamic actions. Experimental work has produced water depth versus velocity diagrams showing the critical conditions for people partly immersed in floodwaters, whose scatter reflects the large variability of the tested subjects (i.e. children, men and women with different physical characteristics). Nevertheless, the proposed hazard criteria based on the product number HV are not capable of distinguishing between different subjects. A dimensionless approach with a limited number of parameters, combined with 3D numerical simulations, highlights the significance of subject height and quantifies the drag forces different subjects are able to withstand. From the mechanical point of view, this approach significantly reduces the experimental scatter. Differences in subject height are already evidence of gender differences; however, many other parameters such as age and skeletal muscle mass may play a significant role in individual responses to floodwater actions, and can be responsible for the residual unexplained variance. In this work, a sensitivity analysis of critical instability conditions with respect to gender/age-related parameters is carried out, and results and implications for flood risk management are discussed.
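A textbook-style rigid-body simplification (not the authors' dimensionless 3D model) can illustrate why a single depth-velocity product cannot represent all subjects: the critical velocity from a simple toppling moment balance depends on body mass, frontal width and foot length, all of which are hypothetical placeholders below, and buoyancy and sliding are ignored.

    RHO = 1000.0   # water density, kg/m^3
    G = 9.81       # gravitational acceleration, m/s^2

    def toppling_critical_velocity(depth, mass, body_width=0.4, drag_coeff=1.1, foot_lever=0.13):
        """Velocity at which the hydrodynamic moment about the heel equals the
        restoring moment of the subject's weight (buoyancy neglected)."""
        area = body_width * depth                 # submerged frontal area
        restoring = mass * G * foot_lever         # weight times heel-to-toe lever arm
        # Solve 0.5*rho*Cd*A*v^2 * (depth/2) = restoring for v.
        return (4.0 * restoring / (RHO * drag_coeff * area * depth)) ** 0.5

    for depth in (0.3, 0.6, 0.9):
        v = toppling_critical_velocity(depth, mass=75.0)
        print(f"depth {depth:.1f} m: critical velocity {v:.2f} m/s, HV = {depth * v:.2f} m^2/s")

Because the critical HV product shifts with the assumed subject parameters, gender- or age-dependent values move the stability threshold, which is the effect the sensitivity analysis above explores.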
NASA Astrophysics Data System (ADS)
Rössler, Erik; Mattea, Carlos; Stapf, Siegfried
2015-02-01
Low field Nuclear Magnetic Resonance increases the contrast of the longitudinal relaxation rate in many biological tissues; one prominent example is hyaline articular cartilage. In order to take advantage of this increased contrast and to profile the depth-dependent variations, high resolution parameter measurements are carried out which can be of critical importance in an early diagnosis of cartilage diseases such as osteoarthritis. However, the maximum achievable spatial resolution of parameter profiles is limited by factors such as sensor geometry, sample curvature, and diffusion limitation. In this work, we report on high-resolution single-sided NMR scanner measurements with a commercial device, and quantify these limitations. The highest achievable spatial resolution on the used profiler, and the lateral dimension of the sensitive volume were determined. Since articular cartilage samples are usually bent, we also focus on averaging effects inside the horizontally aligned sensitive volume and their impact on the relaxation profiles. Taking these critical parameters into consideration, depth-dependent relaxation time profiles with the maximum achievable vertical resolution of 20 μm are discussed, and are correlated with diffusion coefficient profiles in hyaline articular cartilage in order to reconstruct T2 maps from the diffusion-weighted CPMG decays of apparent relaxation rates.
Liu, Hao; Liu, Haodong; Lapidus, Saul H.; ...
2017-06-21
Lithium transition metal oxides are an important class of electrode materials for lithium-ion batteries. Binary or ternary (transition) metal doping brings about new opportunities to improve the electrode’s performance and often leads to more complex stoichiometries and atomic structures than the archetypal LiCoO2. Rietveld structural analysis of X-ray and neutron diffraction data is a widely used approach for the structural characterization of crystalline materials. However, different structural models and refinement approaches can lead to differing results, and some parameters can be difficult to quantify due to the inherent limitations of the data. Here, through the example of LiNi0.8Co0.15Al0.05O2 (NCA), we demonstrated the sensitivity of various structural parameters in Rietveld structural analysis to different refinement approaches and structural models, and proposed an approach to reduce refinement uncertainties due to the inexact X-ray scattering factors of the constituent atoms within the lattice. Furthermore, this refinement approach was implemented for electrochemically cycled NCA samples and yielded accurate structural parameters using only X-ray diffraction data. The present work provides best practices for performing structural refinement of lithium transition metal oxides.
Astakhova, Luba; Firsov, Michael
2015-01-01
Purpose To experimentally identify and quantify factors responsible for the lower sensitivity of retinal cones compared to rods. Methods Electrical responses of frog rods and fish (Carassius) cones to short flashes of light were recorded using the suction pipette technique. A fast solution changer was used to apply a solution that fixed intracellular Ca2+ concentration at the prestimulus level, thereby disabling Ca2+ feedback, to the outer segment (OS). The results were analyzed with a specially designed mathematical model of phototransduction. The model included all basic processes of activation and quenching of the phototransduction cascade but omitted unnecessary mechanistic details of each step. Results Judging from the response versus intensity curves, Carassius cones were two to three orders of magnitude less sensitive than frog rods. There was a large scatter in sensitivity among individual cones, with red-sensitive cones being on average approximately two times less sensitive than green-sensitive ones. The scatter was mostly due to different signal amplification, since the kinetic parameters of the responses among cones were far less variable than sensitivity. We argue that the generally accepted definition of the biochemical amplification in phototransduction cannot be used for comparing amplification in rods and cones, since it depends on an irrelevant factor, that is, the cell’s volume. We also show that the routinely used simplified parabolic curve fitting to an initial phase of the response leads to a few-fold underestimate of the amplification. We suggest a new definition of the amplification that only includes molecular parameters of the cascade activation, and show how it can be derived from experimental data. We found that the mathematical model with unrestrained parameters can yield an excellent fit to experimental responses. However, the fits with wildly different sets of parameters can be virtually indistinguishable, and therefore cannot provide meaningful data on underlying mechanisms. Based on results of Ca2+-clamp experiments, we developed an approach to strongly constrain the values of many key parameters that set the time course and sensitivity of the photoresponse (such as the dark turnover rate of cGMP, rates of turnoffs of the photoactivated visual pigment and phosphodiesterase, and kinetics of Ca2+ feedback). We show that applying these constraints to our mathematical model enables accurate determination of the biochemical amplification in phototransduction. It appeared that, contrary to many suggestions, maximum biochemical amplification derived for “best” Carassius cones was as high as in frog rods. On the other hand, all turnoff and recovery reactions in cones proceeded approximately 10 times faster than in rods. Conclusions The main cause of the differing sensitivity of rods and cones is cones’ ability to terminate their photoresponse faster. PMID:25866462
Effects of railway track design on the expected degradation: Parametric study on energy dissipation
NASA Astrophysics Data System (ADS)
Sadri, Mehran; Steenbergen, Michaël
2018-04-01
This paper studies the effect of railway track design parameters on the expected long-term degradation of track geometry. The study assumes a geometrically perfect and straight track along with spatial invariability, except for the presence of discrete sleepers. A frequency-domain two-layer model of a discretely supported rail coupled with a moving unsprung mass is used. The susceptibility of the track to degradation is objectively quantified by calculating the mechanical energy dissipated in the substructure under a moving train axle for variations of different track parameters. Results show that, apart from the operational train speed, the ballast/substructure stiffness is the most significant parameter influencing energy dissipation. Generally, the degradation increases with the train speed and with softer substructures. However, stiff subgrades appear more sensitive to particular train velocities, in a regime which is mostly relevant for conventional trains (100-200 km/h) and less for high-speed operation, where a stiff subgrade is always favorable and can reduce the sensitivity to degradation substantially, by up to a factor of roughly 7. Railpad stiffness, sleeper distance and rail cross-sectional properties are also found to have a considerable effect, with higher expected degradation rates for increasing railpad stiffness, increasing sleeper distance and decreasing rail profile bending stiffness. Unsprung vehicle mass and sleeper mass have no significant influence, although this holds only under the assumption of an idealized (invariant and straight) track. Apart from dissipated mechanical energy, the suitability of the dynamic track stiffness as an engineering parameter to assess the sensitivity to degradation is explored. It is found that this quantity is inappropriate for assessing the design of an idealized track.
Impact of AMS-02 Measurements on Reducing GCR Model Uncertainties
NASA Technical Reports Server (NTRS)
Slaba, T. C.; O'Neill, P. M.; Golge, S.; Norbury, J. W.
2015-01-01
For vehicle design, shield optimization, mission planning, and astronaut risk assessment, the exposure from galactic cosmic rays (GCR) poses a significant and complex problem both in low Earth orbit and in deep space. To address this problem, various computational tools have been developed to quantify the exposure and risk in a wide range of scenarios. Generally, the tool used to describe the ambient GCR environment provides the input into subsequent computational tools and is therefore a critical component of end-to-end procedures. Over the past few years, several researchers have independently and very carefully compared some of the widely used GCR models to more rigorously characterize model differences and quantify uncertainties. All of the GCR models studied rely heavily on calibrating to available near-Earth measurements of GCR particle energy spectra, typically over restricted energy regions and short time periods. In this work, we first review recent sensitivity studies quantifying the ions and energies in the ambient GCR environment of greatest importance to exposure quantities behind shielding. Currently available measurements used to calibrate and validate GCR models are also summarized within this context. It is shown that the AMS-II measurements will fill a critically important gap in the measurement database. The emergence of AMS-II measurements also provides a unique opportunity to validate existing models against measurements that were not used to calibrate free parameters in the empirical descriptions. Discussion is given regarding rigorous approaches to implement the independent validation efforts, followed by recalibration of empirical parameters.
Verification and Validation of Residual Stresses in Bi-Material Composite Rings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Stacy Michelle; Hanson, Alexander Anthony; Briggs, Timothy
Process-induced residual stresses commonly occur in composite structures composed of dissimilar materials. These residual stresses form due to differences in the composite materials’ coefficients of thermal expansion and the shrinkage upon cure exhibited by polymer matrix materials. Depending upon the specific geometric details of the composite structure and the materials’ curing parameters, it is possible that these residual stresses could result in interlaminar delamination or fracture within the composite. Therefore, the consideration of potential residual stresses is important when designing composite parts and their manufacturing processes. However, the experimental determination of residual stresses in prototype parts can be time and cost prohibitive. As an alternative to physical measurement, it is possible for computational tools to be used to quantify potential residual stresses in composite prototype parts. Therefore, the objectives of the presented work are to demonstrate a simple method for simulating residual stresses in composite parts, as well as the potential value of sensitivity and uncertainty quantification techniques during analyses for which material property parameters are unknown. Specifically, a simplified residual stress modeling approach, which accounts for coefficient of thermal expansion mismatch and polymer shrinkage, is implemented within the SIERRA/SolidMechanics code developed by Sandia National Laboratories. Concurrent with the model development, two simple, bi-material structures composed of a carbon fiber/epoxy composite and aluminum, a flat plate and a cylinder, are fabricated and the residual stresses are quantified through the measurement of deformation. Then, in the process of validating the developed modeling approach with the experimental residual stress data, manufacturing process simulations of the two simple structures are developed and undergo a formal verification and validation process, including a mesh convergence study, sensitivity analysis, and uncertainty quantification. The simulations’ final results show adequate agreement with the experimental measurements, indicating the validity of a simple modeling approach, as well as a necessity for the inclusion of material parameter uncertainty in the final residual stress predictions.
NASA Astrophysics Data System (ADS)
Doummar, Joanna; Kassem, Assaad
2017-04-01
In the framework of a three-year PEER (USAID/NSF) funded project, flow in a karst system in Lebanon (Assal) dominated by snow and semi-arid conditions was simulated and successfully calibrated using an integrated numerical model (MIKE-She 2016) based on high resolution input data and detailed catchment characterization. Point source infiltration and fast flow pathways were simulated by a bypass function and a highly conductive lens, respectively. The approach consisted of identifying all the factors used in qualitative vulnerability methods (COP, EPIK, PI, DRASTIC, GOD) applied in karst systems and assessing their influence on recharge signals in the different hydrological karst compartments (atmosphere, unsaturated zone and saturated zone) based on the integrated numerical model. These parameters are usually attributed different weights according to their estimated impact on groundwater vulnerability. The aim of this work is to quantify the importance of each of these parameters and outline parameters that are not accounted for in standard methods, but that might play a role in the vulnerability of a system. The spatial distribution of the detailed evapotranspiration, infiltration, and recharge signals from atmosphere to unsaturated zone to saturated zone was compared and contrasted among different surface settings and under varying flow conditions (e.g., varying slopes, land cover, precipitation intensity, and soil properties, as well as point source infiltration). Furthermore, a sensitivity analysis of individual or coupled major parameters allows their impact on recharge, and indirectly on vulnerability, to be quantified. The preliminary analysis yields a new methodology that accounts for most of the factors influencing vulnerability while refining the weights attributed to each one of them, based on a quantitative approach.
van der Ster, Björn J P; Bennis, Frank C; Delhaas, Tammo; Westerhof, Berend E; Stok, Wim J; van Lieshout, Johannes J
2017-01-01
Introduction: In the initial phase of hypovolemic shock, mean blood pressure (BP) is maintained by sympathetically mediated vasoconstriction, rendering BP monitoring insensitive to detect blood loss early. Late detection can result in reduced tissue oxygenation and eventually cellular death. We hypothesized that a machine learning algorithm that interprets currently used and new hemodynamic parameters could facilitate the detection of impending hypovolemic shock. Method: In 42 (27 female) young [mean (SD): 24 (4) years] healthy subjects, central blood volume (CBV) was progressively reduced by application of -50 mmHg lower body negative pressure until the onset of pre-syncope. A support vector machine was trained to classify samples into normovolemia (class 0), initial phase of CBV reduction (class 1) or advanced CBV reduction (class 2). Nine models making use of different features were computed to compare the sensitivity and specificity of different non-invasively derived hemodynamic signals. Model features included: volumetric hemodynamic parameters (stroke volume and cardiac output), BP curve dynamics, near-infrared spectroscopy determined cortical brain oxygenation, end-tidal carbon dioxide pressure, thoracic bio-impedance, and middle cerebral artery transcranial Doppler (TCD) blood flow velocity. Model performance was tested by quantifying the predictions with three methods: sensitivity and specificity, absolute error, and quantification of the log odds ratio of class 2 vs. class 0 probability estimates. Results: The combination with maximal sensitivity and specificity for classes 1 and 2 was found for the model comprising volumetric features (class 1: 0.73-0.98 and class 2: 0.56-0.96). The overall lowest model error was found for the models comprising TCD curve hemodynamics. Using probability estimates, the best combination of sensitivity for class 1 (0.67) and specificity (0.87) was found for the model that contained the TCD cerebral blood flow velocity derived pulse height. The highest combination for class 2 was found for the model with the volumetric features (0.72 and 0.91). Conclusion: The most sensitive models for the detection of advanced CBV reduction comprised data that describe features from volumetric parameters and from cerebral blood flow velocity hemodynamics. In a validated model of hemorrhage in humans, these parameters provide the best indication of the progression of central hypovolemia.
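A minimal sketch of the classification step, using simulated feature vectors rather than the study's lower-body-negative-pressure recordings: a support vector machine is trained on three classes, and per-class sensitivity and specificity are computed from the confusion matrix. Feature names, class separations and all numbers are artificial.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import confusion_matrix

    rng = np.random.default_rng(0)
    n_per_class, n_features = 200, 6          # e.g. stroke volume, cardiac output, TCD pulse height, ...
    X = np.vstack([rng.normal(loc=shift, scale=1.0, size=(n_per_class, n_features))
                   for shift in (0.0, 0.8, 1.6)])   # class separation is artificial
    y = np.repeat([0, 1, 2], n_per_class)           # 0 = normovolemia, 1 = initial, 2 = advanced reduction

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
    clf = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)

    cm = confusion_matrix(y_te, clf.predict(X_te))
    for k in range(3):
        tp = cm[k, k]
        fn = cm[k].sum() - tp
        fp = cm[:, k].sum() - tp
        tn = cm.sum() - tp - fn - fp
        print(f"class {k}: sensitivity {tp / (tp + fn):.2f}, specificity {tn / (tn + fp):.2f}")

    # Log odds of class 2 vs. class 0 probability estimates (cf. the third evaluation method).
    proba = clf.predict_proba(X_te)
    print("median log-odds (class 2 vs class 0):",
          round(float(np.median(np.log(proba[:, 2] / proba[:, 0]))), 2))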
Conesa, Celia; FitzGerald, Richard J
2013-10-23
The kinetics and thermodynamics of the thermal inactivation of Corolase PP in two different whey protein concentrate (WPC) hydrolysates with degree of hydrolysis (DH) values of ~10 and 21%, and at different total solids (TS) levels (from 5 to 30% w/v), were studied. Inactivation studies were performed in the temperature range from 60 to 75 °C, and residual enzyme activity was quantified using the azocasein assay. The inactivation kinetics followed a first-order model. Analysis of the activation energy, thermodynamic parameters, and D and z values demonstrated that the inactivation of Corolase PP was dependent on solution TS. The intestinal enzyme preparation was more heat sensitive at low TS. It was also found that the enzyme was more heat sensitive in solutions at higher DH.
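The first-order kinetics and derived parameters mentioned above can be illustrated with a short sketch: the rate constant k comes from the slope of ln(residual activity) versus time, the D value is ln(10)/k, the z value follows from D values at two temperatures, and an Arrhenius activation energy follows from the two k values. All activity data below are synthetic, not the paper's measurements.

    import numpy as np

    R = 8.314  # J/(mol K)

    def first_order_k(t_min, residual_activity):
        """Slope of ln(A/A0) versus time gives -k for first-order inactivation."""
        slope, _ = np.polyfit(t_min, np.log(residual_activity), 1)
        return -slope

    t = np.array([0.0, 2.0, 4.0, 6.0, 8.0])                    # minutes
    act_60 = np.exp(-0.05 * t) * (1 + 0.01 * np.random.default_rng(1).normal(size=t.size))
    act_70 = np.exp(-0.25 * t) * (1 + 0.01 * np.random.default_rng(2).normal(size=t.size))

    k60, k70 = first_order_k(t, act_60), first_order_k(t, act_70)
    D60, D70 = np.log(10) / k60, np.log(10) / k70              # decimal reduction times, min
    z = (70.0 - 60.0) / np.log10(D60 / D70)                    # z value, deg C
    Ea = R * np.log(k70 / k60) / (1 / 333.15 - 1 / 343.15)     # activation energy, J/mol
    print(f"k60={k60:.3f}/min  k70={k70:.3f}/min  D60={D60:.1f} min  z={z:.1f} C  Ea={Ea/1e3:.0f} kJ/mol")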
Malherbe, Stephanus T; Dupont, Patrick; Kant, Ilse; Ahlers, Petri; Kriel, Magdalena; Loxton, André G; Chen, Ray Y; Via, Laura E; Thienemann, Friedrich; Wilkinson, Robert J; Barry, Clifton E; Griffith-Richards, Stephanie; Ellman, Annare; Ronacher, Katharina; Winter, Jill; Walzl, Gerhard; Warwick, James M
2018-06-25
There is a growing interest in the use of 18F-FDG PET-CT to monitor tuberculosis (TB) treatment response. However, TB causes complex and widespread pathology, which is challenging to segment and quantify in a reproducible manner. To address this, we developed a technique to standardise uptake (Z-score), segment and quantify tuberculous lung lesions on PET and CT concurrently, in order to track changes over time. We used open source tools and created a MATLAB script. The technique was optimised on a training set of five pulmonary tuberculosis (PTB) cases after standard TB therapy and 15 control patients with lesion-free lungs. We compared the proposed method to a fixed threshold (SUV > 1) and manual segmentation by two readers, and piloted the technique successfully on scans of five control patients and five PTB cases (four cured and one failed treatment case), at diagnosis and after 1 and 6 months of treatment. There was a better correlation between the Z-score-based segmentation and manual segmentation than between SUV > 1 and manual segmentation in terms of overall spatial overlap (measured with the Dice similarity coefficient) and specificity (1 minus the false positive volume fraction). However, SUV > 1 segmentation appeared more sensitive. Both the Z-score and SUV > 1 showed very low variability when measuring change over time. In addition, total glycolytic activity, calculated using segmentation by Z-score and lesion-to-background ratio, correlated well with traditional total glycolytic activity calculations. The technique quantified various PET and CT parameters, including the total glycolytic activity index, metabolic lesion volume, lesion volumes at different CT densities, and combined PET and CT parameters. The quantified metrics showed a marked decrease in the cured cases, with changes already apparent at month one, but remained largely unchanged in the failed treatment case. Our technique shows promise for segmenting and quantifying the lung scans of pulmonary tuberculosis patients in a semi-automatic manner appropriate for measuring treatment response. Further validation is required in larger cohorts.
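A minimal sketch (not the published MATLAB script) of the two core operations, Z-score standardisation against lesion-free reference statistics and comparison of two segmentations with the Dice similarity coefficient, applied to a synthetic 3D array standing in for voxel data; the reference mean, standard deviation and thresholds are placeholders.

    import numpy as np

    def z_score(uptake, ref_mean, ref_sd):
        """Voxel-wise Z-score relative to normal-lung reference statistics."""
        return (uptake - ref_mean) / ref_sd

    def dice(mask_a, mask_b):
        """Dice similarity coefficient between two boolean masks."""
        inter = np.logical_and(mask_a, mask_b).sum()
        return 2.0 * inter / (mask_a.sum() + mask_b.sum())

    rng = np.random.default_rng(0)
    pet = rng.normal(0.6, 0.1, size=(40, 40, 40))   # background SUV-like values
    pet[15:25, 15:25, 15:25] += 1.5                 # synthetic "lesion"

    ref_mean, ref_sd = 0.6, 0.1                     # from lesion-free controls (assumed)
    mask_z = z_score(pet, ref_mean, ref_sd) > 3.0   # Z-score-based segmentation
    mask_suv = pet > 1.0                            # fixed-threshold segmentation (cf. SUV > 1)
    print(f"Dice(Z-score vs fixed threshold) = {dice(mask_z, mask_suv):.2f}")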
Toddle temporal-spatial deviation index: Assessment of pediatric gait.
Cahill-Rowley, Katelyn; Rose, Jessica
2016-09-01
This research aims to develop a gait index, for use in the pediatric clinic as well as in research, that quantifies gait deviation in 18-22-month-old children: the Toddle Temporal-spatial Deviation Index (Toddle TDI). 81 preterm children (≤32 weeks) with very low birth weights (≤1500 g) and 42 full-term typically developing (TD) children aged 18-22 months, adjusted for prematurity, walked on a pressure-sensitive mat. Preterm children were administered the Bayley Scales of Infant Development-3rd Edition (BSID-III). Principal component analysis of the TD children's temporal-spatial gait parameters quantified raw gait deviation from typical, normalized to an average (standard deviation) Toddle TDI score of 100 (10), and calculated for all participants. The Toddle TDI was significantly lower for preterm versus TD children (86 vs. 100, p=0.003), and lower in preterm children with <85 vs. ≥85 BSID-III motor composite scores (66 vs. 89, p=0.004). The Toddle TDI, which by design plateaus at the typical average (BSID-III gross motor 8-12), correlated with BSID-III gross motor (r=0.60, p<0.001) and not fine motor (r=0.08, p=0.65) scores in preterm children with gross motor scores ≤8, suggesting sensitivity to gross motor development. The Toddle TDI demonstrated sensitivity and specificity to gross motor function in very-low-birth-weight preterm children aged 18-22 months, and has potential as an easily administered, revealing clinical gait metric. Copyright © 2016 Elsevier B.V. All rights reserved.
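One plausible construction of such an index (a GDI-style sketch under assumed details, not the published Toddle TDI algorithm): project each child's temporal-spatial parameters onto the leading principal components of the typically developing (TD) reference group, take the log of the distance from the TD centroid, and rescale so the TD group has mean 100 and standard deviation 10. All gait data here are simulated.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    td = rng.normal(size=(42, 6))                  # 42 TD children x 6 temporal-spatial parameters
    preterm = rng.normal(0.5, 1.2, size=(81, 6))   # 81 preterm children (synthetic offsets)

    pca = PCA(n_components=3).fit(td)              # retain leading components of TD variability

    def log_deviation(x):
        """Log of the Euclidean distance from the TD mean in principal-component space."""
        return np.log(np.linalg.norm(pca.transform(x), axis=1))

    d_td, d_pre = log_deviation(td), log_deviation(preterm)
    # Scale so the TD reference has mean 100 and SD 10; larger deviation gives a lower score.
    tdi_pre = 100.0 - 10.0 * (d_pre - d_td.mean()) / d_td.std()
    print(f"mean index, preterm group: {tdi_pre.mean():.1f}")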
NASA Astrophysics Data System (ADS)
Lin, J.-T.; Liu, Z.; Zhang, Q.; Liu, H.; Mao, J.; Zhuang, G.
2012-12-01
Errors in chemical transport models (CTMs) interpreting the relation between space-retrieved tropospheric column densities of nitrogen dioxide (NO2) and emissions of nitrogen oxides (NOx) have important consequences for inverse modeling. They are, however, difficult to quantify due to a lack of adequate in situ measurements, particularly over China and other developing countries. This study proposes an alternate approach for model evaluation over East China, by analyzing the sensitivity of modeled NO2 columns to errors in meteorological and chemical parameters/processes important to the nitrogen abundance. As a demonstration, it evaluates the nested version of GEOS-Chem driven by the GEOS-5 meteorology and the INTEX-B anthropogenic emissions and used with retrievals from the Ozone Monitoring Instrument (OMI) to constrain emissions of NOx. The CTM has been used extensively for such applications. Errors are examined for a comprehensive set of meteorological and chemical parameters using measurements and/or uncertainty analysis based on current knowledge. The results are then exploited for sensitivity simulations perturbing the respective parameters, as the basis of a subsequent post-model, linearized and localized first-order modification. It is found that the model meteorology likely contains errors of various magnitudes in cloud optical depth, air temperature, water vapor, boundary layer height and many other parameters. Model errors also exist in gaseous and heterogeneous reactions, aerosol optical properties and emissions of non-nitrogen species affecting the nitrogen chemistry. Modifications accounting for quantified errors in 10 selected parameters increase the NO2 columns in most areas, with an average positive impact of 18% in July and 8% in January, the most important factor being the modified uptake of the hydroperoxyl radical (HO2) on aerosols. This suggests a possible systematic model bias such that the top-down emissions will be overestimated by the same magnitude if the model is used for emission inversion without corrections. The modifications, however, cannot eliminate the large model underestimates in cities and other extremely polluted areas (particularly in the north) as compared to satellite retrievals, likely pointing to underestimates of the a priori emission inventory in these places, with important implications for understanding of atmospheric chemistry and air quality. Note that these modifications are simplified and should be interpreted with caution for error apportionment.
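The post-model first-order modification can be illustrated generically (this is not the study's code): each perturbation run yields a fractional column response per unit fractional parameter change, which is multiplied by the quantified fractional error of that parameter and summed. The parameter names, sensitivities and errors below are hypothetical.

    base_column = 8.0e15  # modeled tropospheric NO2 column, molecules/cm^2 (assumed)

    # parameter: (relative column sensitivity, quantified relative parameter error)
    perturbations = {
        "HO2 uptake on aerosols": (-0.30, -0.40),   # smaller uptake -> higher NO2
        "cloud optical depth":    ( 0.10,  0.20),
        "boundary layer height":  (-0.15,  0.10),
    }

    correction = sum(s * d for s, d in perturbations.values())
    corrected = base_column * (1.0 + correction)
    print(f"first-order relative change: {correction:+.1%}")
    print(f"corrected column: {corrected:.2e} molecules/cm^2")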
Uncertainty-enabled design of electromagnetic reflectors with integrated shape control
NASA Astrophysics Data System (ADS)
Haque, Samiul; Kindrat, Laszlo P.; Zhang, Li; Mikheev, Vikenty; Kim, Daewa; Liu, Sijing; Chung, Jooyeon; Kuian, Mykhailo; Massad, Jordan E.; Smith, Ralph C.
2018-03-01
We implemented a computationally efficient model for a corner-supported, thin, rectangular, orthotropic polyvinylidene fluoride (PVDF) laminate membrane, actuated by a two-dimensional array of segmented electrodes. The laminate can be used as a shape-controlled electromagnetic reflector, and the model estimates the reflector's shape given an array of control voltages. In this paper, we describe a model to determine the shape of the laminate for a given distribution of control voltages. Then, we investigate the surface shape error and its sensitivity to the model parameters. Subsequently, we analyze the simulated deflection of the actuated bimorph using a Zernike polynomial decomposition. Finally, we provide a probabilistic description of reflector performance using statistical methods to quantify uncertainty. We make design recommendations for nominal parameter values and their tolerances based on optimization under uncertainty using multiple methods.
Nonlinear acoustics experimental characterization of microstructure evolution in Inconel 617
NASA Astrophysics Data System (ADS)
Yao, Xiaochu; Liu, Yang; Lissenden, Cliff J.
2014-02-01
Inconel 617 is a candidate material for the intermediate heat exchanger in a very high temperature reactor for the next generation nuclear power plant. This application will require the material to withstand fatigue-ratcheting interaction at temperatures up to 950°C. Therefore, nondestructive evaluation and structural health monitoring are important capabilities. Acoustic nonlinearity (which is quantified in terms of a material parameter, the acoustic nonlinearity parameter β) has been proven to be sensitive to microstructural changes in a material. This research develops a robust experimental procedure to track the evolution of damage precursors in laboratory-tested Inconel 617 specimens using ultrasonic bulk waves. The results from the acoustic nonlinearity tests are compared with stereoscope surface damage results. In this way, the relationship between acoustic nonlinearity and microstructural evolution can be clearly demonstrated for the specimens tested.
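A minimal sketch (synthetic signal, not the authors' measurement chain) of how a relative acoustic nonlinearity parameter is commonly extracted: take the fundamental (A1) and second-harmonic (A2) amplitudes from the spectrum of the received wave and form beta = 8*A2/(A1^2 * k^2 * x). The sampling rate, wave speed, propagation distance and distortion level are assumed values.

    import numpy as np

    fs, f0 = 100e6, 5e6            # sampling rate and fundamental frequency, Hz
    x, c = 0.02, 5800.0            # propagation distance (m) and wave speed (m/s), assumed
    t = np.arange(0, 20e-6, 1 / fs)

    # Synthetic received displacement (arbitrary units) with a small quadratic distortion.
    u = np.sin(2 * np.pi * f0 * t) + 2e-3 * np.sin(2 * np.pi * 2 * f0 * t)

    spec = np.abs(np.fft.rfft(u)) / len(t)
    freqs = np.fft.rfftfreq(len(t), 1 / fs)
    A1 = spec[np.argmin(np.abs(freqs - f0))]
    A2 = spec[np.argmin(np.abs(freqs - 2 * f0))]

    k = 2 * np.pi * f0 / c         # wavenumber of the fundamental
    beta = 8.0 * A2 / (A1**2 * k**2 * x)
    print(f"A2/A1^2 = {A2 / A1**2:.3e}, relative beta = {beta:.3e}")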
NASA Technical Reports Server (NTRS)
Whitlock, C. H., III
1977-01-01
Constituents with linear radiance gradients with concentration may be quantified from signals which contain nonlinear atmospheric and surface reflection effects, for both homogeneous and non-homogeneous water bodies, provided accurate data can be obtained and the nonlinearities are constant with wavelength. Statistical parameters must be used which give an indication of bias as well as total squared error to ensure that an equation with an optimum combination of bands is selected. It is concluded that the effect of error in upwelled radiance measurements is to reduce the accuracy of the least-squares fitting process and to increase the number of points required to obtain a satisfactory fit. The problem of obtaining a multiple regression equation that is extremely sensitive to error is discussed.
Nyamekye, Charles K. A.; Weibel, Stephen C.; Bobbitt, Jonathan M.; ...
2017-09-15
Directional-surface-plasmon-coupled Raman scattering (directional RS) has the combined benefits of surface plasmon resonance and Raman spectroscopy, and provides the ability to measure adsorption and monolayer-sensitive chemical information. Directional RS is performed by optically coupling a 50-nm gold film to a Weierstrass prism in the Kretschmann configuration and scanning the angle of the incident laser under total internal reflection. The collected parameters on the prism side of the interface include a full surface-plasmon-polariton cone and the full Raman signal radiating from the cone as a function of incident angle. An instrument for performing directional RS and a quantitative study of the instrumental parameters are herein reported. To test the sensitivity and quantify the instrument parameters, self-assembled monolayers and 10 to 100-nm polymer films are studied. The signals are found to be well-modeled by two calculated angle-dependent parameters: three-dimensional finite-difference time-domain calculations of the electric field generated in the sample layer and projected to the far-field, and Fresnel calculations of the reflected light intensity. This is the first report of the quantitative study of the full surface-plasmon-polariton cone intensity, cone diameter, and directional Raman signal as a function of incident angle. We propose that directional RS is a viable alternative to surface plasmon resonance when added chemical information is beneficial.
Robust optimal design of diffusion-weighted magnetic resonance experiments for skin microcirculation
NASA Astrophysics Data System (ADS)
Choi, J.; Raguin, L. G.
2010-10-01
Skin microcirculation plays an important role in several diseases including chronic venous insufficiency and diabetes. Magnetic resonance (MR) has the potential to provide quantitative information and a better penetration depth compared with other non-invasive methods such as laser Doppler flowmetry or optical coherence tomography. The continuous progress in hardware resulting in higher sensitivity must be coupled with advances in data acquisition schemes. In this article, we first introduce a physical model for quantifying skin microcirculation using diffusion-weighted MR (DWMR) based on an effective dispersion model for skin leading to a q-space model of the DWMR complex signal, and then design the corresponding robust optimal experiments. The resulting robust optimal DWMR protocols improve the worst-case quality of parameter estimates using nonlinear least squares optimization by exploiting available a priori knowledge of model parameters. Hence, our approach optimizes the gradient strengths and directions used in DWMR experiments to robustly minimize the size of the parameter estimation error with respect to model parameter uncertainty. Numerical evaluations are presented to demonstrate the effectiveness of our approach as compared to conventional DWMR protocols.
Quantifying Adoption Rates and Energy Savings Over Time for Advanced Manufacturing Technologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanes, Rebecca; Carpenter Petri, Alberta C; Riddle, Matt
Energy-efficient manufacturing technologies can reduce energy consumption and lower operating costs for an individual manufacturing facility, but increased process complexity and the resulting risk of disruption mean that manufacturers may be reluctant to adopt such technologies. In order to quantify potential energy savings at scales larger than a single facility, it is necessary to account for how quickly and how widely the technology will be adopted by manufacturers. This work develops a methodology for estimating energy-efficient manufacturing technology adoption rates using quantitative, objectively measurable technology characteristics, including energetic, economic and technical criteria. Twelve technology characteristics are considered, and each characteristic is assigned an importance weight that reflects its impact on the overall technology adoption rate. Technology characteristic data and importance weights are used to calculate the adoption score, a number between 0 and 1 that represents how quickly the technology is likely to be adopted. The adoption score is then used to estimate parameters for the Bass diffusion curve, which quantifies the change in the number of new technology adopters in a population over time. Finally, energy savings at the sector level are calculated over time by multiplying the number of new technology adopters at each time step with the technology's facility-level energy savings. The proposed methodology will be applied to five state-of-the-art energy-efficient technologies in the carbon fiber composites sector, with technology data obtained from the Department of Energy's 2016 bandwidth study. Because the importance weights used in estimating the Bass curve parameters are subjective, a sensitivity analysis will be performed on the weights to obtain a range of parameters for each technology. The potential energy savings for each technology and the rate at which each technology is adopted in the sector are quantified and used to identify the technologies which offer the greatest cumulative sector-level energy savings over a period of 20 years. Preliminary analysis indicates that relatively simple technologies, such as efficient furnaces, will be adopted more quickly and result in greater cumulative energy savings compared to more complex technologies that require process retrofitting, such as advanced control systems.
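A minimal sketch (not the laboratory's tool) of the Bass diffusion step described above: assumed innovation (p) and imitation (q) coefficients, standing in for values mapped from an adoption score, give cumulative adopters over time, and sector-level savings follow from multiplying adopters by an assumed per-facility saving.

    import numpy as np

    def bass_cumulative(t, m, p, q):
        """Cumulative adopters at time t for market size m (Bass diffusion model)."""
        e = np.exp(-(p + q) * t)
        return m * (1.0 - e) / (1.0 + (q / p) * e)

    years = np.arange(0, 21)
    m = 500                        # eligible facilities in the sector (assumed)
    p, q = 0.03, 0.38              # assumed mapping from an adoption score of roughly 0.6
    savings_per_facility = 2.5e3   # GJ per facility per year (assumed)

    adopters = bass_cumulative(years, m, p, q)
    annual_savings = adopters * savings_per_facility
    print(f"adopters after 10 years: {adopters[10]:.0f}")
    print(f"cumulative sector savings over 20 years: {annual_savings.sum():.2e} GJ")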
Rheology measurement for on-line monitoring of filaments proliferation in activated sludge tanks.
Tixier, N; Guibaud, G; Baudu, M
2004-01-01
Rheological behaviour of filamentous sludges originating from activated sludge reactors was studied. Filamentous bulking was detected via a hysteresis loop developed from rheograms resulting from increasing-decreasing shear rates. The rheological parameter reduced hysteresis area (rHa), corresponding to the loop area developed by the rheograms, was used to quantify filamentous bulking. Application to the evolution of several bulkings was carried out, and it was shown that filament proliferation and disappearance were correlated with increases and decreases, respectively, in the value of the parameter rHa. In parallel with the rheological measurement, parameters used for the study of sludge quality, such as the sludge volume index (SVI) and settling initial flow (F0), were determined for comparison during the evolution of several bulkings. It was shown that rHa was more sensitive to the appearance of filamentous bulking than SVI and F0; it was therefore concluded that filamentous bulking can be detected from rHa.
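A short sketch of the rHa calculation as described (synthetic rheograms, not the authors' data): the loop area between the up and down shear-stress curves is normalised by the area under the up curve, so a larger value indicates a more pronounced hysteresis loop.

    import numpy as np

    def trapezoid(y, x):
        """Trapezoidal integration (kept local to avoid version-specific numpy helpers)."""
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

    shear_rate = np.linspace(1.0, 500.0, 100)          # 1/s
    stress_up = 0.8 * shear_rate**0.7 + 5.0            # Pa, increasing ramp (synthetic)
    stress_down = 0.8 * shear_rate**0.7 + 2.0          # Pa, decreasing ramp (synthetic, lower)

    loop_area = trapezoid(stress_up - stress_down, shear_rate)
    rHa = loop_area / trapezoid(stress_up, shear_rate)
    print(f"reduced hysteresis area rHa = {rHa:.3f}")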
Event-scale power law recession analysis: quantifying methodological uncertainty
NASA Astrophysics Data System (ADS)
Dralle, David N.; Karst, Nathaniel J.; Charalampous, Kyriakos; Veenstra, Andrew; Thompson, Sally E.
2017-01-01
The study of single streamflow recession events is receiving increasing attention following the presentation of novel theoretical explanations for the emergence of power law forms of the recession relationship, and drivers of its variability. Individually characterizing streamflow recessions often involves describing the similarities and differences between model parameters fitted to each recession time series. Significant methodological sensitivity has been identified in the fitting and parameterization of models that describe populations of many recessions, but the dependence of estimated model parameters on methodological choices has not been evaluated for event-by-event forms of analysis. Here, we use daily streamflow data from 16 catchments in northern California and southern Oregon to investigate how combinations of commonly used streamflow recession definitions and fitting techniques impact parameter estimates of a widely used power law recession model. Results are relevant to watersheds that are relatively steep, forested, and rain-dominated. The highly seasonal mediterranean climate of northern California and southern Oregon ensures study catchments explore a wide range of recession behaviors and wetness states, ideal for a sensitivity analysis. In such catchments, we show the following: (i) methodological decisions, including ones that have received little attention in the literature, can impact parameter value estimates and model goodness of fit; (ii) the central tendencies of event-scale recession parameter probability distributions are largely robust to methodological choices, in the sense that differing methods rank catchments similarly according to the medians of these distributions; (iii) recession parameter distributions are method-dependent, but roughly catchment-independent, such that changing the choices made about a particular method affects a given parameter in similar ways across most catchments; and (iv) the observed correlative relationship between the power-law recession scale parameter and catchment antecedent wetness varies depending on recession definition and fitting choices. Considering study results, we recommend a combination of four key methodological decisions to maximize the quality of fitted recession curves, and to minimize bias in the related populations of fitted recession parameters.
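One of the many possible method combinations the paper evaluates can be sketched as follows (an illustration, not the paper's reference implementation): fit -dQ/dt = a*Q^b to a single recession event by ordinary least squares in log space, with dQ/dt approximated by daily differences. The streamflow values are synthetic.

    import numpy as np

    q = np.array([12.0, 9.5, 7.8, 6.6, 5.7, 5.0, 4.4, 3.9, 3.5])  # daily flow, mm/d (synthetic)
    dqdt = np.diff(q)                       # per-day change, negative during recession
    qmid = 0.5 * (q[1:] + q[:-1])           # flow paired with each difference

    mask = dqdt < 0                         # keep strictly receding steps only
    b, log_a = np.polyfit(np.log(qmid[mask]), np.log(-dqdt[mask]), 1)
    print(f"scale parameter a = {np.exp(log_a):.3f}, exponent b = {b:.2f}")

Choices such as how the event is delimited, whether dQ/dt uses forward, backward or smoothed differences, and whether the fit is done in log space are exactly the methodological decisions whose influence the study quantifies.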
Adjoint-Based Uncertainty Quantification with MCNP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seifried, Jeffrey E.
2011-09-01
This work serves to quantify the instantaneous uncertainties in neutron transport simulations arising from nuclear data and statistical counting uncertainties. Perturbation and adjoint theories are used to derive implicit sensitivity expressions. These expressions are transformed into forms that are convenient for construction with MCNP6, creating the ability to perform adjoint-based uncertainty quantification with MCNP6. These new tools are exercised on the depleted-uranium hybrid LIFE blanket, quantifying its sensitivities and uncertainties for important figures of merit. Overall, these uncertainty estimates are small (< 2%). Having quantified the sensitivities and uncertainties, physical understanding of the system is gained and some confidence in the simulation is acquired.
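The propagation step can be illustrated with the standard first-order "sandwich rule" (illustrative numbers, not MCNP6 output): a sensitivity vector for a response such as k-eff is combined with a relative covariance matrix of the nuclear-data parameters.

    import numpy as np

    # Relative sensitivities dR/R per dsigma/sigma for three hypothetical nuclear-data parameters.
    s = np.array([0.35, -0.12, 0.08])

    # Relative covariance matrix of those parameters (diagonal entries are variances).
    cov = np.array([[4.0e-4, 1.0e-5, 0.0],
                    [1.0e-5, 9.0e-4, 2.0e-5],
                    [0.0,    2.0e-5, 2.5e-4]])

    rel_variance = s @ cov @ s      # sandwich rule: var(R)/R^2 = s^T C s
    print(f"relative uncertainty in the response: {np.sqrt(rel_variance):.3%}")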
Heberer, Kent; Fowler, Eileen; Staudt, Loretta; Sienko, Susan; Buckon, Cathleen E; Bagley, Anita; Sison-Williamson, Mitell; McDonald, Craig M; Sussman, Michael D
2016-07-01
Duchenne muscular dystrophy (DMD) is an X-linked genetic neuromuscular disorder characterized by progressive proximal to distal muscle weakness. The success of randomized clinical trials for novel therapeutics depends on outcome measurements that are sensitive to change. As the development of motor skills may lead to functional improvements in young boys with DMD, their inclusion may potentially confound clinical trials. Three-dimensional gait analysis is an under-utilized approach that can quantify joint moments and powers, which reflect functional muscle strength. In this study, gait kinetics, kinematics, spatial-temporal parameters, and timed functional tests were quantified over a one-year period for 21 boys between 4 and 8 years old who were enrolled in a multisite natural history study. At baseline, hip moments and powers were inadequate. Between the two visits, 12 boys began a corticosteroid regimen (mean duration 10.8±2.4 months) while 9 boys remained steroid-naïve. Significant between-group differences favoring steroid use were found for primary kinetic outcomes (peak hip extensor moments (p=.007), duration of hip extensor moments (p=.007), peak hip power generation (p=.028)), and spatial-temporal parameters (walking speed (p=.016) and cadence (p=.021)). Significant between-group differences were not found for kinematics or timed functional tests with the exception of the 10m walk test (p=.03), which improves in typically developing children within this age range. These results indicate that hip joint kinetics can be used to identify weakness in young boys with DMD and are sensitive to corticosteroid intervention. Inclusion of gait analysis may enhance detection of a treatment effect in clinical trials particularly for young boys with more preserved muscle function. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Bell, A.; Tang, G.; Yang, P.; Wu, D.
2017-12-01
Due to their high spatial and temporal coverage, cirrus clouds have a profound role in regulating the Earth's energy budget. Variability of their radiative, geometric, and microphysical properties can pose significant uncertainties in global climate model simulations if not adequately constrained. Thus, the development of retrieval methodologies able to accurately retrieve ice cloud properties and present associated uncertainties is essential. The effectiveness of cirrus cloud retrievals relies on accurate a priori understanding of ice radiative properties, as well as the current state of the atmosphere. Current studies have implemented information content theory analyses prior to retrievals to quantify the amount of information that should be expected on parameters to be retrieved, as well as the relative contribution of information provided by certain measurement channels. Through this analysis, retrieval algorithms can be designed in a way that maximizes the information in measurements, and therefore ensures enough information is present to retrieve ice cloud properties. In this study, we present such an information content analysis to quantify the amount of information to be expected in retrievals of cirrus ice water path and particle effective diameter using sub-millimeter and thermal infrared radiometry. Preliminary results show these bands to be sensitive to changes in ice water path and effective diameter, and thus lend confidence in their ability to simultaneously retrieve these parameters. Further quantification of the sensitivity and the information provided by these bands can then be used to design an optimal retrieval scheme. While this information content analysis is employed on a theoretical retrieval combining simulated radiance measurements, the methodology could in general be applicable to any instrument or retrieval approach.
NASA Astrophysics Data System (ADS)
Duarte, Janaína; Pacheco, Marcos T. T.; Villaverde, Antonio Balbin; Machado, Rosangela Z.; Zângaro, Renato A.; Silveira, Landulfo
2010-07-01
Toxoplasmosis is an important zoonosis in public health because domestic cats are the main agents responsible for the transmission of this disease in Brazil. We investigate a method for diagnosing toxoplasmosis based on Raman spectroscopy. Dispersive near-infrared Raman spectra are used to quantify anti-Toxoplasma gondii (IgG) antibodies in blood sera from domestic cats. An 830-nm laser is used for sample excitation, and a dispersive spectrometer is used to detect the Raman scattering. A serological test is performed on all serum samples by the enzyme-linked immunosorbent assay (ELISA) for validation. Raman spectra are taken from 59 blood serum samples, and a quantification model based on partial least squares (PLS) is implemented to estimate each sample's serology from its Raman spectrum, with the results compared to those provided by the ELISA test. Based on the serological values provided by the Raman/PLS model, diagnostic parameters such as sensitivity, specificity, accuracy, positive prediction values, and negative prediction values are calculated to discriminate negative from positive samples, obtaining 100, 80, 90, 83.3, and 100%, respectively. Raman spectroscopy, associated with PLS, is promising as a serological assay for toxoplasmosis, enabling fast and sensitive diagnosis.
Modelling breast cancer tumour growth for a stable disease population.
Isheden, Gabriel; Humphreys, Keith
2017-01-01
Statistical models of breast cancer tumour progression have been used to further our knowledge of the natural history of breast cancer, to evaluate mammography screening in terms of mortality, to estimate overdiagnosis, and to estimate the impact of lead-time bias when comparing survival times between screen detected cancers and cancers found outside of screening programs. Multi-state Markov models have been widely used, but several research groups have proposed other modelling frameworks based on specifying an underlying biological continuous tumour growth process. These continuous models offer some advantages over multi-state models and have been used, for example, to quantify screening sensitivity in terms of mammographic density, and to quantify the effect of body size covariates on tumour growth and time to symptomatic detection. As of yet, however, the continuous tumour growth models are not sufficiently developed and require extensive computing to obtain parameter estimates. In this article, we provide a detailed description of the underlying assumptions of the continuous tumour growth model, derive new theoretical results for the model, and show how these results may help the development of this modelling framework. In illustrating the approach, we develop a model for mammography screening sensitivity, using a sample of 1901 post-menopausal women diagnosed with invasive breast cancer.
Liu, Jiao; Tian, Ji; Li, Jin; Azietaku, John Teye; Zhang, Bo-Li; Gao, Xiu-Mei; Chang, Yan-Xu
2016-07-01
An in-capillary 2,2-diphenyl-1-picrylhydrazyl (DPPH)-CE-DAD method (in-capillary DPPH-CE-DAD) combined with reversed-electrode polarity stacking mode has been developed to screen and quantify the active antioxidant components of Cuscuta chinensis Lam. The operation parameters were optimized with regard to the pH and concentration of the buffer solution, SDS, β-CDs, and organic modifier, as well as the separation voltage and temperature. Six antioxidants, including chlorogenic acid, p-coumaric acid, rutin, hyperin, isoquercitrin, and astragalin, were screened, and the total antioxidant activity of the complex matrix was successfully evaluated based on the decreased peak area of DPPH by the established DPPH-CE-DAD method. Sensitivity was enhanced under reversed-electrode polarity stacking mode, and a 10- to 31-fold improvement in detection sensitivity for each analyte was attained. The results demonstrated that the newly established in-capillary DPPH-CE-DAD method combined with reversed-electrode polarity stacking mode could integrate sample concentration, the oxidizing reaction, separation, and detection into one capillary to fully automate the system. It was considered a suitable technique for the separation, screening, and determination of trace antioxidants in natural products. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Alderman, Phillip D.; Stanfill, Bryan
2016-10-06
Recent international efforts have brought renewed emphasis on the comparison of different agricultural systems models. Thus far, analysis of model-ensemble simulated results has not clearly differentiated between ensemble prediction uncertainties due to model structural differences per se and those due to parameter value uncertainties. Additionally, despite increasing use of Bayesian parameter estimation approaches with field-scale crop models, inadequate attention has been given to the full posterior distributions for estimated parameters. The objectives of this study were to quantify the impact of parameter value uncertainty on prediction uncertainty for modeling spring wheat phenology using Bayesian analysis and to assess the relative contributions of model-structure-driven and parameter-value-driven uncertainty to overall prediction uncertainty. This study used a random walk Metropolis algorithm to estimate parameters for 30 spring wheat genotypes using nine phenology models based on multi-location trial data for days to heading and days to maturity. Across all cases, parameter-driven uncertainty accounted for between 19 and 52% of predictive uncertainty, while model-structure-driven uncertainty accounted for between 12 and 64%. Here, this study demonstrated the importance of quantifying both model-structure- and parameter-value-driven uncertainty when assessing overall prediction uncertainty in modeling spring wheat phenology. More generally, Bayesian parameter estimation provided a useful framework for quantifying and analyzing sources of prediction uncertainty.
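A compact sketch of a random walk Metropolis sampler for a single parameter (a toy thermal-time phenology model with hypothetical data, far simpler than the nine models used in the study):

    import numpy as np

    rng = np.random.default_rng(0)
    T_mean, Tb = 18.0, 4.5                          # mean temperature and base temperature (assumed)
    obs_days = np.array([62.0, 58.0, 65.0, 60.0])   # observed days to heading (hypothetical)
    sigma = 3.0                                     # observation error, days (assumed)

    def log_post(ttsum):
        if not 400.0 < ttsum < 1500.0:              # flat prior on the thermal-time requirement
            return -np.inf
        pred = ttsum / (T_mean - Tb)                # toy model: days = thermal time / (T - Tb)
        return -0.5 * np.sum(((obs_days - pred) / sigma) ** 2)

    chain, current = [], 800.0
    lp = log_post(current)
    for _ in range(20000):
        proposal = current + rng.normal(scale=20.0)   # random walk step
        lp_prop = log_post(proposal)
        if np.log(rng.random()) < lp_prop - lp:       # Metropolis acceptance rule
            current, lp = proposal, lp_prop
        chain.append(current)

    samples = np.array(chain[5000:])                  # discard burn-in
    print(f"posterior mean {samples.mean():.0f} degree-days, 95% interval "
          f"[{np.percentile(samples, 2.5):.0f}, {np.percentile(samples, 97.5):.0f}]")

Keeping the full chain, rather than only a point estimate, is what allows the parameter-driven share of prediction uncertainty to be separated from the structural share.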
Quantify fluid saturation in fractures by light transmission technique and its application
NASA Astrophysics Data System (ADS)
Ye, S.; Zhang, Y.; Wu, J.
2016-12-01
The migration of dense non-aqueous phase liquids (DNAPLs) in transparent, rough fractures with variable aperture was studied experimentally using a light transmission technique. The migration of trichloroethylene (TCE) in variable-aperture fractures (20 cm wide x 32.5 cm high) showed that a TCE blob moved downward with snap-off events in four packs with apertures from 100 μm to 1000 μm, and that the pattern presented a single, tortuous cluster with many fingers in a pack with two apertures of 100 μm and 500 μm. The variable apertures in the fractures were measured by light transmission. A light intensity-saturation (LIS) model based on light transmission was used to quantify DNAPL saturation in the fracture system. Known volumes of TCE were added to the chamber, and these amounts were compared to the results obtained with the LIS model. A strong correlation existed between the results obtained from the LIS model and the known volumes of TCE. Sensitivity analysis showed that the results were more sensitive to the aperture than to the LIS-model parameter C2. The LIS model was also used to measure dyed TCE saturation in an air sparging experiment. The results showed that the distribution and amount of TCE significantly influenced the efficiency of air sparging. The method developed here gives a way to quantify fluid saturation in a two-phase system in a fractured medium, and provides a non-destructive, non-intrusive tool to investigate changes in DNAPL architecture and flow characteristics in laboratory experiments. Keywords: light transmission, fluid saturation, fracture, variable aperture. Acknowledgements: Funding for this research was provided by NSFC Project No. 41472212.
Uncertainty quantification for environmental models
Hill, Mary C.; Lu, Dan; Kavetski, Dmitri; Clark, Martyn P.; Ye, Ming
2012-01-01
Environmental models are used to evaluate the fate of fertilizers in agricultural settings (including soil denitrification), the degradation of hydrocarbons at spill sites, and water supply for people and ecosystems in small to large basins and cities, to mention but a few applications of these models. They also play a role in understanding and diagnosing potential environmental impacts of global climate change. The models are typically mildly to extremely nonlinear. The persistent demand for enhanced dynamics and resolution to improve model realism [17] means that lengthy individual model execution times will remain common, notwithstanding continued enhancements in computer power. In addition, high-dimensional parameter spaces are often defined, which increases the number of model runs required to quantify uncertainty [2]. Some environmental modeling projects have access to extensive funding and computational resources; many do not. The many recent studies of uncertainty quantification in environmental model predictions have focused on uncertainties related to data error and sparsity of data, expert judgment expressed mathematically through prior information, poorly known parameter values, and model structure (see, for example, [1,7,9,10,13,18]). Approaches for quantifying uncertainty include frequentist (potentially with prior information [7,9]), Bayesian [13,18,19], and likelihood-based. A few of the numerous methods, including some sensitivity and inverse methods with consequences for understanding and quantifying uncertainty, are as follows: Bayesian hierarchical modeling and Bayesian model averaging; single-objective optimization with error-based weighting [7] and multi-objective optimization [3]; methods based on local derivatives [2,7,10]; screening methods like OAT (one at a time) and the method of Morris [14]; FAST (Fourier amplitude sensitivity testing) [14]; the Sobol' method [14]; randomized maximum likelihood [10]; Markov chain Monte Carlo (MCMC) [10]. There are also bootstrapping and cross-validation approaches. Sometimes analyses are conducted using surrogate models [12]. The availability of so many options can be confusing. Categorizing methods based on fundamental questions assists in communicating the essential results of uncertainty analyses to stakeholders. Such questions can focus on model adequacy (e.g., How well does the model reproduce observed system characteristics and dynamics?) and sensitivity analysis (e.g., What parameters can be estimated with available data? What observations are important to parameters and predictions? What parameters are important to predictions?), as well as on the uncertainty quantification (e.g., How accurate and precise are the predictions?). The methods can also be classified by the number of model runs required: few (10s to 1000s) or many (10,000s to 1,000,000s). Of the methods listed above, the most computationally frugal are generally those based on local derivatives; MCMC methods tend to be among the most computationally demanding. Surrogate models (emulators) do not necessarily produce computational frugality because many runs of the full model are generally needed to create a meaningful surrogate model. With this categorization, we can, in general, address all the fundamental questions mentioned above using either computationally frugal or demanding methods.
Model development and analysis can thus be conducted consistently using either computationally frugal or demanding methods; alternatively, different fundamental questions can be addressed using methods that require different levels of effort. Based on this perspective, we pose the question: Can computationally frugal methods be useful companions to computationally demanding methods? The reliability of computationally frugal methods generally depends on the model being reasonably linear, which usually means smooth nonlinearities and the assumption of Gaussian errors; both tend to be more valid with more linear
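To make the "computationally frugal" end of this spectrum concrete, the sketch below illustrates local, derivative-based sensitivities computed by forward finite differences, which need only one extra model run per parameter. The model function, parameter values and scaling convention (dimensionless scaled sensitivities) are hypothetical stand-ins for illustration, not the authors' implementation.

```python
import numpy as np

def scaled_local_sensitivities(model, params, rel_step=0.01):
    """Forward-difference scaled sensitivities s_ij = (dy_i/dp_j) * p_j.

    model  : callable returning a 1-D array of predictions
    params : 1-D array of parameter values at which to evaluate
    Requires only len(params) + 1 model runs (computationally frugal).
    """
    base = np.asarray(model(params), dtype=float)
    sens = np.empty((base.size, params.size))
    for j, p in enumerate(params):
        step = rel_step * (abs(p) if p != 0 else 1.0)
        perturbed = params.copy()
        perturbed[j] += step
        sens[:, j] = (model(perturbed) - base) / step * p
    return sens

# Toy model standing in for an environmental simulator (illustrative only).
toy = lambda p: np.array([p[0] * np.exp(-p[1] * t) for t in (1.0, 2.0, 4.0)])
print(scaled_local_sensitivities(toy, np.array([2.0, 0.3])))
```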
NASA Astrophysics Data System (ADS)
Zhou, Y.; Gu, H.; Williams, C. A.
2017-12-01
Results from terrestrial carbon cycle models have multiple sources of uncertainty, each with its own behavior and range. Their relative importance and how they combine have received little attention. This study investigates how various sources of uncertainty propagate, temporally and spatially, in CASA-Disturbance (CASA-D). CASA-D simulates the impact of climatic forcing and disturbance legacies on forest carbon dynamics in the following steps. First, we infer annual growth and mortality rates from measured biomass stocks (FIA) over time and disturbance (e.g., fire, harvest, bark beetle) to represent annual post-disturbance carbon flux trajectories across forest types and site productivity settings. Then, annual carbon fluxes are estimated from these trajectories using time since disturbance, which is inferred from biomass (NBCD 2000) and disturbance maps (NAFD, MTBS and ADS). Finally, we apply monthly climatic scalars derived from default CASA to temporally distribute annual carbon fluxes to each month. This study assesses carbon flux uncertainty from two sources: driving data, including climatic and forest biomass inputs, and the three most sensitive parameters in CASA-D, identified using EFAST (Extended Fourier Amplitude Sensitivity Testing): maximum light use efficiency, temperature sensitivity of soil respiration (Q10), and optimum temperature. We quantify model uncertainties from each, and report their relative importance in estimating the forest carbon sink/source in the southeastern United States from 2003 to 2010.
NASA Technical Reports Server (NTRS)
Shih, Ann T.; Lo, Yunnhon; Ward, Natalie C.
2010-01-01
Quantifying the probability of significant launch vehicle failure scenarios for a given design, while still in the design process, is critical to mission success and to the safety of the astronauts. Probabilistic risk assessment (PRA) is chosen from many system safety and reliability tools to verify the loss of mission (LOM) and loss of crew (LOC) requirements set by the NASA Program Office. To support the integrated vehicle PRA, probabilistic design analysis (PDA) models are developed using vehicle design and operation data to better quantify failure probabilities and to better understand the characteristics of a failure and its outcome. This PDA approach uses a physics-based model to describe the system behavior and response for a given failure scenario. Each driving parameter in the model is treated as a random variable with a distribution function. Monte Carlo simulation is used to perform probabilistic calculations to statistically obtain the failure probability. Sensitivity analyses are performed to show how input parameters affect the predicted failure probability, providing insight for potential design improvements to mitigate the risk. The paper discusses the application of the PDA approach in determining the probability of failure for two scenarios from the NASA Ares I project.
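As an illustration of the Monte Carlo step described above, the sketch below draws the driving parameters from assumed distributions, evaluates a stand-in physics response, and counts the fraction of failures. The "load" and "capacity" variables and their distributions are purely hypothetical; they are not the Ares I models.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_failure_probability(n_samples=100_000):
    """Illustrative Monte Carlo estimate of a failure probability.

    Each driving parameter is a random variable; 'margin' is a stand-in
    physics-based response, and failure is defined as margin < 0.
    """
    load = rng.normal(1.0, 0.15, n_samples)        # hypothetical demand
    capacity = rng.lognormal(0.2, 0.1, n_samples)  # hypothetical capability
    margin = capacity - load
    p_fail = np.mean(margin < 0.0)
    # Standard error of the estimate, as a sense of Monte Carlo uncertainty.
    se = np.sqrt(p_fail * (1.0 - p_fail) / n_samples)
    return p_fail, se

print(simulate_failure_probability())
```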
Boison, Joe O; Asea, Philip A; Matus, Johanna L
2012-08-01
A new and sensitive multi-residue method (MRM) with detection by LC-MS/MS was developed and validated for the screening, determination, and confirmation of residues of 7 nitroimidazoles and 3 of their metabolites in turkey muscle tissues at concentrations ≥ 0.05 ng/g. The compounds were extracted into a solvent with an alkali salt. Sample clean-up and concentration was then done by solid-phase extraction (SPE) and the compounds were quantified by liquid chromatography-tandem mass spectrometry (LC-MS/MS). The characteristic parameters including repeatability, selectivity, ruggedness, stability, level of quantification, and level of confirmation for the new method were determined. Method validation was achieved by independent verification of the parameters measured during method characterization. The seven nitroimidazoles included are metronidazole (MTZ), ronidazole (RNZ), dimetridazole (DMZ), tinidazole (TNZ), ornidazole (ONZ), ipronidazole (IPR), and carnidazole (CNZ). It was discovered during the single laboratory validation of the method that five of the seven nitroimidazoles (i.e. metronidazole, dimetridazole, tinidazole, ornidazole and ipronidazole) and the 3 metabolites (1-(2-hydroxyethyl)-2-hydroxymethyl-5-nitroimidazole (MTZ-OH), 2-hydroxymethyl-1-methyl-5-nitroimidazole (HMMNI, the common metabolite of ronidazole and dimetridazole), and 1-methyl-2-(2'-hydroxyisopropyl)-5-nitroimidazole (IPR-OH) included in this study could be detected, confirmed, and quantified accurately whereas RNZ and CNZ could only be detected and confirmed but not accurately quantified. © Her Majesty the Queen in Right of Canada as Represented by the Minister of Agriculture and Agri-food Canada 2012.
[Stereovideographic evaluation of the postural geometry of healthy and scoliotic patients].
De la Huerta, F; Leroux, M A; Zabjek, K F; Coillard, C; Rivard, C H
1998-01-01
Idiopathic scoliosis, principally characterised by a deformation of the vertebral column, can also be associated with postural abnormalities. The validity and reliability of current quantitative postural evaluations have not been thoroughly documented; such evaluations are frequently limited to a two-dimensional view and do not include the whole posture of the patient. The purpose of this study is to (1) quantify the within- and between-session reliability of a stereovideographic Postural Geometry (PG) evaluation and (2) investigate the sensitivity of this technique for the postural evaluation of scoliosis patients. The PG of 14 control subjects and 9 untreated scoliosis patients was evaluated with 5 repeat trials, on two occasions. Postural geometry parameters that describe the position and orientation of the pelvis, trunk, scapular girdle and head were calculated from the 3-dimensional co-ordinates of anatomical landmarks. The mean between-session variability across all parameters was 12.5 mm and 2.8 degrees, and the mean within-session variability was 5.4 mm and 1.4 degrees. The patient group was heterogeneous, with some noted pathological characteristics. This global stereovideographic postural geometry evaluation appears to demonstrate sufficient reliability and sensitivity for follow-up of the posture of scoliosis patients.
Tuning into Scorpius X-1: adapting a continuous gravitational-wave search for a known binary system
NASA Astrophysics Data System (ADS)
Meadors, Grant David; Goetz, Evan; Riles, Keith
2016-05-01
We describe how the TwoSpect data analysis method for continuous gravitational waves (GWs) has been tuned for directed sources such as the low-mass X-ray binary (LMXB) Scorpius X-1 (Sco X-1). Simulations of the orbital and GW parameters of Sco X-1 were generated for a comparison of five search algorithms. Whereas that comparison focused on relative performance, here the simulations help quantify the sensitivity enhancement and parameter estimation abilities of this directed method, derived from an all-sky search for unknown sources, using doubly Fourier-transformed data. Sensitivity is shown to be enhanced when the source sky location and period are known, because we can run a fully templated search, bypassing the all-sky hierarchical stage that uses an incoherent harmonic sum. The GW strain and frequency, as well as the projected semi-major axis of the binary system, are recovered and their uncertainties estimated for simulated signals that are detected. Upper limits on GW strain are set for undetected signals. Applications to future GW observatory data are discussed. Robust against spin wandering and computationally tractable despite an unknown frequency, this directed search is an important new tool for finding gravitational signals from LMXBs.
Multispectral diffuse optical tomography of finger joints
NASA Astrophysics Data System (ADS)
Lighter, Daniel; Filer, Andrew; Dehghani, Hamid
2017-07-01
Rheumatoid arthritis (RA) is a chronic inflammatory disease characterized by synovial inflammation. The current treatment paradigm of earlier, more aggressive therapy places importance on the development of functional imaging modalities capable of quantifying joint changes at the earliest stages. Diffuse optical tomography (DOT) has shown great promise in this regard, owing to its low cost and its non-invasive, non-ionizing, high-contrast nature. Underlying pathological activity in afflicted joints leads to altered optical properties of the synovial region, with absorption and scattering increasing. Previous studies have used these optical changes as features for classifying diseased joints from healthy ones. Non-tomographic, single-wavelength, continuous wave (CW) measurements of trans-illuminated joints have previously been reported to achieve this with specificity and sensitivity in the range 80-90% [1]. A single-wavelength, frequency-domain DOT system, combined with machine learning techniques, has been shown to achieve sensitivity and specificity in the range of 93.8-100% [2]. A CW system is presented here which collects data at 5 wavelengths, enabling reconstruction of pathophysiological parameters such as oxygenation and total hemoglobin, with the aim of identifying localized hypoxia and angiogenesis associated with inflammation in RA joints. These initial studies focus on establishing levels of variation in recovered parameters from images of healthy controls.
Rössler, Erik; Mattea, Carlos; Stapf, Siegfried
2015-02-01
Low field Nuclear Magnetic Resonance increases the contrast of the longitudinal relaxation rate in many biological tissues; one prominent example is hyaline articular cartilage. In order to take advantage of this increased contrast and to profile the depth-dependent variations, high resolution parameter measurements are carried out which can be of critical importance in an early diagnosis of cartilage diseases such as osteoarthritis. However, the maximum achievable spatial resolution of parameter profiles is limited by factors such as sensor geometry, sample curvature, and diffusion limitation. In this work, we report on high-resolution single-sided NMR scanner measurements with a commercial device, and quantify these limitations. The highest achievable spatial resolution on the used profiler, and the lateral dimension of the sensitive volume were determined. Since articular cartilage samples are usually bent, we also focus on averaging effects inside the horizontally aligned sensitive volume and their impact on the relaxation profiles. Taking these critical parameters into consideration, depth-dependent relaxation time profiles with the maximum achievable vertical resolution of 20 μm are discussed, and are correlated with diffusion coefficient profiles in hyaline articular cartilage in order to reconstruct T(2) maps from the diffusion-weighted CPMG decays of apparent relaxation rates. Copyright © 2014 Elsevier Inc. All rights reserved.
Non-lethal control of the cariogenic potential of an agent-based model for dental plaque.
Head, David A; Marsh, Phil D; Devine, Deirdre A
2014-01-01
Dental caries or tooth decay is a prevalent global disease whose causative agent is the oral biofilm known as plaque. According to the ecological plaque hypothesis, this biofilm becomes pathogenic when external challenges drive it towards a state with a high proportion of acid-producing bacteria. Determining which factors control biofilm composition is therefore desirable when developing novel clinical treatments to combat caries, but is also challenging due to the system complexity and the existence of multiple bacterial species performing similar functions. Here we employ agent-based mathematical modelling to simulate a biofilm consisting of two competing, distinct types of bacterial populations, each parameterised by their nutrient uptake and aciduricity, periodically subjected to an acid challenge resulting from the metabolism of dietary carbohydrates. It was found that one population was progressively eliminated from the system to give either a benign or a pathogenic biofilm, with a tipping point between these two fates depending on a multiplicity of factors relating to microbial physiology and biofilm geometry. Parameter sensitivity was quantified by individually varying the model parameters against putative experimental measures, suggesting non-lethal interventions that can favourably modulate biofilm composition. We discuss how the same parameter sensitivity data can be used to guide the design of validation experiments, and argue for the benefits of in silico modelling in providing an additional predictive capability upstream from in vitro experiments.
Vorburger, Robert S; Habeck, Christian G; Narkhede, Atul; Guzman, Vanessa A; Manly, Jennifer J; Brickman, Adam M
2016-01-01
Diffusion tensor imaging suffers from an intrinsic low signal-to-noise ratio. Bootstrap algorithms have been introduced to provide a non-parametric method to estimate the uncertainty of the measured diffusion parameters. To quantify the variability of the principal diffusion direction, bootstrap-derived metrics such as the cone of uncertainty have been proposed. However, bootstrap-derived metrics are not independent of the underlying diffusion profile. A higher mean diffusivity causes a smaller signal-to-noise ratio and, thus, increases the measurement uncertainty. Moreover, the goodness of the tensor model, which relies strongly on the complexity of the underlying diffusion profile, influences bootstrap-derived metrics as well. The presented simulations clearly depict the cone of uncertainty as a function of the underlying diffusion profile. Since the relationship of the cone of uncertainty and common diffusion parameters, such as the mean diffusivity and the fractional anisotropy, is not linear, the cone of uncertainty has a different sensitivity. In vivo analysis of the fornix reveals the cone of uncertainty to be a predictor of memory function among older adults. No significant correlation occurs with the common diffusion parameters. The present work not only demonstrates the cone of uncertainty as a function of the actual diffusion profile, but also discloses the cone of uncertainty as a sensitive predictor of memory function. Future studies should incorporate bootstrap-derived metrics to provide more comprehensive analysis.
Martin, Bryn A.; Kalata, Wojciech; Shaffer, Nicholas; Fischer, Paul; Luciano, Mark; Loth, Francis
2013-01-01
Elevated or reduced velocity of cerebrospinal fluid (CSF) at the craniovertebral junction (CVJ) has been associated with type I Chiari malformation (CMI). Thus, quantification of hydrodynamic parameters that describe the CSF dynamics could help assess disease severity and surgical outcome. In this study, we describe the methodology to quantify CSF hydrodynamic parameters near the CVJ and upper cervical spine utilizing subject-specific computational fluid dynamics (CFD) simulations based on in vivo MRI measurements of flow and geometry. Hydrodynamic parameters were computed for a healthy subject and two CMI patients both pre- and post-decompression surgery to determine the differences between cases. For the first time, we present the methods to quantify longitudinal impedance (LI) to CSF motion, a subject-specific hydrodynamic parameter that may have value to help quantify the CSF flow blockage severity in CMI. In addition, the following hydrodynamic parameters were quantified for each case: maximum velocity in systole and diastole, Reynolds and Womersley number, and peak pressure drop during the CSF cardiac flow cycle. The following geometric parameters were quantified: cross-sectional area and hydraulic diameter of the spinal subarachnoid space (SAS). The mean values of the geometric parameters increased post-surgically for the CMI models, but remained smaller than the healthy volunteer. All hydrodynamic parameters, except pressure drop, decreased post-surgically for the CMI patients, but remained greater than in the healthy case. Peak pressure drop alterations were mixed. To our knowledge this study represents the first subject-specific CFD simulation of CMI decompression surgery and quantification of LI in the CSF space. Further study in a larger patient and control group is needed to determine if the presented geometric and/or hydrodynamic parameters are helpful for surgical planning. PMID:24130704
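For readers unfamiliar with the dimensionless groups mentioned above, a minimal sketch of their computation follows, assuming the standard definitions Re = rho*U*D_h/mu and a Womersley number based on the hydraulic radius and the heart-rate angular frequency. The fluid properties and geometry used here are illustrative CSF-like values, not the subject-specific values from the study.

```python
import numpy as np

def reynolds(rho, mu, peak_velocity, hydraulic_diameter):
    """Re = rho * U * D_h / mu (peak velocity and hydraulic diameter)."""
    return rho * peak_velocity * hydraulic_diameter / mu

def womersley(rho, mu, heart_rate_hz, hydraulic_diameter):
    """alpha = (D_h / 2) * sqrt(omega * rho / mu), with omega = 2*pi*f."""
    omega = 2.0 * np.pi * heart_rate_hz
    return 0.5 * hydraulic_diameter * np.sqrt(omega * rho / mu)

# Illustrative, water-like CSF properties and an assumed SAS hydraulic diameter:
rho, mu = 1000.0, 1.0e-3   # kg/m^3, Pa*s
D_h = 0.008                # m
print(reynolds(rho, mu, 0.05, D_h), womersley(rho, mu, 1.0, D_h))
```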
Lu, R; Xiao, Y
2017-07-18
Objective: To evaluate the clinical value of ultrasonic elastography and an ultrasonography comprehensive scoring method in the diagnosis of cervical lesions. Methods: A total of 116 patients were selected from the Department of Gynecology of the first hospital affiliated with Central South University from March 2014 to September 2015. All of the lesions were preoperatively examined by Doppler ultrasound and elastography. The elasticity score was determined by a 5-point scoring method. Calculation of the strain ratio was based on a comparison of the average strain measured in the lesion with that of adjacent tissue of the same depth, size, and shape. All these ultrasonic parameters were quantified and added to arrive at the ultrasonography comprehensive score. Using surgical pathology as the gold standard, the sensitivity, specificity, and accuracy of Doppler ultrasound, the elasticity score and strain ratio methods, and the ultrasonography comprehensive scoring method were comparatively analyzed. Results: (1) The sensitivity, specificity, and accuracy of Doppler ultrasound in diagnosing cervical lesions were 82.89% (63/76), 85.0% (34/40), and 83.62% (97/116), respectively. (2) The sensitivity, specificity, and accuracy of the elasticity score method were 77.63% (59/76), 82.5% (33/40), and 79.31% (92/116), respectively; the sensitivity, specificity, and accuracy of the strain ratio method were 84.21% (64/76), 87.5% (35/40), and 85.34% (99/116), respectively. (3) The sensitivity, specificity, and accuracy of the ultrasonography comprehensive scoring method were 90.79% (69/76), 92.5% (37/40), and 91.38% (106/116), respectively. Conclusion: (1) Ultrasonic elastography has clear diagnostic value for cervical lesions, and strain ratio measurement can be more objective than the elasticity score method. (2) The combined application of the ultrasonography comprehensive scoring method, ultrasonic elastography and conventional sonography was more accurate than any single parameter.
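The diagnostic statistics reported above follow directly from confusion-matrix counts; a brief sketch using the counts quoted for the comprehensive scoring method is given below (the variable names are generic, not from the paper).

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Counts reported above for the comprehensive scoring method:
# 69 of 76 malignant lesions detected, 37 of 40 benign lesions excluded.
print(diagnostic_metrics(tp=69, fn=7, tn=37, fp=3))   # ~0.908, 0.925, 0.914
```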
NASA Astrophysics Data System (ADS)
Malczewski, Jacek
2006-12-01
The objective of this paper is to incorporate the concept of fuzzy (linguistic) quantifiers into the GIS-based land suitability analysis via ordered weighted averaging (OWA). OWA is a multicriteria evaluation procedure (or combination operator). The nature of the OWA procedure depends on some parameters, which can be specified by means of fuzzy (linguistic) quantifiers. By changing the parameters, OWA can generate a wide range of decision strategies or scenarios. The quantifier-guided OWA procedure is illustrated using land-use suitability analysis in a region of Mexico.
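A minimal sketch of quantifier-guided OWA is given below, assuming the common regular increasing monotone quantifier Q(r) = r**alpha to generate the order weights; the criterion scores are invented for illustration, and the specific quantifier family used in the paper may differ.

```python
import numpy as np

def quantifier_weights(n, alpha):
    """Order weights from a RIM fuzzy quantifier Q(r) = r**alpha.

    alpha < 1 leans toward 'at least one' (OR-like), alpha = 1 gives the
    plain average, alpha > 1 leans toward 'all' (AND-like).
    """
    r = np.arange(n + 1) / n
    return np.diff(r ** alpha)

def owa(criteria, alpha):
    """Quantifier-guided OWA: weights are applied to the sorted scores."""
    b = np.sort(np.asarray(criteria, dtype=float))[::-1]   # descending order
    return float(np.dot(quantifier_weights(b.size, alpha), b))

# Suitability scores of one land unit on four criteria (illustrative values):
scores = [0.9, 0.7, 0.4, 0.2]
for a in (0.5, 1.0, 2.0):   # three different decision strategies
    print(a, round(owa(scores, a), 3))
```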
NASA Astrophysics Data System (ADS)
Elsner, Ann E.; Burns, Stephen A.; Weiter, John J.
2002-01-01
We measured changes to cone photoreceptors in patients with early age-related macular degeneration. The data of 53 patients were compared with normative data for color matching measurements of long- and middle-wavelength-sensitive cones in the central macula. A four-parameter model quantified cone photopigment optical density and kinetics. Cone photopigment optical density was on average less for the patients than for normal subjects and was uncorrelated with visual acuity. More light was needed to reduce the photopigment density by 50% in the steady state for patients. These results imply that cone photopigment optical density is reduced by factors other than slowed kinetics.
Ultrasound finite element simulation sensitivity to anisotropic titanium microstructures
NASA Astrophysics Data System (ADS)
Freed, Shaun; Blackshire, James L.; Na, Jeong K.
2016-02-01
Analytical wave models are inadequate for describing complex metallic microstructure interactions, especially near-field anisotropic property effects and propagation through geometric features smaller than the wavelength. In contrast, finite element ultrasound simulations inherently capture microstructure influences due to their reliance on material definitions rather than wave descriptions. To better understand and quantify the effects of heterogeneous crystal orientation on ultrasonic wave propagation, a finite element modeling case study has been performed with anisotropic titanium grain structures. A parameterized model has been developed utilizing anisotropic spheres within a bulk material. The resulting wave parameters are analyzed as functions of both wavelength and the sphere-to-bulk crystal mismatch angle.
NASA Astrophysics Data System (ADS)
Lee, Lindsay; Mann, Graham; Carslaw, Ken; Toohey, Matthew; Aquila, Valentina
2016-04-01
The World Climate Research Program's SPARC initiative has a new international activity, "Stratospheric Sulphur and its Role in Climate" (SSiRC), to better understand changes in stratospheric aerosol and precursor gaseous sulphur species. One component of SSiRC is an intercomparison, "ISA-MIP", of composition-climate models that simulate the stratospheric aerosol layer interactively. Within PoEMS, each modelling group will run a "perturbed physics ensemble" (PPE) of interactive stratospheric aerosol (ISA) simulations of the Pinatubo eruption, varying several uncertain parameters associated with the eruption's SO2 emissions and model processes. A powerful new technique to quantify and attribute sources of uncertainty in complex global models is described by Lee et al. (2011, ACP). The analysis uses Gaussian emulation to derive a probability density function (pdf) of predicted quantities, essentially interpolating the PPE results in multi-dimensional parameter space. Once trained on the ensemble, the fast Gaussian emulator enables a Monte Carlo simulation and hence a full variance-based sensitivity analysis. The approach has already been used effectively by Carslaw et al. (2013, Nature) to quantify the uncertainty in the cloud albedo effect forcing from a 3D global aerosol-microphysics model, allowing the sensitivity of different predicted quantities to uncertainties in natural and anthropogenic emission types, and in structural parameters in the models, to be compared. Within ISA-MIP, each group will carry out a PPE of runs, with the subsequent emulator-based analysis assessing the uncertainty in the volcanic forcings predicted by each model. In this poster presentation we give an outline of the "PoEMS" analysis, describing the uncertain parameters to be varied and the relevance to further understanding differences identified in previous international stratospheric aerosol assessments.
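The emulator-plus-Monte-Carlo idea can be sketched as follows. This is not the Lee et al. implementation; it substitutes a generic scikit-learn Gaussian process for their emulator, a toy three-parameter function for the perturbed physics ensemble, and a crude binning estimate of first-order variance-based indices in place of a full Sobol'/FAST decomposition.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)

# Stand-in for a perturbed-physics ensemble: a toy "model" of 3 parameters.
def toy_model(x):
    return np.sin(np.pi * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]

X_train = rng.uniform(0, 1, (120, 3))          # design of the ensemble
y_train = toy_model(X_train)

gp = GaussianProcessRegressor(ConstantKernel() * RBF([0.2] * 3), normalize_y=True)
gp.fit(X_train, y_train)                       # train the emulator

# Cheap Monte Carlo on the emulator, then crude first-order indices by binning.
X_mc = rng.uniform(0, 1, (20000, 3))
y_mc = gp.predict(X_mc)
var_y = y_mc.var()
for i in range(3):
    bins = np.digitize(X_mc[:, i], np.linspace(0, 1, 21))
    cond_means = [y_mc[bins == b].mean() for b in np.unique(bins)]
    print(f"parameter {i}: first-order index ~ {np.var(cond_means) / var_y:.2f}")
```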
Kim, Hyun-Sun; Yi, Seung-Muk
2009-01-01
Quantifying methane emission from landfills is important to evaluating measures for reduction of greenhouse gas (GHG) emissions. To quantify GHG emissions and identify sensitive parameters for their measurement, a new assessment approach consisting of six different scenarios was developed using Tier 1 (mass balance method) and Tier 2 (the first-order decay method) methodologies for GHG estimation from landfills, suggested by the Intergovernmental Panel on Climate Change (IPCC). Methane emissions using Tier 1 correspond to trends in disposed waste amount, whereas emissions from Tier 2 gradually increase as disposed waste decomposes over time. The results indicate that the amount of disposed waste and the decay rate for anaerobic decomposition were decisive parameters for emission estimation using Tier 1 and Tier 2. As for the different scenarios, methane emissions were highest under Scope 1 (scenarios I and II), in which all landfills in Korea were regarded as one landfill. Methane emissions under scenarios III, IV, and V, which separated the dissimilated fraction of degradable organic carbon (DOC(F)) by waste type and/or revised the methane correction factor (MCF) by waste layer, were underestimated compared with scenarios II and III. This indicates that the methodology of scenario I, which has been used in most previous studies, may lead to an overestimation of methane emissions. Additionally, separate DOC(F) and revised MCF were shown to be important parameters for methane emission estimation from landfills, and revised MCF by waste layer played an important role in emission variations. Therefore, more precise information on each landfill and careful determination of parameter values and characteristics of disposed waste in Korea should be used to accurately estimate methane emissions from landfills.
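The contrast between the two tiers can be sketched as below. The functions are loosely patterned on the IPCC mass-balance and first-order-decay equations but are simplified (no delay time, a single waste category), and all coefficient values are illustrative rather than Korea-specific.

```python
import numpy as np

def tier1_mass_balance(waste_t, doc, doc_f, mcf, f_ch4=0.5, recovery=0.0, ox=0.0):
    """Tier 1-style mass balance: CH4 from waste deposited in year t is
    assumed to be released in that same year (16/12 converts C to CH4)."""
    ch4 = waste_t * mcf * doc * doc_f * f_ch4 * (16.0 / 12.0)
    return (ch4 - recovery) * (1.0 - ox)

def tier2_first_order_decay(waste_by_year, k, doc, doc_f, mcf, f_ch4=0.5):
    """Tier 2-style first-order decay: waste deposited in year x contributes
    to emissions in every later year t in proportion to exp(-k*(t - x))."""
    years = np.arange(len(waste_by_year))
    l0 = mcf * doc * doc_f * f_ch4 * (16.0 / 12.0)   # CH4 generation potential
    emissions = np.zeros_like(years, dtype=float)
    for x, w in enumerate(waste_by_year):
        age = years[x:] - x
        emissions[x:] += w * l0 * (np.exp(-k * age) - np.exp(-k * (age + 1)))
    return emissions

waste = [1.0e5] * 10   # mass of waste landfilled each year (toy values)
print(tier1_mass_balance(1.0e5, doc=0.14, doc_f=0.5, mcf=1.0))
print(tier2_first_order_decay(waste, k=0.09, doc=0.14, doc_f=0.5, mcf=1.0))
```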
Planck data versus large scale structure: Methods to quantify discordance
NASA Astrophysics Data System (ADS)
Charnock, Tom; Battye, Richard A.; Moss, Adam
2017-06-01
Discordance in the Λ cold dark matter cosmological model can be seen by comparing parameters constrained by cosmic microwave background (CMB) measurements to those inferred by probes of large scale structure. Recent improvements in observations, including final data releases from both Planck and SDSS-III BOSS, as well as improved astrophysical uncertainty analysis of CFHTLenS, allows for an update in the quantification of any tension between large and small scales. This paper is intended, primarily, as a discussion on the quantifications of discordance when comparing the parameter constraints of a model when given two different data sets. We consider Kullback-Leibler divergence, comparison of Bayesian evidences and other statistics which are sensitive to the mean, variance and shape of the distributions. However, as a byproduct, we present an update to the similar analysis in [R. A. Battye, T. Charnock, and A. Moss, Phys. Rev. D 91, 103508 (2015), 10.1103/PhysRevD.91.103508], where we find that, considering new data and treatment of priors, the constraints from the CMB and from a combination of large scale structure (LSS) probes are in greater agreement and any tension only persists to a minor degree. In particular, we find the parameter constraints from the combination of LSS probes which are most discrepant with the Planck 2015 +Pol +BAO parameter distributions can be quantified at a ˜2.55 σ tension using the method introduced in [R. A. Battye, T. Charnock, and A. Moss, Phys. Rev. D 91, 103508 (2015), 10.1103/PhysRevD.91.103508]. If instead we use the distributions constrained by the combination of LSS probes which are in greatest agreement with those from Planck 2015 +Pol +BAO this tension is only 0.76 σ .
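One of the statistics mentioned above, the Kullback-Leibler divergence, has a closed form when the two parameter posteriors are approximated as Gaussians; a sketch follows. The means and covariances are invented two-parameter examples, not the actual Planck or LSS constraints.

```python
import numpy as np

def kl_gaussian(mu0, cov0, mu1, cov1):
    """KL(N0 || N1) for Gaussian approximations to two parameter posteriors."""
    cov1_inv = np.linalg.inv(cov1)
    d = mu0.size
    diff = mu1 - mu0
    return 0.5 * (np.trace(cov1_inv @ cov0)
                  + diff @ cov1_inv @ diff
                  - d
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

# Toy two-parameter constraints (purely illustrative numbers):
mu_cmb, cov_cmb = np.array([0.83, 0.31]), np.diag([0.010, 0.012]) ** 2
mu_lss, cov_lss = np.array([0.76, 0.29]), np.diag([0.030, 0.020]) ** 2
print(kl_gaussian(mu_cmb, cov_cmb, mu_lss, cov_lss))
```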
Orsatti, Laura; Speziale, Roberto; Orsale, Maria Vittoria; Caretti, Fulvia; Veneziano, Maria; Zini, Matteo; Monteagudo, Edith; Lyons, Kathryn; Beconi, Maria; Chan, Kelvin; Herbst, Todd; Toledo-Sherman, Leticia; Munoz-Sanjuan, Ignacio; Bonelli, Fabio; Dominguez, Celia
2015-03-25
Neuroactive metabolites in the kynurenine pathway of tryptophan catabolism are associated with neurodegenerative disorders. Tryptophan is transported across the blood-brain barrier and converted via the kynurenine pathway to N-formyl-L-kynurenine, which is further degraded to L-kynurenine. This metabolite can then generate a group of metabolites called kynurenines, most of which have neuroactive properties. The association of tryptophan catabolic pathway alterations with various central nervous system (CNS) pathologies has raised interest in analytical methods to accurately quantify kynurenines in body fluids. Here we describe a rapid and sensitive reverse-phase HPLC-MS/MS method to quantify L-kynurenine (KYN), kynurenic acid (KYNA), 3-hydroxy-L-kynurenine (3HK) and anthranilic acid (AA) in rat plasma. Our goal was to quantify these metabolites in a single run; given their different physico-chemical properties, major efforts were devoted to developing a chromatographic method suitable for all metabolites, which involves plasma protein precipitation with acetonitrile followed by separation by C18 RP chromatography and detection by electrospray mass spectrometry. The quantitation range was 0.098-100 ng/ml for 3HK, 9.8-20,000 ng/ml for KYN, and 0.49-1000 ng/ml for KYNA and AA. The method was linear (r>0.9963) and validation parameters were within the acceptance range (calibration standard and QC accuracy within ±30%). Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Moslehi, M.; de Barros, F.
2017-12-01
Complexity of hydrogeological systems arises from their multi-scale heterogeneity and from insufficient measurements of underlying parameters such as hydraulic conductivity and porosity. An inadequate characterization of hydrogeological properties can significantly decrease the trustworthiness of numerical models that predict groundwater flow and solute transport. Therefore, a variety of data assimilation methods have been proposed to estimate hydrogeological parameters from spatially scarce data by incorporating the governing physical models. In this work, we propose a novel framework for evaluating the performance of these estimation methods. We focus on the Ensemble Kalman Filter (EnKF) approach, a widely used data assimilation technique that reconciles multiple sources of measurements to sequentially estimate model parameters such as the hydraulic conductivity. Several methods have been used in the literature to quantify the accuracy of the estimations obtained by EnKF, including rank histograms, RMSE and ensemble spread. However, these commonly used methods do not account for the spatial information and variability of geological formations, which can cause hydraulic conductivity fields with very different spatial structures to have similar histograms or RMSE. We propose a vision-based approach that quantifies the accuracy of estimations by considering the spatial structure embedded in the estimated fields. Our new approach adapts a metric from image analysis, Color Coherent Vectors (CCV), to evaluate the accuracy of fields estimated by EnKF. CCV is a histogram-based technique for comparing images that incorporates spatial information. We represent estimated fields as digital three-channel images and use CCV to compare and quantify the accuracy of estimations. The sensitivity of CCV to spatial information makes it a suitable metric for assessing the performance of spatial data assimilation techniques. For various configurations of the data assimilation method, such as the number, layout, and type of measurements, we compare the performance of CCV with other metrics such as RMSE. By simulating hydrogeological processes using estimated and true fields, we observe that CCV outperforms the other existing evaluation metrics.
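For context, a minimal stochastic EnKF analysis step is sketched below: the parameter ensemble is updated with a Kalman gain built from ensemble covariances between parameters and predicted observations. The observation operator, ensemble size and noise level are toy choices, not the hydrogeological configuration used in the study.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_error_var, rng):
    """Stochastic EnKF analysis step for parameter estimation.

    ensemble     : (n_members, n_params) prior samples, e.g. log-conductivities
    obs          : (n_obs,) measurements (e.g. heads at a few wells)
    obs_operator : callable mapping one parameter vector to predicted obs
    """
    n_mem, _ = ensemble.shape
    predicted = np.array([obs_operator(m) for m in ensemble])   # (n_mem, n_obs)
    A = ensemble - ensemble.mean(axis=0)
    Y = predicted - predicted.mean(axis=0)
    cov_py = A.T @ Y / (n_mem - 1)                              # params vs obs
    cov_yy = Y.T @ Y / (n_mem - 1) + obs_error_var * np.eye(obs.size)
    gain = cov_py @ np.linalg.inv(cov_yy)                       # Kalman gain
    perturbed_obs = obs + rng.normal(0, np.sqrt(obs_error_var), predicted.shape)
    return ensemble + (perturbed_obs - predicted) @ gain.T

# Toy example: infer two parameters observed through a linear operator.
rng = np.random.default_rng(2)
truth = np.array([1.5, -0.5])
H = lambda p: np.array([p[0] + p[1], p[0] - 2 * p[1]])
prior = rng.normal(0, 1, (200, 2))
posterior = enkf_update(prior, H(truth) + rng.normal(0, 0.1, 2), H, 0.01, rng)
print(posterior.mean(axis=0))   # should move toward the true values
```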
NASA Astrophysics Data System (ADS)
Khan, Tanvir R.; Perlinger, Judith A.
2017-10-01
Despite considerable effort to develop mechanistic dry particle deposition parameterizations for atmospheric transport models, current knowledge has been inadequate to propose quantitative measures of the relative performance of available parameterizations. In this study, we evaluated the performance of five dry particle deposition parameterizations developed by Zhang et al. (2001) (Z01), Petroff and Zhang (2010) (PZ10), Kouznetsov and Sofiev (2012) (KS12), Zhang and He (2014) (ZH14), and Zhang and Shao (2014) (ZS14), respectively. The evaluation was performed in three dimensions: model ability to reproduce observed deposition velocities, Vd (accuracy); the influence of imprecision in input parameter values on the modeled Vd (uncertainty); and identification of the most influential parameter(s) (sensitivity). The accuracy of the modeled Vd was evaluated using observations obtained from five land use categories (LUCs): grass, coniferous and deciduous forests, natural water, and ice/snow. To ascertain the uncertainty in modeled Vd, and quantify the influence of imprecision in key model input parameters, a Monte Carlo uncertainty analysis was performed. The Sobol' sensitivity analysis was conducted with the objective to determine the parameter ranking from the most to the least influential. Comparing the normalized mean bias factors (indicators of accuracy), we find that the ZH14 parameterization is the most accurate for all LUCs except for coniferous forest, for which it is second most accurate. From Monte Carlo simulations, the estimated mean normalized uncertainties in the modeled Vd obtained for seven particle sizes (ranging from 0.005 to 2.5 µm) for the five LUCs are 17, 12, 13, 16, and 27 % for the Z01, PZ10, KS12, ZH14, and ZS14 parameterizations, respectively. From the Sobol' sensitivity results, we suggest that the parameter rankings vary by particle size and LUC for a given parameterization. Overall, for dp = 0.001 to 1.0 µm, friction velocity was one of the three most influential parameters in all parameterizations. For giant particles (dp = 10 µm), relative humidity was the most influential parameter. Because it is the least complex of the five parameterizations, and it has the greatest accuracy and least uncertainty, we propose that the ZH14 parameterization is currently superior for incorporation into atmospheric transport models.
NASA Technical Reports Server (NTRS)
Cloutis, E. A.; Lambert, J.; Smith, D. G. W.; Gaffey, M. J.
1987-01-01
High-resolution visible and near-infrared diffuse reflectance spectra of mafic silicates can be deconvolved to yield quantitative information concerning mineral mixture properties, and the results can be directly applied to remotely sensed data. Spectral reflectance measurements of laboratory mixtures of olivine, orthopyroxene, and clinopyroxene with known chemistries, phase abundances, and particle size distributions have been utilized to develop correlations between spectral properties and the physicochemical parameters of the samples. A large number of mafic silicate spectra were measured and examined for systematic variations in spectral properties as a function of chemistry, phase abundance, and particle size. Three classes of spectral parameters (ratioed, absolute, and wavelength) were examined for any correlations. Each class is sensitive to particular mafic silicate properties. Spectral deconvolution techniques have been developed for quantifying, with varying degrees of accuracy, the assemblage properties (chemistry, phase abundance, and particle size).
Reliability Analysis for AFTI-F16 SRFCS Using ASSIST and SURE
NASA Technical Reports Server (NTRS)
Wu, N. Eva
2001-01-01
This paper reports the results of a study on reliability analysis of an AFTI-F16 Self-Repairing Flight Control System (SRFCS) using the software tools SURE (Semi-Markov Unreliability Range Evaluator) and ASSIST (Abstract Semi-Markov Specification Interface to the SURE Tool). The purpose of the study is to investigate the potential utility of the software tools in the ongoing effort of the NASA Aviation Safety Program, where the class of systems must be extended beyond the originally intended serving class of electronic digital processors. The study concludes that SURE and ASSIST are applicable to reliability analysis of flight control systems. They are especially efficient for sensitivity analysis that quantifies the dependence of system reliability on model parameters. The study also confirms an earlier finding on the dominant role of a parameter called failure coverage. The paper also remarks on issues related to the improvement of coverage and the optimization of the redundancy level.
Synchronization of "light-sensitive" Hindmarsh-Rose neurons
NASA Astrophysics Data System (ADS)
Castanedo-Guerra, Isaac; Steur, Erik; Nijmeijer, Henk
2018-04-01
The suprachiasmatic nucleus is a network of synchronized neurons whose electrical activity follows a 24 h cycle. The synchronization phenomenon (among these neurons) is not completely understood. In this work we study, via experiments and numerical simulations, the phenomenon in which the synchronization threshold changes under the influence of an external (bifurcation) parameter in coupled Hindmarsh-Rose neurons. This parameter "shapes" the activity of the individual neurons the same way as some neurons in the brain react to light. We corroborate this experimental finding with numerical simulations by quantifying the amount of synchronization using Pearson's correlation coefficient. In order to address the local stability problem of the synchronous state, Floquet theory is applied in the case where the dynamic systems show continuous periodic solutions. These results show how the sufficient coupling strength for synchronization between these neurons is affected by an external cue (e.g. light).
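Quantifying synchronization with Pearson's correlation coefficient amounts to correlating the two recorded membrane-potential traces; a small sketch with synthetic burst-like signals (not Hindmarsh-Rose simulations) is shown below.

```python
import numpy as np

def synchronization_index(x, y):
    """Pearson correlation coefficient between two membrane-potential traces;
    values near 1 indicate (practical) synchronization of the two neurons."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.corrcoef(x, y)[0, 1])

# Toy traces: two noisy burst-like signals with a small phase offset.
rng = np.random.default_rng(5)
t = np.linspace(0, 10, 5000)
envelope1 = (np.sin(0.5 * np.pi * t) > 0)
envelope2 = (np.sin(0.5 * np.pi * (t - 0.02)) > 0)
v1 = np.sin(2 * np.pi * t) * envelope1 + 0.05 * rng.standard_normal(t.size)
v2 = np.sin(2 * np.pi * (t - 0.02)) * envelope2 + 0.05 * rng.standard_normal(t.size)
print(synchronization_index(v1, v2))
```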
A Backscatter-Lidar Forward-Operator
NASA Astrophysics Data System (ADS)
Geisinger, Armin; Behrendt, Andreas; Wulfmeyer, Volker; Vogel, Bernhard; Mattis, Ina; Flentje, Harald; Förstner, Jochen; Potthast, Roland
2015-04-01
We have developed a forward-operator which is capable of calculating virtual lidar profiles from atmospheric state simulations. The operator allows us to compare lidar measurements and model simulations based on the same measurement parameter: the lidar backscatter profile. This method simplifies qualitative comparisons and also makes quantitative comparisons possible, including statistical error quantification. Implemented into an aerosol-capable model system, the operator will act as a component for assimilating backscatter-lidar measurements. As many weather services already maintain networks of backscatter lidars, such data are already acquired operationally. To estimate and quantify errors due to missing or uncertain aerosol information, we started sensitivity studies on several scattering parameters, such as the aerosol size and the real and imaginary parts of the complex index of refraction. Furthermore, quantitative and statistical comparisons between measurements and virtual measurements are shown in this study, obtained by applying the backscatter-lidar forward-operator to model output.
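A minimal version of such a forward operator maps modelled backscatter and extinction profiles onto the attenuated backscatter a lidar would observe (the backscatter multiplied by the two-way transmission). The sketch below assumes a single elastic wavelength, ignores overlap and multiple scattering, and uses an invented aerosol layer and lidar ratio; it is not the operator described in the study.

```python
import numpy as np

def attenuated_backscatter(z, backscatter, extinction):
    """Minimal elastic backscatter-lidar forward operator.

    Maps profiles of backscatter beta(z) [m^-1 sr^-1] and extinction
    alpha(z) [m^-1] onto beta_att(z) = beta(z) * exp(-2 * int_0^z alpha dz').
    """
    z = np.asarray(z, float)
    # Cumulative optical depth along the line of sight (trapezoidal rule).
    tau = np.concatenate(
        ([0.0], np.cumsum(0.5 * (extinction[1:] + extinction[:-1]) * np.diff(z)))
    )
    return backscatter * np.exp(-2.0 * tau)

# Illustrative profile: a well-mixed aerosol layer below 2 km.
z = np.linspace(0, 5000, 251)
alpha = np.where(z < 2000, 1.0e-4, 1.0e-6)   # extinction, m^-1
beta = alpha / 50.0                          # assumed lidar ratio of 50 sr
print(attenuated_backscatter(z, beta, alpha)[:5])
```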
DOE Office of Scientific and Technical Information (OSTI.GOV)
Upadhyay, Piyush; Rohatgi, Aashish; Stephens, Elizabeth V.
2015-02-18
Al alloy AA7075 sheets were deformed at room temperature at strain rates exceeding 1000 /s using the electrohydraulic forming (EHF) technique. A method that combines high speed imaging and the digital image correlation technique, developed at Pacific Northwest National Laboratory, is used to investigate the high strain rate deformation behavior of AA7075. For strain-rate sensitive materials, the ability to accurately model their high-rate deformation behavior depends on the ability to accurately quantify the strain rate that the material is subjected to. This work investigates the objectivity of software-calculated strain and strain rate by varying different parameters within commonly used, commercially available digital image correlation software. Except for times very close to crack opening, the calculated strain and strain rates are very consistent and independent of the adjustable parameters of the software.
Cost of ownership for inspection equipment
NASA Astrophysics Data System (ADS)
Dance, Daren L.; Bryson, Phil
1993-08-01
Cost of Ownership (CoO) models are increasingly a part of the semiconductor equipment evaluation and selection process. These models enable semiconductor manufacturers and equipment suppliers to quantify a system in terms of dollars per wafer. Because of the complex nature of the semiconductor manufacturing process, there are several key attributes that must be considered in order to accurately reflect the true 'cost of ownership'. While most CoO work to date has been applied to production equipment, the need to understand cost of ownership for inspection and metrology equipment presents unique challenges. Critical parameters such as detection sensitivity as a function of size and type of defect are not included in current CoO models yet are, without question, major factors in the technical evaluation process and life-cycle cost. This paper illustrates the relationship between these parameters, as components of the alpha and beta risk, and cost of ownership.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slosman, D.; Susskind, H.; Bossuyt, A.
1986-03-01
Ventilation imaging can be improved by gating scintigraphic data with the respiratory cycle using temporal Fourier analysis (TFA) to quantify the temporal behavior of the ventilation. Sixteen consecutive images, representing equal-time increments of an average respiratory cycle, were produced by TFA in the posterior view on a pixel-by-pixel basis. An Efficiency Index (EFF), defined as the ratio of the summation of all the differences between maximum and minimum counts for each pixel to that for the entire lung during the respiratory cycle, was derived to describe the pattern of ventilation. The gated ventilation studies were carried out with Xe-127 in 12 subjects: normal lung function (4), small airway disease (2), COPD (5), and restrictive disease (1). EFF for the first three harmonics correlated linearly with FEV1 (r = 0.701, p < 0.01). This approach is suggested as a very sensitive method to quantify the extent and regional distribution of airway obstruction.
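The Efficiency Index defined above can be computed directly from the gated frames; the sketch below follows that definition (per-pixel max-minus-min count differences summed, divided by the whole-lung max-minus-min difference) on a synthetic count array, which stands in for real Xe-127 data. This reading of the definition is an assumption on my part.

```python
import numpy as np

def efficiency_index(gated_counts):
    """EFF from gated counts with shape (n_frames, ny, nx): sum over pixels of
    (max - min counts over the cycle), divided by the (max - min) of the
    whole-lung counts over the same cycle."""
    per_pixel = gated_counts.max(axis=0) - gated_counts.min(axis=0)
    whole_lung = gated_counts.sum(axis=(1, 2))
    return per_pixel.sum() / (whole_lung.max() - whole_lung.min())

# Synthetic 16-frame study on an 8x8 "lung" region (illustrative only).
rng = np.random.default_rng(3)
cycle = 200 * np.sin(2 * np.pi * np.arange(16) / 16)[:, None, None]
frames = 1000 + cycle * rng.uniform(0.2, 1.0, (8, 8)) + rng.poisson(20, (16, 8, 8))
print(efficiency_index(frames))
```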
Pawar, Rajesh; Bromhal, Grant; Carroll, Susan; ...
2014-12-31
Risk assessment for geologic CO₂ storage, including quantification of risks, is an area of active investigation. The National Risk Assessment Partnership (NRAP) is a US Department of Energy (US-DOE) effort focused on developing a defensible, science-based methodology and platform for quantifying risk profiles at geologic CO₂ sequestration sites. NRAP has been developing a methodology that centers around development of an integrated assessment model (IAM) using a system modeling approach to quantify risks and risk profiles. The IAM has been used to calculate risk profiles for a few key potential impacts due to potential CO₂ and brine leakage. The simulation results are also used to determine long-term storage security relationships and to compare the long-term storage effectiveness to the IPCC storage permanence goal. Additionally, we demonstrate the application of the IAM for uncertainty quantification in order to determine the parameters to which the uncertainty in model results is most sensitive.
On the usage of ultrasound computational models for decision making under ambiguity
NASA Astrophysics Data System (ADS)
Dib, Gerges; Sexton, Samuel; Prowant, Matthew; Crawford, Susan; Diaz, Aaron
2018-04-01
Computer modeling and simulation is becoming pervasive within the non-destructive evaluation (NDE) industry as a convenient tool for designing and assessing inspection techniques. This raises a pressing need for developing quantitative techniques for demonstrating the validity and applicability of the computational models. Computational models provide deterministic results based on deterministic and well-defined input, or stochastic results based on inputs defined by probability distributions. However, computational models cannot account for the effects of personnel, procedures, and equipment, resulting in ambiguity about the efficacy of inspections based on guidance from computational models only. In addition, ambiguity arises when model inputs, such as the representation of realistic cracks, cannot be defined deterministically, probabilistically, or by intervals. In this work, Pacific Northwest National Laboratory demonstrates the ability of computational models to represent field measurements under known variabilities, and quantify the differences using maximum amplitude and power spectrum density metrics. Sensitivity studies are also conducted to quantify the effects of different input parameters on the simulation results.
Silvestro, Paolo Cosmo; Pignatti, Stefano; Yang, Hao; Yang, Guijun; Pascucci, Simone; Castaldi, Fabio; Casa, Raffaele
2017-01-01
Process-based models can be usefully employed for the assessment of field and regional-scale impact of drought on crop yields. However, in many instances, especially when they are used at the regional scale, it is necessary to identify the parameters and input variables that most influence the outputs and to assess how their influence varies when climatic and environmental conditions change. In this work, two different crop models able to represent yield response to water, Aquacrop and SAFYE, were compared, with the aim of quantifying their complexity and plasticity through Global Sensitivity Analysis (GSA), using the Morris and EFAST (Extended Fourier Amplitude Sensitivity Test) techniques, for moderate to strong water-limited climate scenarios. Although the rankings of the sensitivity indices were influenced by the scenarios used, the correlation among the rankings, higher for SAFYE than for Aquacrop, assessed by the top-down correlation coefficient (TDCC), revealed clear patterns. Parameters and input variables related to phenology and to water stress physiological processes were found to be the most influential for Aquacrop. For SAFYE, it was found that the water stress could be inferred indirectly from the processes regulating leaf growth, described in the original SAFY model. SAFYE has a lower complexity and plasticity than Aquacrop, making it more suitable for less data-demanding regional-scale applications when the only objective is the assessment of crop yield and no detailed information is sought on the mechanisms of the stress factors affecting its limitations.
The Design and Operation of Ultra-Sensitive and Tunable Radio-Frequency Interferometers.
Cui, Yan; Wang, Pingshan
2014-12-01
Dielectric spectroscopy (DS) is an important technique for scientific and technological investigations in various areas. DS sensitivity and operating frequency ranges are critical for many applications, including lab-on-chip development, where sample volumes are small with a wide range of dynamic processes to probe. In this work, we present the design and operation considerations of radio-frequency (RF) interferometers that are based on power dividers (PDs) and quadrature hybrids (QHs). Such interferometers are proposed to address the sensitivity and frequency tuning challenges of current DS techniques. Verified algorithms together with mathematical models are presented to quantify material properties from scattering parameters for three common transmission line sensing structures, i.e., coplanar waveguides (CPWs), conductor-backed CPWs, and microstrip lines. A high-sensitivity and stable QH-based interferometer is demonstrated by measuring a glucose-water solution at a concentration level ten times lower than that of some recent RF sensors, while our sample volume is ~1 nL. Composition analysis of ternary mixture solutions is also demonstrated with a PD-based interferometer. Further work is needed to address issues such as system automation, model improvement at high frequencies, and interferometer scaling.
Mathew, Ribu; Sankar, A Ravi
2018-05-01
In this paper, we present the design and optimization of a rectangular piezoresistive composite silicon dioxide nanocantilever sensor. Unlike the conventional design approach, we perform the sensor optimization by not only considering its electro-mechanical response but also incorporating the impact of self-heating induced thermal drift in its terminal characteristics. Through extensive simulations, we first analyze and quantify the inaccuracies due to the self-heating effect induced by the geometrical and intrinsic parameters of the piezoresistor. Then, by optimizing the ratio of electrical sensitivity to thermal sensitivity, defined as the sensitivity ratio (υ), we improve the sensor performance and measurement reliability. Results show that, to ensure υ ≥ 1, shorter and wider piezoresistors are better. In addition, contrary to the general belief that a high piezoresistor doping concentration reduces thermal sensitivity in piezoresistive sensors, it is observed that to ensure υ ≥ 1 the doping concentration (p) should be in the range 1E18 cm-3 ≤ p ≤ 1E19 cm-3. Finally, we provide a set of design guidelines that will help NEMS engineers to optimize the performance of such sensors for chemical and biological sensing applications.
Thomas, M E; Klinkenberg, D; Bergwerff, A A; van Eerden, E; Stegeman, J A; Bouma, A
2010-06-01
Salmonella enterica serovar Enteritidis (SE) is an important source of food-related diarrhoea in humans, and table eggs are considered the primordial source of contamination of the human food chain. Using eggs collected at egg-packing stations as samples could be a convenient strategy to detect colonization of layer flocks. The aim of this study was to evaluate egg yolk anti-Salmonella antibody detection using suspension array analysis. An egg yolk panel from contact-infected and non-colonized laying hens was used for the evaluation. Receiver Operating Characteristic (ROC) curves were generated to define a cut-off value and to assess the overall accuracy of the assay. The diagnostic sensitivity and specificity were estimated by maximum likelihood. Sensitivity was quantified at the hen level and at the sample level, and also as a function of time since colonization. The area under the ROC curve was estimated at 0.984 (se 0.006, P<0.001). Of all colonized contact-infected hens, 67.6% [95% CI: 46.8, 100] developed an antibody response, which was detectable 17.4 days [14.3, 26.9] after colonization. In total, 98% [95.4, 99.4] of the 'immunopositive' hens had test-positive eggs. The overall sensitivity of the immunological test was 66.7% [45.9, 98.7] and the specificity was 98.5% [97.8, 99.1]. This study provided essential parameters for optimizing surveillance programs based on detection of antibodies, and indicates that immunology based on examination of egg yolk gives important information about the Salmonella status of the flock. (c) 2010 Elsevier B.V. All rights reserved.
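The ROC construction used here can be sketched with a simple threshold sweep over assay scores; the AUC is then the trapezoidal area under the curve. The scores and class labels below are simulated stand-ins for yolk antibody measurements, and ties between scores are not handled.

```python
import numpy as np

def roc_curve_points(scores, labels):
    """ROC points (FPR, TPR) obtained by sweeping the cut-off over all scores."""
    order = np.argsort(scores)[::-1]
    labels = np.asarray(labels, float)[order]
    tpr = np.cumsum(labels) / labels.sum()
    fpr = np.cumsum(1 - labels) / (1 - labels).sum()
    return np.concatenate(([0.0], fpr)), np.concatenate(([0.0], tpr))

def auc(fpr, tpr):
    """Area under the ROC curve by the trapezoidal rule."""
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0))

# Toy antibody scores: label 1 = colonized hen, 0 = non-colonized hen.
rng = np.random.default_rng(4)
labels = np.r_[np.ones(60), np.zeros(140)]
scores = np.r_[rng.normal(2.0, 1.0, 60), rng.normal(0.0, 1.0, 140)]
print(auc(*roc_curve_points(scores, labels)))
```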
Effects of climate change on aerosol concentrations in Europe
NASA Astrophysics Data System (ADS)
Megaritis, Athanasios G.; Fountoukis, Christos; Pandis, Spyros N.
2013-04-01
High concentrations of particulate matter less than 2.5 μm in size (PM2.5), ozone and other major constituents of air pollution have adverse effects on human health, visibility and ecosystems (Seinfeld and Pandis, 2006), and are strongly influenced by meteorology. Emissions control policy is currently made assuming that climate will remain constant in the future. However, climate change over the next decades is expected to be significant (IPCC, 2007) and may impact local and regional air quality. Determining the sensitivity of the concentrations of air pollutants to climate change is an important step toward estimating future air quality. In this study we applied PMCAMx (Fountoukis et al., 2011), a three-dimensional chemical transport model, over Europe in order to quantify the individual effects of various meteorological parameters on fine particulate matter (PM2.5) concentrations. A suite of perturbations in various meteorological factors, such as temperature, wind speed, absolute humidity and precipitation, were imposed separately on base case conditions to determine the sensitivities of PM2.5 concentrations and composition to these parameters. Different simulation periods (summer, autumn 2008 and winter 2009) are used to also examine the seasonal dependence of the air quality-climate interactions. The results of these sensitivity simulations suggest that there is an important link between changes in meteorology and PM2.5 levels. We quantify through separate sensitivity simulations the processes that are mainly responsible for the final predicted changes in PM2.5 concentration and composition. The predicted PM2.5 response to these meteorology perturbations was found to be quite variable in space and time. These results suggest that the changes in concentrations caused by changes in climate should be taken into account in long-term air quality planning. References: Fountoukis, C., Racherla, P. N., Denier van der Gon, H. A. C., Polymeneas, P., Charalampidis, P. E., Pilinis, C., Wiedensohler, A., Dall'Osto, M., O'Dowd, C., and Pandis, S. N.: Evaluation of a three-dimensional chemical transport model (PMCAMx) in the European domain during the EUCAARI May 2008 campaign, Atmos. Chem. Phys., 11, 10331-10347, 2011. Intergovernmental Panel on Climate Change (IPCC): Fourth Assessment Report, Summary for Policymakers, 2007. Seinfeld, J. H., and Pandis, S. N.: Atmospheric Chemistry and Physics: From Air Pollution to Climate Change, 2nd ed., John Wiley and Sons, Hoboken, NJ, 2006.
NASA Astrophysics Data System (ADS)
Garavaglia, F.; Seyve, E.; Gottardi, F.; Le Lay, M.; Gailhard, J.; Garçon, R.
2014-12-01
MORDOR is a conceptual hydrological model extensively used in Électricité de France (EDF, French electric utility company) operational applications: (i) hydrological forecasting, (ii) flood risk assessment, (iii) water balance and (iv) climate change studies. MORDOR is a lumped, reservoir, elevation-based model with hourly or daily areal rainfall and air temperature as the driving input data. The principal hydrological processes represented are evapotranspiration, direct and indirect runoff, groundwater, snow accumulation and melt, and routing. The model has been intensively used at EDF for more than 20 years, in particular for modeling French mountainous watersheds. For parameter calibration we propose and test alternative multi-criteria techniques based on two specific approaches: automatic calibration using single-objective functions and a priori parameter calibration founded on hydrological watershed features. The automatic calibration approach uses single-objective functions, based on Kling-Gupta efficiency, to quantify the agreement between the simulated and observed runoff, focusing on four different runoff samples: (i) the time series, (ii) the annual hydrological regime, (iii) monthly cumulative distribution functions and (iv) recession sequences. The primary purpose of this study is to analyze the definition and sensitivity of MORDOR parameters by testing different calibration techniques in order to: (i) simplify the model structure, (ii) increase the calibration-validation performance of the model and (iii) reduce the equifinality problem of the calibration process. We propose an alternative calibration strategy that reaches these goals. The analysis is illustrated by calibrating the MORDOR model to daily data for 50 watersheds located in French mountainous regions.
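The Kling-Gupta efficiency underlying those single-objective functions decomposes agreement into correlation, variability ratio and bias ratio; a sketch of the original 2009 formulation is given below with invented runoff values (the EDF implementation and its specific runoff samples are not reproduced here).

```python
import numpy as np

def kling_gupta_efficiency(sim, obs):
    """KGE = 1 - sqrt((r - 1)^2 + (alpha - 1)^2 + (beta - 1)^2), where r is
    the linear correlation, alpha the ratio of standard deviations and beta
    the ratio of means between simulated and observed runoff."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# Toy daily runoff series (mm/day), purely illustrative:
obs = np.array([1.2, 1.0, 3.5, 6.2, 4.1, 2.0, 1.5, 1.1])
sim = np.array([1.0, 1.1, 3.0, 5.8, 4.5, 2.4, 1.6, 1.2])
print(kling_gupta_efficiency(sim, obs))
```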
Yu, Manzhu; Yang, Chaowei
2016-01-01
Dust storms are devastating natural disasters that cost billions of dollars and many human lives every year. Using the Non-Hydrostatic Mesoscale Dust Model (NMM-dust), this research studies how different spatiotemporal resolutions of two input parameters (soil moisture and greenness vegetation fraction) impact the sensitivity and accuracy of a dust model. Experiments are conducted by simulating dust concentration during July 1-7, 2014, for the target area covering part of Arizona and California (31, 37, -118, -112), with a resolution of ~ 3 km. Using ground-based and satellite observations, this research validates the temporal evolution and spatial distribution of dust storm output from the NMM-dust, and quantifies model error using four evaluation metrics (mean bias error, root mean square error, correlation coefficient and fractional gross error). Results showed that the default configuration of NMM-dust (with a low spatiotemporal resolution of both input parameters) generates an overestimation of Aerosol Optical Depth (AOD). Although it is able to qualitatively reproduce the temporal trend of the dust event, the default configuration of NMM-dust cannot fully capture its actual spatial distribution. Adjusting the spatiotemporal resolution of the soil moisture and vegetation cover datasets showed that the model is sensitive to both parameters. Increasing the spatiotemporal resolution of soil moisture effectively reduces the model's overestimation of AOD, while increasing the spatiotemporal resolution of vegetation cover changes the spatial distribution of the reproduced dust storm. The adjustment of both parameters enables NMM-dust to capture the spatial distribution of dust storms and to reproduce more accurate dust concentrations.
Bignardi, Chiara; Cavazza, Antonella; Laganà, Carmen; Salvadeo, Paola; Corradini, Claudio
2018-01-01
The interest in "substances of emerging concern" released from objects intended to come into contact with food has been growing recently. Such substances can be found in traces in simulants and in food products put in contact with plastic materials. In this context, it is important to set up analytical systems characterized by high sensitivity and to improve detection parameters to enhance signals. This work aimed to optimize a method based on UHPLC coupled to high-resolution mass spectrometry to quantify the most common plastic additives and to detect the presence of polymer degradation products and coloring agents migrating from reusable plastic containers. The optimization of mass spectrometric parameter settings for quantitative analysis of additives was achieved by a chemometric approach, using full factorial and d-optimal experimental designs, which allowed evaluation of possible interactions between the investigated parameters. Results showed that the optimized method had improved sensitivity with respect to existing methods and was successfully applied to the analysis of a complex model food system, chocolate put in contact with 14 polycarbonate tableware samples. A new procedure for sample pre-treatment was developed and validated, showing high reliability. The results report, for the first time, the presence of several molecules migrating to chocolate, in particular plastic additives such as Cyasorb UV5411, Tinuvin 234, Uvitex OB, and oligomers, whose amounts were found to be correlated with the age and degree of damage of the containers. Copyright © 2017 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Dethlefsen, Frank; Tilmann Pfeiffer, Wolf; Schäfer, Dirk
2016-04-01
Numerical simulations of hydraulic, thermal, geomechanical, or geochemical (THMC) processes in the subsurface have been conducted for decades. Often, such simulations are commenced by applying a parameter set that is as realistic as possible. Then, a base scenario is calibrated on field observations. Finally, scenario simulations can be performed, for instance to forecast the system behavior after varying input data. In the context of subsurface energy and mass storage, however, such model calibrations based on field data are often not available, as these storage operations have not yet been carried out. Consequently, the numerical models rely solely on the initially selected parameter set, and uncertainties arising from a lack of parameter values or of process understanding may not be perceivable, let alone quantifiable. Therefore, conducting THMC simulations in the context of energy and mass storage deserves a particular review of the model parameterization and its input data, and such a review so far hardly exists to the required extent. Variability or aleatory uncertainty exists for geoscientific parameter values in general, and parameters for which numerous data points are available, such as aquifer permeabilities, may be described statistically, thereby exhibiting statistical uncertainty. In this case, sensitivity analyses can be conducted to quantify the uncertainty in the simulation resulting from varying such a parameter. For other parameters, the lack of data quantity and quality implies a fundamental change in the ongoing processes when the parameter value is varied in numerical scenario simulations. As an example of such a scenario uncertainty, varying the capillary entry pressure, one of the multiphase flow parameters, can either allow or completely inhibit the penetration of an aquitard by gas. As a last example, the uncertainty of cap-rock fault permeabilities, and consequently of potential leakage rates of stored gases into shallow compartments, is regarded as recognized ignorance by the authors of this study, as no realistic approach exists to determine this parameter and values are best guesses only. In addition to these aleatory uncertainties, an equivalent classification is possible for rating epistemic uncertainties, which describe the degree of understanding of processes such as the geochemical and hydraulic effects following potential gas intrusions from deeper reservoirs into shallow aquifers. As an outcome of this grouping of uncertainties, prediction errors of scenario simulations can be calculated by sensitivity analyses if the uncertainties are identified as statistical. However, if scenario uncertainties exist, or recognized ignorance has to be attested to a parameter or process in question, the outcomes of simulations depend mainly on the modeler's decisions in choosing parameter values or in interpreting which processes occur. In that case, the informative value of numerical simulations is limited by ambiguous simulation results, which cannot be refined without improving the geoscientific database through laboratory or field studies on a longer-term basis, so that the effects of subsurface use may be predicted realistically. This discussion, amended by a compilation of available geoscientific data to parameterize such simulations, is presented in this study.
Exploring cosmic origins with CORE: Cosmological parameters
NASA Astrophysics Data System (ADS)
Di Valentino, E.; Brinckmann, T.; Gerbino, M.; Poulin, V.; Bouchet, F. R.; Lesgourgues, J.; Melchiorri, A.; Chluba, J.; Clesse, S.; Delabrouille, J.; Dvorkin, C.; Forastieri, F.; Galli, S.; Hooper, D. C.; Lattanzi, M.; Martins, C. J. A. P.; Salvati, L.; Cabass, G.; Caputo, A.; Giusarma, E.; Hivon, E.; Natoli, P.; Pagano, L.; Paradiso, S.; Rubiño-Martin, J. A.; Achúcarro, A.; Ade, P.; Allison, R.; Arroja, F.; Ashdown, M.; Ballardini, M.; Banday, A. J.; Banerji, R.; Bartolo, N.; Bartlett, J. G.; Basak, S.; Baumann, D.; de Bernardis, P.; Bersanelli, M.; Bonaldi, A.; Bonato, M.; Borrill, J.; Boulanger, F.; Bucher, M.; Burigana, C.; Buzzelli, A.; Cai, Z.-Y.; Calvo, M.; Carvalho, C. S.; Castellano, G.; Challinor, A.; Charles, I.; Colantoni, I.; Coppolecchia, A.; Crook, M.; D'Alessandro, G.; De Petris, M.; De Zotti, G.; Diego, J. M.; Errard, J.; Feeney, S.; Fernandez-Cobos, R.; Ferraro, S.; Finelli, F.; de Gasperis, G.; Génova-Santos, R. T.; González-Nuevo, J.; Grandis, S.; Greenslade, J.; Hagstotz, S.; Hanany, S.; Handley, W.; Hazra, D. K.; Hernández-Monteagudo, C.; Hervias-Caimapo, C.; Hills, M.; Kiiveri, K.; Kisner, T.; Kitching, T.; Kunz, M.; Kurki-Suonio, H.; Lamagna, L.; Lasenby, A.; Lewis, A.; Liguori, M.; Lindholm, V.; Lopez-Caniego, M.; Luzzi, G.; Maffei, B.; Martin, S.; Martinez-Gonzalez, E.; Masi, S.; Matarrese, S.; McCarthy, D.; Melin, J.-B.; Mohr, J. J.; Molinari, D.; Monfardini, A.; Negrello, M.; Notari, A.; Paiella, A.; Paoletti, D.; Patanchon, G.; Piacentini, F.; Piat, M.; Pisano, G.; Polastri, L.; Polenta, G.; Pollo, A.; Quartin, M.; Remazeilles, M.; Roman, M.; Ringeval, C.; Tartari, A.; Tomasi, M.; Tramonte, D.; Trappe, N.; Trombetti, T.; Tucker, C.; Väliviita, J.; van de Weygaert, R.; Van Tent, B.; Vennin, V.; Vermeulen, G.; Vielva, P.; Vittorio, N.; Young, K.; Zannoni, M.
2018-04-01
We forecast the main cosmological parameter constraints achievable with the CORE space mission which is dedicated to mapping the polarisation of the Cosmic Microwave Background (CMB). CORE was recently submitted in response to ESA's fifth call for medium-sized mission proposals (M5). Here we report the results from our pre-submission study of the impact of various instrumental options, in particular the telescope size and sensitivity level, and review the great, transformative potential of the mission as proposed. Specifically, we assess the impact on a broad range of fundamental parameters of our Universe as a function of the expected CMB characteristics, with other papers in the series focusing on controlling astrophysical and instrumental residual systematics. In this paper, we assume that only a few central CORE frequency channels are usable for our purpose, all others being devoted to the cleaning of astrophysical contaminants. On the theoretical side, we assume ΛCDM as our general framework and quantify the improvement provided by CORE over the current constraints from the Planck 2015 release. We also study the joint sensitivity of CORE and of future Baryon Acoustic Oscillation and Large Scale Structure experiments like DESI and Euclid. Specific constraints on the physics of inflation are presented in another paper of the series. In addition to the six parameters of the base ΛCDM, which describe the matter content of a spatially flat universe with adiabatic and scalar primordial fluctuations from inflation, we derive the precision achievable on parameters like those describing curvature, neutrino physics, extra light relics, primordial helium abundance, dark matter annihilation, recombination physics, variation of fundamental constants, dark energy, modified gravity, reionization and cosmic birefringence. In addition to assessing the improvement on the precision of individual parameters, we also forecast the post-CORE overall reduction of the allowed parameter space with figures of merit for various models increasing by as much as ~ 10^7 as compared to Planck 2015, and 10^5 with respect to Planck 2015 + future BAO measurements.
A GC-MS method for the detection and quantitation of ten major drugs of abuse in human hair samples.
Orfanidis, A; Mastrogianni, O; Koukou, A; Psarros, G; Gika, H; Theodoridis, G; Raikos, N
2017-03-15
A sensitive analytical method has been developed in order to identify and quantify major drugs of abuse (DOA), namely morphine, codeine, 6-monoacetylmorphine, cocaine, ecgonine methyl ester, benzoylecgonine, amphetamine, methamphetamine, methylenedioxymethamphetamine and methylenedioxyamphetamine, in human hair. Samples of hair were extracted with methanol under ultrasonication at 50°C after a three-step rinsing process to remove external contamination and dirt from the hair. Derivatization with BSTFA was selected in order to increase the detection sensitivity of the GC/MS analysis. Optimization of the derivatization parameters was based on experiments for the selection of derivatization time, temperature and volume of derivatizing agent. Validation of the method included evaluation of linearity, which ranged from 2 to 350 ng/mg of hair mean concentration for all DOA, and evaluation of sensitivity, accuracy, precision and repeatability. Limits of detection ranged from 0.05 to 0.46 ng/mg of hair. The developed method was applied to the analysis of hair samples obtained from three human subjects, which were found positive for cocaine and opiates. Published by Elsevier B.V.
Huijbregts, Mark A J; Gilijamse, Wim; Ragas, Ad M J; Reijnders, Lucas
2003-06-01
The evaluation of uncertainty is relatively new in environmental life-cycle assessment (LCA). It provides useful information to assess the reliability of LCA-based decisions and to guide future research toward reducing uncertainty. Most uncertainty studies in LCA quantify only one type of uncertainty, i.e., uncertainty due to input data (parameter uncertainty). However, LCA outcomes can also be uncertain due to normative choices (scenario uncertainty) and the mathematical models involved (model uncertainty). The present paper outlines a new methodology that quantifies parameter, scenario, and model uncertainty simultaneously in environmental life-cycle assessment. The procedure is illustrated in a case study that compares two insulation options for a Dutch one-family dwelling. Parameter uncertainty was quantified by means of Monte Carlo simulation. Scenario and model uncertainty were quantified by resampling different decision scenarios and model formulations, respectively. Although scenario and model uncertainty were not quantified comprehensively, the results indicate that both types of uncertainty influence the case study outcomes. This stresses the importance of quantifying parameter, scenario, and model uncertainty simultaneously. The two insulation options studied were found to have significantly different impact scores for global warming, stratospheric ozone depletion, and eutrophication. The thickest insulation option has the lowest impact on global warming and eutrophication, and the highest impact on stratospheric ozone depletion.
NASA Astrophysics Data System (ADS)
Dufoyer, A.; Lecoq, N.; Massei, N.; Marechal, J. C.
2017-12-01
Physics-based modeling of karst systems remains almost impossible without sufficiently accurate information about the inner physical characteristics. Usually, the only available hydrodynamic information is the flow rate at the karst outlet. Numerous works in the past decades have used, and proven the usefulness of, time-series analysis and spectral techniques applied to spring flow, precipitation or even physico-chemical parameters for interpreting karst hydrological functioning. However, identifying or interpreting the physical features of karst systems that control the statistical or spectral characteristics of spring flow variations is still challenging, not to say sometimes controversial. The main objective of this work is to determine how the statistical and spectral characteristics of the hydrodynamic signal at karst springs can be related to inner physical and hydraulic properties. In order to address this issue, we undertake an empirical approach based on the use of both distributed and physics-based models, and on synthetic system responses. The first step of the research is to conduct a sensitivity analysis of time-series/spectral methods to karst hydraulic and physical properties. For this purpose, forward modeling of flow through several simple, constrained and synthetic cases in response to precipitation is undertaken. It allows us to quantify how the statistical and spectral characteristics of flow at the outlet are sensitive to changes (i) in conduit geometries and (ii) in hydraulic parameters of the system (matrix/conduit exchange rate, matrix hydraulic conductivity and storativity). The flow differential equations solved by MARTHE, a computer code developed by the BRGM, allow modeling of karst conduits. From signal processing on the simulated spring responses, we hope to determine whether specific frequencies are always modified, using Fourier series and multi-resolution analysis. We also hope to quantify, with auto-correlation analysis, which parameters produce the most variability: first results seem to show larger variations due to conduit conductivity than due to the matrix/conduit exchange rate. Future steps will use another computer code, based on a double-continuum approach and allowing turbulent conduit flow, and will model a natural system.
Covariance specification and estimation to improve top-down Green House Gas emission estimates
NASA Astrophysics Data System (ADS)
Ghosh, S.; Lopez-Coto, I.; Prasad, K.; Whetstone, J. R.
2015-12-01
The National Institute of Standards and Technology (NIST) operates the North-East Corridor (NEC) project and the Indianapolis Flux Experiment (INFLUX) in order to develop measurement methods to quantify sources of greenhouse gas (GHG) emissions, as well as their uncertainties, in urban domains using a top-down inversion method. Top-down inversion updates prior knowledge using observations in a Bayesian way. One primary consideration in a Bayesian inversion framework is the covariance structure of (1) the emission prior residuals and (2) the observation residuals (i.e. the difference between observations and model-predicted observations). These covariance matrices are referred to as the prior covariance matrix and the model-data mismatch covariance matrix, respectively. It is known that the choice of these covariances can have a large effect on estimates. The main objective of this work is to determine the impact of different covariance models on inversion estimates and their associated uncertainties in urban domains. We use a pseudo-data Bayesian inversion framework using footprints (i.e. sensitivities of tower measurements of GHGs to surface emissions) and emission priors (based on the Hestia project to quantify fossil-fuel emissions) to estimate posterior emissions using different covariance schemes. The posterior emission estimates and uncertainties are compared to the hypothetical truth. We find that, if we correctly specify spatial variability and spatio-temporal variability in the prior and model-data mismatch covariances respectively, then we can compute more accurate posterior estimates. We discuss a few covariance models to introduce space-time interacting mismatches, along with estimation of the involved parameters. We then compare several candidate prior spatial covariance models from the Matern covariance class and estimate their parameters with specified mismatches. We find that the best-fitted prior covariances are not always best at recovering the truth. To achieve accuracy, we perform a sensitivity study to further tune the covariance parameters. Finally, we introduce a shrinkage-based sample covariance estimation technique for both prior and mismatch covariances. This technique allows us to achieve similar accuracy nonparametrically, in a more efficient and automated way.
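As an illustration of the covariance building blocks mentioned above, the sketch below (Python; parameter values and the identity shrinkage target are illustrative assumptions, not taken from the NIST study) constructs a Matern-class spatial covariance from site coordinates and applies a simple shrinkage estimator that pulls a sample covariance toward a scaled identity; a production implementation would also estimate the Matern range and smoothness and the shrinkage intensity from data (e.g. a Ledoit-Wolf-type rule).

import numpy as np
from scipy.spatial.distance import cdist
from scipy.special import kv, gamma

def matern_covariance(coords, sigma2=1.0, length=10.0, nu=1.5):
    # Matern covariance matrix for point locations; sigma2, length, nu are illustrative values.
    d = cdist(coords, coords)
    d[d == 0.0] = 1e-10                      # avoid the 0/0 limit at zero distance
    scaled = np.sqrt(2.0 * nu) * d / length
    cov = sigma2 * (2.0 ** (1.0 - nu) / gamma(nu)) * scaled ** nu * kv(nu, scaled)
    np.fill_diagonal(cov, sigma2)            # exact variance on the diagonal
    return cov

def shrinkage_covariance(samples, alpha=0.1):
    # Shrink the sample covariance toward a scaled identity target (alpha is illustrative).
    s = np.cov(samples, rowvar=False)
    target = np.trace(s) / s.shape[0] * np.eye(s.shape[0])
    return (1.0 - alpha) * s + alpha * target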
NASA Astrophysics Data System (ADS)
Wentworth, Mami Tonoe
Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and to propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined from the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of a nuclear reactor model. We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through direct numerical evaluation of Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes' formula. We also perform similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ the energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs.
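A minimal sketch of the adaptive Metropolis idea underlying DRAM (without the delayed-rejection stage) is shown below, assuming only a user-supplied log-posterior; the 2.4^2/d proposal scaling follows Haario et al., and the chain length and adaptation start are illustrative settings rather than those used in the dissertation.

import numpy as np

def adaptive_metropolis(log_post, theta0, n_iter=20000, adapt_start=1000, eps=1e-8):
    # Haario-style adaptive Metropolis: the proposal covariance is learned from the chain history.
    d = len(theta0)
    sd = 2.4 ** 2 / d                           # standard adaptive-Metropolis scaling factor
    chain = np.zeros((n_iter, d))
    chain[0] = theta0
    lp = log_post(theta0)
    cov = 0.1 * np.eye(d)                       # initial (non-adapted) proposal covariance
    for i in range(1, n_iter):
        if i > adapt_start:
            cov = sd * (np.cov(chain[:i].T) + eps * np.eye(d))
        prop = np.random.multivariate_normal(chain[i - 1], cov)
        lp_prop = log_post(prop)
        if np.log(np.random.rand()) < lp_prop - lp:   # Metropolis accept/reject step
            chain[i], lp = prop, lp_prop
        else:
            chain[i] = chain[i - 1]
    return chain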
We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. To accommodate the nonlinear input to output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affect the model response, and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters whereas subspace selection identifies a linear combination of parameters that impacts the model responses significantly. We employ active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.
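The sketch below shows the usual Monte Carlo construction of an active subspace from gradient samples (an eigendecomposition of the average outer product of gradients, in the spirit of the approach cited as [22]); the gradient function and the sampling of the input space are assumed to be supplied by the user.

import numpy as np

def active_subspace(grad_f, sample_inputs, k=2):
    # Estimate a k-dimensional active subspace from gradient samples.
    grads = np.array([grad_f(x) for x in sample_inputs])   # shape (n_samples, n_params)
    c_hat = grads.T @ grads / grads.shape[0]               # Monte Carlo estimate of E[grad grad^T]
    eigvals, eigvecs = np.linalg.eigh(c_hat)
    order = np.argsort(eigvals)[::-1]                      # sort eigenvalues in descending order
    return eigvals[order], eigvecs[:, order[:k]]           # spectrum and active-subspace basis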
Optimization of enzyme parameters for fermentative production of biorenewable fuels and chemicals
Jarboe, Laura R.; Liu, Ping; Kautharapu, Kumar Babu; Ingram, Lonnie O.
2012-01-01
Microbial biocatalysts such as Escherichia coli and Saccharomyces cerevisiae have been extensively subjected to metabolic engineering for the fermentative production of biorenewable fuels and chemicals. This often entails the introduction of new enzymes, deletion of unwanted enzymes and efforts to fine-tune enzyme abundance in order to attain the desired strain performance. Enzyme performance can be quantitatively described in terms of the Michaelis-Menten-type parameters Km, turnover number kcat and Ki, which roughly describe the affinity of an enzyme for its substrate, the speed of the reaction and the enzyme's sensitivity to inhibition by regulatory molecules. Here we describe examples where knowledge of these parameters has been used to select, evolve or engineer enzymes for the desired performance and has enabled increased production of biorenewable fuels and chemicals. Examples include production of ethanol, isobutanol, 1-butanol and tyrosine, and furfural tolerance. The Michaelis-Menten parameters can also be used to judge the cofactor dependence of enzymes and quantify their preference for NADH or NADPH. Similarly, enzymes can be selected, evolved or engineered for the preferred cofactor. Examples of exporter engineering and selection are also discussed in the context of production of malate, valine and limonene. PMID:24688665
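For reference, these parameters enter the Michaelis-Menten rate law v = Vmax·[S]/(Km + [S]), with Vmax = kcat·[E] and, for competitive inhibition, an apparent Km of Km·(1 + [I]/Ki). A minimal sketch (function and argument names are illustrative):

def michaelis_menten_rate(s, vmax, km, i=0.0, ki=None):
    # Michaelis-Menten rate; if an inhibitor concentration i and Ki are given,
    # competitive inhibition is modeled through an apparent (increased) Km.
    km_app = km * (1.0 + i / ki) if ki else km
    return vmax * s / (km_app + s)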
NASA Astrophysics Data System (ADS)
Taverniers, Søren; Tartakovsky, Daniel M.
2017-11-01
Predictions of the total energy deposited into a brain tumor through X-ray irradiation are notoriously error-prone. We investigate how this predictive uncertainty is affected by uncertainty in both the location of the region occupied by a dose-enhancing iodinated contrast agent and the agent's concentration. This is done within the probabilistic framework in which these uncertain parameters are modeled as random variables. We employ the stochastic collocation (SC) method to estimate statistical moments of the deposited energy in terms of statistical moments of the random inputs, and the global sensitivity analysis (GSA) to quantify the relative importance of uncertainty in these parameters on the overall predictive uncertainty. A nonlinear radiation-diffusion equation dramatically magnifies the coefficient of variation of the uncertain parameters, yielding a large coefficient of variation for the predicted energy deposition. This demonstrates that accurate prediction of the energy deposition requires a proper treatment of even small parametric uncertainty. Our analysis also reveals that SC outperforms standard Monte Carlo, but its relative efficiency decreases as the number of uncertain parameters increases from one to three. A robust GSA ameliorates this problem by reducing this number.
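A minimal sketch of tensor-product stochastic collocation for the first two moments of a model with independent standard-normal inputs is shown below; the radiation-diffusion model itself and the mapping from physical parameters to standard-normal variables are assumed to be supplied by the user, and sparse grids (or the Monte Carlo comparison discussed above) become preferable as the number of uncertain parameters grows.

import numpy as np
from itertools import product

def collocation_moments(model, n_nodes=7, dim=2):
    # Mean and variance of model(xi) for i.i.d. standard-normal xi via Gauss-Hermite collocation.
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
    weights = weights / np.sqrt(2.0 * np.pi)      # normalize quadrature weights to a probability measure
    mean, second = 0.0, 0.0
    for idx in product(range(n_nodes), repeat=dim):
        x = nodes[list(idx)]
        w = np.prod(weights[list(idx)])
        y = model(x)
        mean += w * y
        second += w * y ** 2
    return mean, second - mean ** 2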
The significance of parameter uncertainties for the prediction of offshore pile driving noise.
Lippert, Tristan; von Estorff, Otto
2014-11-01
Due to the construction of offshore wind farms and its potential effect on marine wildlife, the numerical prediction of pile driving noise over long ranges has recently gained importance. In this contribution, a coupled finite element/wavenumber integration model for noise prediction is presented and validated by measurements. The ocean environment, especially the sea bottom, can only be characterized with limited accuracy in terms of input parameters for the numerical model at hand. Therefore the effect of these parameter uncertainties on the prediction of sound pressure levels (SPLs) in the water column is investigated by a probabilistic approach. In fact, a variation of the bottom material parameters by means of Monte-Carlo simulations shows significant effects on the predicted SPLs. A sensitivity analysis of the model with respect to the single quantities is performed, as well as a global variation. Based on the latter, the probability distribution of the SPLs at an exemplary receiver position is evaluated and compared to measurements. The aim of this procedure is to develop a model to reliably predict an interval for the SPLs, by quantifying the degree of uncertainty of the SPLs with the MC simulations.
Effects of snowmelt on watershed transit time distributions
NASA Astrophysics Data System (ADS)
Fang, Z.; Carroll, R. W. H.; Harman, C. J.; Wilusz, D. C.; Schumer, R.
2017-12-01
Snowmelt is the principal control of the timing and magnitude of water flow through alpine watersheds, but the streamflow generated may be displaced groundwater. To quantify this effect, we use a rank StorAge Selection (rSAS) model to estimate time-dependent travel time distributions (TTDs) for the East River Catchment (ERC, 84 km2), a headwater basin of the Colorado River newly designated as the Lawrence Berkeley National Laboratory's Watershed Function Science Focus Area (SFA). Through the SFA, observational networks related to precipitation and stream fluxes have been established with a focus on environmental tracers and stable isotopes. The United States Geological Survey Precipitation Runoff Modeling System (PRMS) was used to estimate spatially and temporally variable boundary fluxes of effective precipitation (snowmelt and rain), evapotranspiration, and subsurface storage. The DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm was used to calibrate the rSAS model to observed stream isotopic concentration data and to quantify uncertainty. The sensitivity of the simulated TTDs to systematic changes in the boundary fluxes was explored. Different PRMS and rSAS model parameter setups were tested to explore how they affect the relationship between input precipitation, especially snowmelt, and the estimated TTDs. Wavelet Coherence Analysis (WCA) was applied to investigate the seasonality of the TTD simulations. Our ultimate goal is insight into how the Colorado River headwater catchments store and route water, and how sensitive flow paths and transit times are to climatic changes.
NASA Astrophysics Data System (ADS)
Desbree, A.; Pain, F.; Gurden, H.; Pinot, L.; Grenier, D.; Zimmer, L.; Mastrippolito, R.; Laniece, P.
2005-10-01
Elucidating complex physiological mechanisms in small animals in vivo requires the development of new investigatory techniques, including imaging with multiple modalities. Combining exploratory techniques has the tremendous advantage of recording complementary parameters simultaneously on the same animal. In this field, an exciting challenge remains in the combination of nuclear magnetic resonance (NMR) and positron emission tomography (PET), since small animal studies are limited by strict technical constraints in vivo. Coupling NMR with a radiosensitive β-MicroProbe therefore offers an interesting technical alternative. To assess the feasibility of this new dual-modality system, we designed theoretical and experimental approaches to test the ability of the β-MicroProbe to quantify radioactivity concentration in an intense magnetic field. In an initial step, simulations were carried out using Geant4. First, we evaluated the influence of a magnetic field on the probe detection volume. Then, the detection sensitivity and energy response of the probe were quantified. In a second step, experiments were run within a 7-T magnet to confirm our simulation results. We showed that using the probe in magnetic fields leads to a slight attenuation in sensitivity and an increase in the scintillation light yield. These data demonstrate the feasibility of combining NMR with the β-MicroProbe.
Baker, Ronald J.; Reilly, Timothy J.; Lopez, Anthony R.; Romanok, Kristin M.; Wengrowski, Edward W
2015-01-01
A screening tool for quantifying levels of concern for contaminants detected in monitoring wells on or near landfills and transported to down-gradient receptors (streams, wetlands and residential lots) was developed and evaluated. The tool uses Quick Domenico Multi-scenario (QDM), a spreadsheet implementation of Domenico-based solute transport, to estimate concentrations of contaminants reaching receptors under steady-state conditions from a constant-strength source. Unlike most other available Domenico-based model applications, QDM calculates the time for down-gradient contaminant concentrations to approach steady state and appropriate dispersivity values, and allows for up to fifty simulations on a single spreadsheet. The sensitivity of QDM solutions to critical model parameters was quantified. The screening tool uses QDM results to categorize landfills as having high, moderate and low levels of concern, based on contaminant concentrations reaching receptors relative to regulatory concentrations. The application of this tool was demonstrated by assessing levels of concern (as defined by the New Jersey Pinelands Commission) for thirty closed, uncapped landfills in the New Jersey Pinelands National Reserve, using historic water-quality data from monitoring wells on and near landfills and hydraulic parameters from regional flow models. Twelve of these landfills are categorized as having high levels of concern, indicating a need for further assessment. This tool is not a replacement for conventional numerically based transport models or other available Domenico-based applications, but is suitable for quickly assessing the level of concern posed by a landfill or other contaminant point source before expensive and lengthy monitoring or remediation measures are taken. In addition to quantifying the level of concern using historic groundwater-monitoring data, the tool allows for archiving model scenarios and adding refinements as new data become available.
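For orientation, a commonly cited textbook form of the steady-state Domenico centerline solution with first-order decay is sketched below; the geometric factors in the source terms differ between references (e.g. a factor of 2 rather than 4 in the vertical term when the source sits at the water table), so this is an illustrative approximation and not the exact QDM formulation.

import numpy as np
from scipy.special import erf

def domenico_steady_centerline(x, v, ax, ay, az, lam, c0, src_width, src_depth):
    # x: downgradient distance; v: seepage velocity; ax, ay, az: dispersivities;
    # lam: first-order decay rate; c0: source concentration; src_width/src_depth: source geometry.
    decay = np.exp((x / (2.0 * ax)) * (1.0 - np.sqrt(1.0 + 4.0 * lam * ax / v)))
    lateral = erf(src_width / (4.0 * np.sqrt(ay * x)))
    vertical = erf(src_depth / (4.0 * np.sqrt(az * x)))
    return c0 * decay * lateral * vertical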
Swem, Lee R.; Swem, Danielle L.; Wingreen, Ned S.; Bassler, Bonnie L.
2008-01-01
Quorum sensing, a process of bacterial cell-cell communication, relies on production, detection, and response to autoinducer signaling molecules. Here we focus on LuxN, a nine-transmembrane-domain protein from Vibrio harveyi, and the founding example of membrane-bound receptors for acyl-homoserine lactone (AHL) autoinducers. Previously, nothing was known about signal recognition by membrane-bound AHL receptors. We used mutagenesis and suppressor analyses to identify the AHL-binding domain of LuxN, and discovered LuxN mutants that confer decreased and increased AHL sensitivity. Our analysis of dose-response curves of multiple LuxN mutants pins these inverse phenotypes on quantifiable opposing shifts in the free-energy bias of LuxN for its kinase and phosphatase states. To extract signaling parameters, we exploited a strong LuxN antagonist, one of fifteen small-molecule antagonists we identified. We find that quorum-sensing-mediated communication can be manipulated positively and negatively to control bacterial behavior, and that signaling parameters can be deduced from in vivo data. PMID:18692469
Experimental determination of the correlation properties of plasma turbulence using 2D BES systems
NASA Astrophysics Data System (ADS)
Fox, M. F. J.; Field, A. R.; van Wyk, F.; Ghim, Y.-c.; Schekochihin, A. A.; the MAST Team
2017-04-01
A procedure is presented to map from the spatial correlation parameters of a turbulent density field (the radial and binormal correlation lengths and wavenumbers, and the fluctuation amplitude) to correlation parameters that would be measured by a beam emission spectroscopy (BES) diagnostic. The inverse mapping is also derived, which results in resolution criteria for recovering correct correlation parameters, depending on the spatial response of the instrument quantified in terms of point-spread functions (PSFs). Thus, a procedure is presented that allows for a systematic comparison between theoretical predictions and experimental observations. This procedure is illustrated using the Mega-Ampere Spherical Tokamak BES system and the validity of the underlying assumptions is tested on fluctuating density fields generated by direct numerical simulations using the gyrokinetic code GS2. The measurement of the correlation time, by means of the cross-correlation time-delay method, is also investigated and is shown to be sensitive to the fluctuating radial component of velocity, as well as to small variations in the spatial properties of the PSFs.
NASA Technical Reports Server (NTRS)
Whiteman, D.N.; Veselovskii, I.; Kolgotin, A.; Korenskii, M.; Andrews, E.
2008-01-01
The feasibility of using a multi-wavelength Mie-Raman lidar based on a tripled Nd:YAG laser for profiling aerosol physical parameters in the planetary boundary layer (PBL) under varying conditions of relative humidity (RH) is studied. The lidar quantifies three aerosol backscattering and two extinction coefficients and from these optical data the particle parameters such as concentration, size and complex refractive index are retrieved through inversion with regularization. The column-integrated, lidar-derived parameters are compared with results from the AERONET sun photometer. The lidar and sun photometer agree well in the characterization of the fine mode parameters, however the lidar shows less sensitivity to coarse mode. The lidar results reveal a strong dependence of particle properties on RH. The height regions with enhanced RH are characterized by an increase of backscattering and extinction coefficient and a decrease in the Angstrom exponent coinciding with an increase in the particle size. We present data selection techniques useful for selecting cases that can support the calculation of hygroscopic growth parameters using lidar. Hygroscopic growth factors calculated using these techniques agree with expectations despite the lack of co-located radiosonde data. Despite this limitation, the results demonstrate the potential of multi-wavelength Raman lidar technique for study of aerosol humidification process.
Solon, Kimberly; Flores-Alsina, Xavier; Gernaey, Krist V; Jeppsson, Ulf
2015-01-01
This paper examines the importance of influent fractionation, kinetic, stoichiometric and mass transfer parameter uncertainties when modeling biogas production in wastewater treatment plants. The anaerobic digestion model no. 1, implemented in the plant-wide context provided by the benchmark simulation model no. 2, is used to quantify the generation of CH₄, H₂ and CO₂. A comprehensive global sensitivity analysis based on (i) standardized regression coefficients (SRC) and (ii) Morris screening (MS) elementary effects reveals the set of parameters that influence the biogas production uncertainty the most. This analysis is repeated for (i) different temperature regimes and (ii) different solids retention times (SRTs) in the anaerobic digester. Results show that both SRC and MS are good measures of sensitivity unless the anaerobic digester is operating at low SRT under mesophilic conditions. In the latter situation, and due to the intrinsic nonlinearities of the system, SRC fails to decompose the variance of the model predictions (R² < 0.7), making MS a more reliable method. At high SRT, influent fractionations are the most influential parameters for predictions of CH₄ and CO₂ emissions. Nevertheless, when the anaerobic digester volume is decreased (for the same load), the role of acetate degraders gains more importance under mesophilic conditions, while lipid and fatty acid metabolism is more influential under thermophilic conditions. The paper ends with a critical discussion of the results and their implications for model calibration and validation exercises.
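A minimal sketch of the SRC computation used in such a global sensitivity analysis is shown below; the Monte Carlo input matrix and model outputs are assumed to come from the ADM1/BSM2 simulations, and the accompanying R² indicates whether the linear decomposition, and hence SRC itself, is trustworthy (consistent with the R² < 0.7 caveat above).

import numpy as np

def standardized_regression_coefficients(x, y):
    # x: (n_samples, n_params) Monte Carlo inputs; y: (n_samples,) model outputs.
    xs = (x - x.mean(axis=0)) / x.std(axis=0, ddof=1)
    ys = (y - y.mean()) / y.std(ddof=1)
    beta, *_ = np.linalg.lstsq(xs, ys, rcond=None)               # SRCs = coefficients on standardized data
    r2 = 1.0 - np.sum((ys - xs @ beta) ** 2) / np.sum(ys ** 2)   # fraction of output variance explained
    return beta, r2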
NASA Astrophysics Data System (ADS)
Roten, D.; Hogue, S.; Spell, P.; Marland, E.; Marland, G.
2017-12-01
There is an increasing role for high resolution, CO2 emissions inventories across multiple arenas. The breadth of the applicability of high-resolution data is apparent from their use in atmospheric CO2 modeling, their potential for validation of space-based atmospheric CO2 remote-sensing, and the development of climate change policy. This work focuses on increasing our understanding of the uncertainty in these inventories and the implications on their downstream use. The industrial point sources of emissions (power generating stations, cement manufacturing plants, paper mills, etc.) used in the creation of these inventories often have robust emissions characteristics, beyond just their geographic location. Physical parameters of the emission sources such as number of exhaust stacks, stack heights, stack diameters, exhaust temperatures, and exhaust velocities, as well as temporal variability and climatic influences can be important in characterizing emissions. Emissions from large point sources can behave much differently than emissions from areal sources such as automobiles. For many applications geographic location is not an adequate characterization of emissions. This work demonstrates the sensitivities of atmospheric models to the physical parameters of large point sources and provides a methodology for quantifying parameter impacts at multiple locations across the United States. The sensitivities highlight the importance of location and timing and help to highlight potential aspects that can guide efforts to reduce uncertainty in emissions inventories and increase the utility of the models.
He, Li-hong; Wang, Hai-yan; Lei, Xiang-dong
2016-02-01
Models based on vegetation ecophysiological processes contain many parameters, and reasonable parameter values greatly improve simulation ability. Sensitivity analysis, as an important method to screen out the sensitive parameters, can comprehensively analyze how model parameters affect the simulation results. In this paper, we conducted a parameter sensitivity analysis of the BIOME-BGC model with a case study of simulating the net primary productivity (NPP) of a Larix olgensis forest in Wangqing, Jilin Province. First, through a comparison of field measurement data and the simulation results, we tested the BIOME-BGC model's capability of simulating the NPP of the L. olgensis forest. Then, the Morris and EFAST sensitivity methods were used to screen the sensitive parameters that had a strong influence on NPP. On this basis, we also quantitatively estimated the sensitivity of the screened parameters and calculated the global, first-order and second-order sensitivity indices. The results showed that the BIOME-BGC model could simulate the NPP of the L. olgensis forest in the sample plot well. The Morris sensitivity method provided a reliable parameter sensitivity analysis result under the condition of a relatively small sample size. The EFAST sensitivity method could quantitatively measure the impact of a single parameter on the simulation result as well as the interaction between parameters in the BIOME-BGC model. The influential sensitive parameters for L. olgensis forest NPP were the new stem carbon to new leaf carbon allocation ratio and the leaf carbon to nitrogen ratio; the effect of their interaction was significantly greater than the other parameters' interaction effects.
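A hedged sketch of Morris screening using the third-party SALib package is shown below; the parameter names, bounds and placeholder NPP function are purely illustrative and would be replaced by a wrapper around actual BIOME-BGC runs, and an EFAST analysis can be set up analogously with SALib's fast sampler and analyzer.

import numpy as np
from SALib.sample import morris as morris_sample
from SALib.analyze import morris as morris_analyze

problem = {
    "num_vars": 3,
    "names": ["stem_to_leaf_alloc", "leaf_CN_ratio", "frac_leaf_N_rubisco"],  # illustrative names
    "bounds": [[0.5, 2.0], [20.0, 60.0], [0.02, 0.2]],                        # illustrative bounds
}

def npp_model(params):
    # Placeholder for a BIOME-BGC NPP run; replace with a call to the real model.
    alloc, cn, frac = params
    return 500.0 * frac * alloc / cn * 100.0

X = morris_sample.sample(problem, N=50, num_levels=4)
Y = np.array([npp_model(row) for row in X])
Si = morris_analyze.analyze(problem, X, Y, num_levels=4)
print(Si["mu_star"], Si["sigma"])   # mean of |elementary effects| and their spread, per parameter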
Zhang, Gu-Mu-Yang; Shi, Bing; Sun, Hao; Jin, Zheng-Yu; Xue, Hua-Dan
2017-09-01
To investigate the feasibility of using CT texture analysis (CTTA) to differentiate pheochromocytoma from lipid-poor adrenocortical adenoma (lp-ACA). Ninety-eight pheochromocytomas and 66 lp-ACAs were included in this retrospective study. CTTA was performed on unenhanced and enhanced images. Receiver operating characteristic (ROC) analysis was performed, and the area under the ROC curve (AUC) was calculated for texture parameters that were significantly different for the objective. Diagnostic accuracies were evaluated using the cutoff values of texture parameters with the highest AUCs. Compared to lp-ACAs, pheochromocytomas had significantly higher mean gray-level intensity (Mean), entropy, and mean of positive pixels (MPP), but lower skewness and kurtosis on unenhanced images (P < 0.001). On enhanced images, these texture-quantifiers followed a similar trend where Mean, entropy, and MPP were higher, but skewness and kurtosis were lower in pheochromocytomas. Standard deviation (SD) was also significantly higher in pheochromocytomas on enhanced images. Mean and MPP quantified from no filtration on unenhanced CT images yielded the highest AUC of 0.86 ± 0.03 (95% CI 0.81-0.91) at a cutoff value of 34.0 for Mean and MPP, respectively (sensitivity = 79.6%, specificity = 83.3%, accuracy = 81.1%). It was feasible to use CTTA to differentiate pheochromocytoma from lp-ACA.
Zhou, Bin; Zhao, Bin
2014-01-01
It is difficult to evaluate and compare interventions for reducing exposure to air pollutants, including polycyclic aromatic hydrocarbons (PAHs), a widely found air pollutant in both indoor and outdoor air. This study presents the first application of the Monte Carlo population exposure assessment model to quantify the effects of different intervention strategies on inhalation exposure to PAHs and the associated lung cancer risk. The method was applied to the population in Beijing, China, in the year 2006. Several intervention strategies were designed and studied, including atmospheric cleaning, smoking prohibition indoors, use of clean fuel for cooking, enhancing ventilation while cooking and use of indoor cleaners. Their performances were quantified by population attributable fraction (PAF) and potential impact fraction (PIF) of lung cancer risk, and the changes in indoor PAH concentrations and annual inhalation doses were also calculated and compared. The results showed that atmospheric cleaning and use of indoor cleaners were the two most effective interventions. The sensitivity analysis showed that several input parameters had major influence on the modeled PAH inhalation exposure and the rankings of different interventions. The ranking was reasonably robust for the remaining majority of parameters. The method itself can be extended to other pollutants and in different places. It enables the quantitative comparison of different intervention strategies and would benefit intervention design and relevant policy making.
NASA Astrophysics Data System (ADS)
Jena, S.
2015-12-01
The overexploitation of groundwater has resulted in the abandonment of many shallow tube wells in the river basin in Eastern India. For the sustainability of groundwater resources, basin-scale modelling of groundwater flow is essential for efficient planning and management of the water resources. The main intent of this study is to develop a 3-D groundwater flow model of the study basin using the Visual MODFLOW package and to calibrate and validate it using 17 years of observed data. A sensitivity analysis was carried out to quantify the susceptibility of the aquifer system to river bank seepage, recharge from rainfall and agricultural practices, horizontal and vertical hydraulic conductivities, and specific yield. To quantify the impact of parameter uncertainties, the Sequential Uncertainty Fitting algorithm (SUFI-2) and Markov chain Monte Carlo (MCMC) techniques were implemented. Results from the two techniques were compared and their advantages and disadvantages were analysed. The Nash-Sutcliffe efficiency coefficient (NSE) and the coefficient of determination (R2) were adopted as the two criteria during calibration and validation of the developed model. The NSE and R2 values of the groundwater flow model for the calibration and validation periods were in the acceptable range. Also, the MCMC technique provided more reasonable results than SUFI-2. The calibrated and validated model will be useful for identifying aquifer properties, analysing groundwater flow dynamics and forecasting changes in groundwater levels.
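The two calibration criteria can be computed directly from paired observed and simulated groundwater levels; a minimal sketch (the data arrays are assumed to be supplied by the user):

import numpy as np

def nash_sutcliffe(obs, sim):
    # NSE = 1 for a perfect fit; values below 0 mean the simulation is worse than the observed mean.
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r_squared(obs, sim):
    # Coefficient of determination as the squared Pearson correlation of observed and simulated values.
    return np.corrcoef(obs, sim)[0, 1] ** 2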
A mixing-model approach to quantifying sources of organic matter to salt marsh sediments
NASA Astrophysics Data System (ADS)
Bowles, K. M.; Meile, C. D.
2010-12-01
Salt marshes are highly productive ecosystems, where autochthonous production controls an intricate exchange of carbon and energy among organisms. The major sources of organic carbon to these systems include 1) autochthonous production of vascular plant matter, 2) import of allochthonous plant material, and 3) phytoplankton biomass. Quantifying the relative contribution of organic matter sources to a salt marsh is important for understanding the fate and transformation of organic carbon in these systems, which also impacts the timing and magnitude of carbon export to the coastal ocean. A common approach to quantify organic matter source contributions to mixtures is the use of linear mixing models. To estimate the relative contributions of endmember materials to total organic matter in the sediment, the problem is formulated as a constrained linear least-squares problem. However, the type of data that is utilized in such mixing models, the uncertainties in endmember compositions and the temporal dynamics of non-conservative entities can have varying effects on the results. Making use of a comprehensive data set that encompasses several endmember characteristics, including a yearlong degradation experiment, we study the impact of these factors on estimates of the origin of sedimentary organic carbon in a salt marsh located in the SE United States. We first evaluate the sensitivity of linear mixing models to the type of data employed by analyzing a series of mixing models that utilize various combinations of parameters (i.e. endmember characteristics such as δ13COC, C/N ratios or lignin content). Next, we assess the importance of using more than the minimum number of parameters required to estimate endmember contributions to the total organic matter pool. Then, we quantify the impact of data uncertainty on the outcome of the analysis using Monte Carlo simulations that account for the uncertainty in endmember characteristics. Finally, as biogeochemical processes can alter endmember characteristics over time, we investigate the effect of early diagenesis on the chosen parameters, an analysis that entails an assessment of the organic matter age distribution. Thus, estimates of the relative contributions of phytoplankton, C3 and C4 plants to bulk sediment organic matter depend not only on environmental characteristics that impact reactivity, but also on sediment mixing processes.
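A minimal sketch of the constrained linear least-squares mixing model is shown below, assuming the endmember signature matrix and the measured bulk values are supplied; the sum-to-one constraint is imposed as a heavily weighted extra equation, and the Monte Carlo treatment of endmember uncertainty described above would simply resample the signature matrix around this solver.

import numpy as np
from scipy.optimize import lsq_linear

def endmember_fractions(endmembers, mixture, weight=1e3):
    # endmembers: (n_tracers, n_sources) signatures (e.g. d13C, C/N, lignin) of each source;
    # mixture: (n_tracers,) measured bulk-sediment values.
    a = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])  # append sum-to-one row
    b = np.append(mixture, weight * 1.0)
    res = lsq_linear(a, b, bounds=(0.0, 1.0))                           # fractions bounded to [0, 1]
    return res.x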
Dupont, Anne-Laurence; Seemann, Agathe; Lavédrine, Bertrand
2012-01-30
A methodology for capillary electrophoresis/electrospray ionisation mass spectrometry (CE/ESI-MS) was developed for the simultaneous analysis of degradation products from paper among two families of compounds: low molar mass aliphatic organic acids, and aromatic (phenolic and furanic) compounds. The work comprises the optimisation of the CE separation and the ESI-MS parameters for improved sensitivity with model compounds using two successive designs of experiments. The method was applied to the analysis of lignocellulosic paper at different stages of accelerated hygrothermal ageing. The compounds of interest were identified. Most of them could be quantified and several additional analytes were separated. Copyright © 2011 Elsevier B.V. All rights reserved.
Lichte, F.E.
1995-01-01
A new method of analysis for rocks and soils is presented using laser ablation inductively coupled plasma mass spectrometry. It is based on a lithium borate fusion and the free-running mode of a Nd/YAG laser. An Ar/N2 sample gas improves sensitivity by a factor of about 7 for most elements. Sixty-three elements are characterized for the fusion, and 49 elements can be quantified. Internal standards and isotopic spikes ensure accurate results. Limits of detection are 0.01 μg/g for many trace elements. Accuracy approaches 5% for all elements. A new quality assurance procedure is presented that uses fundamental parameters to test relative response factors for the calibration.
Probabilistic assessment of smart composite structures
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Shiao, Michael C.
1994-01-01
A composite wing with spars and bulkheads is used to demonstrate the effectiveness of probabilistic assessment of smart composite structures to control uncertainties in distortions and stresses. Results show that a smart composite wing can be controlled to minimize distortions and to have specified stress levels in the presence of defects. Structural responses such as changes in angle of attack, vertical displacements, and stress in the control and controlled plies are probabilistically assessed to quantify their respective uncertainties. Sensitivity factors are evaluated to identify those parameters that have the greatest influence on a specific structural response. Results show that smart composite structures can be configured to control both distortions and ply stresses to satisfy specified design requirements.
Mission Analysis for High Specific Impulse Deep Space Exploration
NASA Technical Reports Server (NTRS)
Adams, Robert B.; Polsgrove, Tara; Brady, Hugh J. (Technical Monitor)
2002-01-01
This paper describes trajectory calculations for high specific impulse engines. Specific impulses on the order of 10,000 to 100,000 sec are predicted in a variety of fusion powered propulsion systems. This paper and its companion paper seek to build on analyses in the literature to yield an analytical routine for determining time of flight and payload fraction to a predetermined destination. The companion paper will compare the results of this analysis to the trajectories determined by several trajectory codes. The major parameters that affect time of flight and payload fraction will be identified and their sensitivities quantified. A review of existing fusion propulsion concepts and their capabilities will also be tabulated.
Determination of caffeic acid in wine using PEDOT film modified electrode.
Bianchini, C; Curulli, A; Pasquali, M; Zane, D
2014-08-01
A novel method using a PEDOT (poly(3,4-ethylenedioxythiophene)) modified electrode was developed for the determination of caffeic acid (CA) in wine. Cyclic voltammetry (CV) with the standard additions method was used to quantify the analyte at PEDOT-modified electrodes. PEDOT films were electrodeposited on a platinum electrode (Pt) in aqueous medium by the galvanostatic method, using sodium poly(styrene-4-sulfonate) (PSS) as electrolyte and surfactant. CV allows detection of the analyte over a wide concentration range (10.0 nmol l(-1) to 6.5 mmol l(-1)). The proposed electrochemical method showed good statistical and analytical parameters in terms of linearity range, LOD, LOQ and sensitivity. Copyright © 2014 Elsevier Ltd. All rights reserved.
Xi, Qing; Li, Zhao-Fu; Luo, Chuan
2014-05-01
Sensitivity analysis of hydrological and water quality parameters is of great significance for integrated model construction and application. Based on the mechanisms of the AnnAGNPS model, 31 parameters in four major categories (terrain, hydrology and meteorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian River watershed, a typical small watershed of the hilly region around Taihu Lake, and the perturbation method was then used to evaluate the sensitivity of the parameters with respect to the model's simulation results. The results showed that, among the 11 terrain parameters, LS was sensitive to all the model results, while RMN, RS and RVC were moderately to slightly sensitive to the sediment output but insensitive to the remaining results. For the hydrometeorological parameters, CN was more sensitive to runoff and sediment and relatively sensitive to the remaining results. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were less sensitive to sediment and particulate pollutants, and the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for the nitrogen and phosphorus nutrient outputs. For the soil parameters, K was quite sensitive to all results except runoff, and the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were less sensitive to the corresponding results. The runoff simulation and verification results for the Zhongtian watershed showed good accuracy, with deviations of less than 10% during 2005-2010. These results provide a direct reference for AnnAGNPS parameter selection and calibration; the runoff simulation results for the study area also showed that the sensitivity analysis was practicable for parameter adjustment, demonstrated the model's adaptability to hydrological simulation in the hilly region of the Taihu Lake basin, and provide a reference for wider application of the model in China.
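The perturbation method referred to above amounts to a relative sensitivity index, i.e. the fractional change in a model output per fractional change in one parameter with all others held at their base values; a minimal sketch (the model wrapper and parameter names are placeholders):

def relative_sensitivity(model, base_params, name, delta=0.1):
    # model: callable taking a dict of parameters and returning one output (e.g. runoff or sediment).
    y0 = model(base_params)
    perturbed = dict(base_params, **{name: base_params[name] * (1.0 + delta)})
    y1 = model(perturbed)
    return ((y1 - y0) / y0) / delta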
Knopman, Debra S.; Voss, Clifford I.
1987-01-01
The spatial and temporal variability of sensitivities has a significant impact on parameter estimation and sampling design for studies of solute transport in porous media. Physical insight into the behavior of sensitivities is offered through an analysis of analytically derived sensitivities for the one-dimensional form of the advection-dispersion equation. When parameters are estimated in regression models of one-dimensional transport, the spatial and temporal variability in sensitivities influences variance and covariance of parameter estimates. Several principles account for the observed influence of sensitivities on parameter uncertainty. (1) Information about a physical parameter may be most accurately gained at points in space and time with a high sensitivity to the parameter. (2) As the distance of observation points from the upstream boundary increases, maximum sensitivity to velocity during passage of the solute front increases and the consequent estimate of velocity tends to have lower variance. (3) The frequency of sampling must be “in phase” with the S shape of the dispersion sensitivity curve to yield the most information on dispersion. (4) The sensitivity to the dispersion coefficient is usually at least an order of magnitude less than the sensitivity to velocity. (5) The assumed probability distribution of random error in observations of solute concentration determines the form of the sensitivities. (6) If variance in random error in observations is large, trends in sensitivities of observation points may be obscured by noise and thus have limited value in predicting variance in parameter estimates among designs. (7) Designs that minimize the variance of one parameter may not necessarily minimize the variance of other parameters. (8) The time and space interval over which an observation point is sensitive to a given parameter depends on the actual values of the parameters in the underlying physical system.
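A minimal sketch of how such sensitivities can be evaluated for one-dimensional transport is given below, using the leading erfc term of the continuous-source advection-dispersion solution and central finite differences rather than the paper's closed-form derivatives; it reproduces, for example, the observation that sensitivity to the dispersion coefficient is typically much smaller than sensitivity to velocity.

import numpy as np
from scipy.special import erfc

def ade_concentration(x, t, v, d, c0=1.0):
    # Leading-term approximation of the 1-D advection-dispersion solution for a continuous source.
    return 0.5 * c0 * erfc((x - v * t) / (2.0 * np.sqrt(d * t)))

def sensitivity(x, t, v, d, param="v", rel_step=1e-4):
    # Central finite-difference sensitivity of concentration to v or d at a given (x, t).
    p = {"v": v, "d": d}
    h = p[param] * rel_step
    lo, hi = dict(p), dict(p)
    lo[param] -= h
    hi[param] += h
    return (ade_concentration(x, t, hi["v"], hi["d"]) -
            ade_concentration(x, t, lo["v"], lo["d"])) / (2.0 * h)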
Spatiotemporal radiotherapy planning using a global optimization approach
NASA Astrophysics Data System (ADS)
Adibi, Ali; Salari, Ehsan
2018-02-01
This paper aims at quantifying the extent of potential therapeutic gain, measured using biologically effective dose (BED), that can be achieved by altering the radiation dose distribution over treatment sessions in fractionated radiotherapy. To that end, a spatiotemporally integrated planning approach is developed, where the spatial and temporal dose modulations are optimized simultaneously. The concept of equivalent uniform BED (EUBED) is used to quantify and compare the clinical quality of spatiotemporally heterogeneous dose distributions in target and critical structures. This gives rise to a large-scale non-convex treatment-plan optimization problem, which is solved using global optimization techniques. The proposed spatiotemporal planning approach is tested on two stylized cancer cases resembling two different tumor sites and sensitivity analysis is performed for radio-biological and EUBED parameters. Numerical results validate that spatiotemporal plans are capable of delivering a larger BED to the target volume without increasing the BED in critical structures compared to conventional time-invariant plans. In particular, this additional gain is attributed to the irradiation of different regions of the target volume at different treatment sessions. Additionally, the trade-off between the potential therapeutic gain and the number of distinct dose distributions is quantified, which suggests a diminishing marginal gain as the number of dose distributions increases.
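The sketch below illustrates the kind of quantity being compared: a voxel-level biologically effective dose under the linear-quadratic model with session-dependent fraction doses, aggregated into an equivalent uniform BED. The generalized-mean form assumed for EUBED, the α/β value, the fractionation scheme, and the exponent are illustrative assumptions rather than the paper's actual formulation or data.

```python
import numpy as np

def bed(fraction_doses, alpha_beta):
    """Voxel BED under the linear-quadratic model, allowing a different
    dose in each treatment session (spatiotemporal plan)."""
    d = np.asarray(fraction_doses, dtype=float)
    return np.sum(d * (1.0 + d / alpha_beta))

def eubed(voxel_beds, a):
    """Equivalent uniform BED, assumed here to take the generalized-mean form of EUD."""
    b = np.asarray(voxel_beds, dtype=float)
    return np.mean(b ** a) ** (1.0 / a)

# Uniform plan (5 x 2 Gy everywhere) vs. a plan that alternates 3 Gy / 1 Gy between sessions:
# the physical dose is identical, but the time-varying plan delivers a larger BED.
uniform = [bed([2.0] * 5, alpha_beta=10.0) for _ in range(100)]
alternating = [bed([3.0, 1.0, 3.0, 1.0, 2.0], alpha_beta=10.0) for _ in range(100)]
print(eubed(uniform, a=-10), eubed(alternating, a=-10))  # a < 0 emphasizes cold spots in the target
```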
Zhang, D; Guan, Y; Fan, L; Xia, Y; Liu, S Y
2018-05-22
Objective: To quantify emphysema and air trapping on inspiratory and expiratory phase multi-slice spiral CT (MSCT) scanning in smokers without respiratory symptoms, and to analyze the correlation between CT quantifiable parameters and lung function parameters. Methods: A total of 72 smokers who underwent medical examinations from September 2013 to September 2016 in Changzheng Hospital were enrolled and divided into two groups: 24 smokers with COPD and 48 smokers without COPD. In addition, 39 non-smokers with normal pulmonary function were enrolled as controls. All subjects underwent double-phase MSCT scanning and pulmonary function tests. CT quantifiable parameters of emphysema included the low attenuation area below a threshold of -950 Hounsfield Units (HU) (LAA%(-950)), the lowest 15th percentile of the histogram of end-inspiratory attenuation values (P(15-IN)), the lowest 15th percentile of the histogram of end-expiratory attenuation values (P(15-EX)), relative volume change (RVC) and the expiratory to inspiratory ratio of mean lung density (E/I(MLD)). Pulmonary function parameters included forced expiratory volume in 1 second expressed as percent predicted (FEV(1)%), forced expiratory volume in one second to forced vital capacity ratio (FEV(1)/FVC), residual volume to total lung capacity ratio (RV/TLC) and carbon monoxide diffusion capacity corrected for alveolar volume (DLCO/VA). The differences in CT quantifiable parameters and pulmonary function parameters among the three groups were analyzed using one-way analysis of variance or the Kruskal-Wallis H test. The correlation between CT quantifiable parameters and pulmonary function parameters was analyzed using Spearman's correlation analysis. Results: The differences in LAA%(-950) (0.5%±0.7%, 0.7%±1.2% and 2.0%±2.4% for the controls, the smokers without COPD and the smokers with COPD, respectively), P(15-IN) ((-892±33), (-905±15) and (-907±22) HU, respectively), FEV(1)% (88.4%±8.8%, 84.2%±7.5% and 82.1%±8.0%, respectively), FEV(1)/FVC (78.0%±3.8%, 76.6%±4.3% and 67.3%±5.5%, respectively), DLCO/VA ((1.36±0.25), (1.30±0.22) and (1.21±0.22) mmol·min(-1)·kPa(-1)·L(-1), respectively) and RV/TLC (49.5%±6.6%, 45.9%±6.0% and 53.0%±6.4%, respectively) among the three groups were statistically significant (all P<0.05). In the control group, LAA%(-950) negatively correlated with FEV(1)/FVC and DLCO/VA (r=-0.32, P=0.04; r=-0.69, P=0.00), as did P(15-IN) with FEV(1)% (r=-0.14, P=0.02); conversely, P(15-IN) positively correlated with DLCO/VA (r=0.55, P=0.00). In the group of smokers without COPD, LAA%(-950) negatively correlated with FEV(1)/FVC and DLCO/VA (r=-0.31, P=0.04; r=-0.42, P=0.00), and P(15-IN) positively correlated with FEV(1)/FVC and DLCO/VA (r=0.33, P=0.02; r=0.30, P=0.04). In the group of smokers with COPD, LAA%(-950) negatively correlated with DLCO/VA (r=-0.62, P=0.00) but positively correlated with RV/TLC (r=0.59, P=0.00), and P(15-IN) positively correlated with DLCO/VA (r=0.53, P=0.01). Conclusions: Emphysema and air trapping in smokers can be effectively evaluated by double-phase MSCT. Moreover, two of the CT quantifiable parameters, LAA%(-950) and P(15-IN), are highly sensitive to changes in pulmonary function.
Non-ignorable missingness in logistic regression.
Wang, Joanna J J; Bartlett, Mark; Ryan, Louise
2017-08-30
Nonresponses and missing data are common in observational studies. Ignoring or inadequately handling missing data may lead to biased parameter estimation, incorrect standard errors and, as a consequence, incorrect statistical inference and conclusions. We present a strategy for modelling non-ignorable missingness where the probability of nonresponse depends on the outcome. Using a simple case of logistic regression, we quantify the bias in regression estimates and show that the observed likelihood is non-identifiable under a non-ignorable missing data mechanism. We then adopt a selection model factorisation of the joint distribution as the basis for a sensitivity analysis to study changes in estimated parameters and the robustness of study conclusions against different assumptions. A Bayesian framework for model estimation is used as it provides a flexible approach for incorporating different missing data assumptions and conducting sensitivity analysis. Using simulated data, we explore the performance of the Bayesian selection model in correcting for bias in a logistic regression. We then implement our strategy using survey data from the 45 and Up Study to investigate factors associated with worsening health from the baseline to follow-up survey. Our findings have practical implications for the use of the 45 and Up Study data to answer important research questions relating to health and quality-of-life. Copyright © 2017 John Wiley & Sons, Ltd.
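The simulation below illustrates the basic phenomenon the paper addresses: when the probability of observing the outcome depends on the outcome itself, a complete-case logistic regression is biased (in this outcome-only-dependent setting, primarily in the intercept). The true coefficients, the missingness strength gamma, and the sample size are assumed for illustration and are not the paper's simulation design.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 20000
x = rng.normal(size=n)
p_y = 1.0 / (1.0 + np.exp(-(-0.5 + 1.0 * x)))        # true model: beta0 = -0.5, beta1 = 1.0
y = rng.binomial(1, p_y)

# Non-ignorable missingness: probability of observing y depends on y itself
gamma = -1.5                                          # assumed log-odds shift when y = 1
p_obs = 1.0 / (1.0 + np.exp(-(1.0 + gamma * y)))
observed = rng.binomial(1, p_obs).astype(bool)

full_fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
cc_fit = sm.Logit(y[observed], sm.add_constant(x[observed])).fit(disp=0)
print("full-data estimates:    ", np.round(full_fit.params, 3))
print("complete-case estimates:", np.round(cc_fit.params, 3))   # intercept is biased under MNAR
```

In a selection-model sensitivity analysis, gamma is not estimable from the observed data alone, so one would repeat the corrected analysis over a grid of plausible gamma values and report how the conclusions change.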
Bohnert, Sara; Vair, Cory; Mikler, John
2010-05-15
A rapid and small-volume assay to quantify HI-6 in plasma was developed to further the development and licensing of an intravenous formulation of HI-6. The objective was to develop a sensitive and rapid assay that clearly resolved HI-6 and an internal standard in saline and plasma matrices. A fully validated method using ion-pair HPLC and 2-PAM as the internal standard fulfilled these requirements. Small plasma samples of 35 microL were extracted using acidification, filtration and neutralization. Linearity was shown over the range of 4 microg/mL to 1 mg/mL, with accuracy and precision within 6% relative error at the lower limit of detection. This method was utilized in the pharmacokinetic analysis of HI-6 dichloride (2Cl) and HI-6 dimethane sulfonate (DMS) in anaesthetized guinea pigs and domestic swine following an intravenous bolus administration. From the resultant pharmacokinetic parameters, a target plasma concentration of 100 microM was established and maintained in guinea pigs receiving an intravenous infusion. This validated method allows for the analysis of low-volume samples and increased sample numbers, and is applicable to the determination of pharmacokinetic profiles and parameters. Copyright (c) 2010. Published by Elsevier B.V.
NASA Technical Reports Server (NTRS)
Myers, Jerry G.; Young, M.; Goodenow, Debra A.; Keenan, A.; Walton, M.; Boley, L.
2015-01-01
Model and simulation (MS) credibility is defined as the quality to elicit belief or trust in MS results. NASA-STD-7009 [1] delineates eight components (Verification, Validation, Input Pedigree, Results Uncertainty, Results Robustness, Use History, MS Management, People Qualifications) that address quantifying model credibility, and provides guidance to model developers, analysts, and end users for assessing MS credibility. Of the eight components, input pedigree, or the quality of the data used to develop model input parameters, governing functions, or initial conditions, can vary significantly. These data quality differences have varying consequences across the range of MS applications. NASA-STD-7009 requires that the lowest input data quality be used to represent the entire set of input data when scoring the input pedigree credibility of the model. This requirement provides a conservative assessment of model inputs and maximizes the communication of the potential level of risk of using model outputs. Unfortunately, in practice, this may result in overly pessimistic communication of the MS output, undermining the credibility of simulation predictions to decision makers. This presentation proposes an alternative assessment mechanism, utilizing results parameter robustness, also known as model input sensitivity, to improve the credibility scoring process for specific simulations.
Quantifying the underlying landscape and paths of cancer
Li, Chunhe; Wang, Jin
2014-01-01
Cancer is a disease regulated by the underlying gene networks. The emergence of normal and cancer states, as well as the transformation between them, can be thought of as a result of the gene network interactions and associated changes. We developed a global potential landscape and path framework to quantify cancer and associated processes. We constructed a cancer gene regulatory network based on experimental evidence and uncovered the underlying landscape. The resulting tristable landscape characterizes important biological states: normal, cancer and apoptosis. The landscape topography, in terms of barrier heights between stable state attractors, quantifies the global stability of the cancer network system. We propose two mechanisms of cancerization: one is through changes of the landscape topography via changes in the regulation strengths of the gene networks; the other is through fluctuations that help the system go over the critical barrier at fixed landscape topography. The kinetic paths from the least action principle quantify the transition processes among the normal, cancer and apoptosis states. The kinetic rates provide the quantification of transition speeds among the normal, cancer and apoptosis attractors. Through global sensitivity analysis of the gene network parameters on the landscape topography, we uncovered key gene regulations determining the transitions between cancer and normal states. This can be used to guide the design of new anti-cancer tactics, through a cocktail strategy of targeting multiple key regulation links simultaneously, to prevent cancer occurrence or transform the early cancer state back to the normal state. PMID:25232051
NASA Astrophysics Data System (ADS)
Johnson, Roger H.; Karau, Kelly L.; Molthen, Robert C.; Haworth, Steven T.; Dawson, Christopher A.
2000-04-01
We developed methods to quantify arterial structural and mechanical properties in excised rat lungs and applied them to investigate the distensibility decrease accompanying chronic hypoxia-induced pulmonary hypertension. Lungs of control and hypertensive (three weeks 11% O2) animals were excised and a contrast agent introduced before micro-CT imaging with a special purpose scanner. For each lung, four 3D image data sets were obtained, each at a different intra-arterial contrast agent pressure. Vessel segment diameters and lengths were measured at all levels in the arterial tree hierarchy, and these data used to generate features sensitive to distensibility changes. Results indicate that measurements obtained from 3D micro-CT images can be used to quantify vessel biomechanical properties in this rat model of pulmonary hypertension and that distensibility is reduced by exposure to chronic hypoxia. Mechanical properties can be assessed in a localized fashion and quantified in a spatially-resolved way or as a single parameter describing the tree as a whole. Micro-CT is a nondestructive way to rapidly assess structural and mechanical properties of arteries in small animal organs maintained in a physiological state. Quantitative features measured by this method may provide valuable insights into the mechanisms causing the elevated pressures in pulmonary hypertension of differing etiologies and should become increasingly valuable tools in the study of complex phenotypes in small-animal models of important diseases such as hypertension.
Attomole quantitation of protein separations with accelerator mass spectrometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vogel, J S; Grant, P G; Buccholz, B A
2000-12-15
Quantification of specific proteins depends on separation by chromatography or electrophoresis followed by chemical detection schemes such as staining and fluorophore adhesion. Chemical exchange of short-lived isotopes, particularly sulfur, is also prevalent despite the inconveniences of counting radioactivity. Physical methods based on isotopic and elemental analyses offer highly sensitive protein quantitation that has linear response over wide dynamic ranges and is independent of protein conformation. Accelerator mass spectrometry quantifies long-lived isotopes such as 14C to sub-attomole sensitivity. We quantified protein interactions with small molecules such as toxins, vitamins, and natural biochemicals at precisions of 1-5%. Micro-proton-induced-X-ray-emission quantifies elemental abundances in separated metalloprotein samples to nanogram amounts and is capable of quantifying phosphorylated loci in gels. Accelerator-based quantitation is a possible tool for quantifying the genome translation into the proteome.
Zuo, Houjuan; Yan, Jiangtao; Zeng, Hesong; Li, Wenyu; Li, Pengcheng; Liu, Zhengxiang; Cui, Guanglin; Lv, Jiagao; Wang, Daowen; Wang, Hong
2015-01-01
Global longitudinal strain (GLS) measured by 2-D speckle-tracking echocardiography (2-D STE) at rest has been recognized as a sensitive parameter in the detection of significant coronary artery disease (CAD). However, the diagnostic power of 2-D STE in the detection of significant CAD in patients with diabetes mellitus is unknown. Two-dimensional STE features were studied in a total of 143 consecutive patients who underwent echocardiography and coronary angiography. Left ventricular global and segmental peak systolic longitudinal strains (PSLSs) were quantified by speckle-tracking imaging. In the presence of obstructive CAD (defined as stenosis ≥75%), global PSLS was significantly lower in patients with diabetes mellitus than in patients without (16.65 ± 2.29% vs. 17.32 ± 2.27%, p < 0.05). Receiver operating characteristic analysis revealed that global PSLS could effectively detect obstructive CAD in patients without diabetes mellitus (cutoff value: -18.35%, sensitivity: 78.8%, specificity: 77.5%). However, global PSLS could detect obstructive CAD in diabetic patients only at a lower cutoff value with inadequate sensitivity and specificity (cutoff value: -17.15%; sensitivity: 61.1%, specificity: 52.9%). In addition, the results for segmental PSLS were similar to those for global PSLS. In conclusion, global and segmental PSLSs at rest were significantly lower in patients with both obstructive CAD and diabetes mellitus than in patients with obstructive CAD only; thus, PSLSs at rest might not be a useful parameter in the detection of obstructive CAD in patients with diabetes mellitus. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
Optimization of Parameter Ranges for Composite Tape Winding Process Based on Sensitivity Analysis
NASA Astrophysics Data System (ADS)
Yu, Tao; Shi, Yaoyao; He, Xiaodong; Kang, Chao; Deng, Bo; Song, Shibo
2017-08-01
This study focuses on the parameter sensitivity of the winding process for composite prepreg tape. Methods of multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis are proposed. A polynomial empirical model of interlaminar shear strength is established by the response surface experimental method. Using this model, the relative sensitivity of the key process parameters, including temperature, tension, pressure and velocity, is calculated, and the single-parameter sensitivity curves are obtained. From the analysis of the sensitivity curves, the stability and instability ranges of each parameter are identified. Finally, an optimization method for the winding process parameters is developed. The analysis results show that the optimized ranges of the process parameters for interlaminar shear strength are: temperature within [100 °C, 150 °C], tension within [275 N, 387 N], pressure within [800 N, 1500 N], and velocity within [0.2 m/s, 0.4 m/s], respectively.
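The idea of a single-parameter sensitivity curve derived from a fitted response surface can be sketched as below: the curve is the derivative of the empirical model along one parameter with the others held at their centre values, and the "stable" interval is where that derivative stays small. The response-surface coefficients and the sensitivity threshold are invented for illustration and are not the paper's fitted model.

```python
import numpy as np

# Hypothetical quadratic response surface for interlaminar shear strength (MPa)
# as a function of temperature T (°C), other parameters held at centre values.
def shear_strength(T):
    return -0.002 * (T - 130.0) ** 2 + 0.05 * (T - 130.0) + 70.0

T = np.linspace(80.0, 180.0, 201)
y = shear_strength(T)

# Single-parameter sensitivity curve: numerical derivative of the response surface
sensitivity = np.gradient(y, T)

# "Stable" range: where the response changes slowly with the parameter (assumed threshold)
stable = T[np.abs(sensitivity) < 0.1]
print(f"stable temperature interval ~ [{stable.min():.0f}, {stable.max():.0f}] °C")
```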
A Bayesian ensemble data assimilation to constrain model parameters and land-use carbon emissions
NASA Astrophysics Data System (ADS)
Lienert, Sebastian; Joos, Fortunat
2018-05-01
A dynamic global vegetation model (DGVM) is applied in a probabilistic framework and benchmarking system to constrain uncertain model parameters by observations and to quantify carbon emissions from land-use and land-cover change (LULCC). Processes featured in DGVMs include parameters which are prone to substantial uncertainty. To cope with these uncertainties Latin hypercube sampling (LHS) is used to create a 1000-member perturbed parameter ensemble, which is then evaluated with a diverse set of global and spatiotemporally resolved observational constraints. We discuss the performance of the constrained ensemble and use it to formulate a new best-guess version of the model (LPX-Bern v1.4). The observationally constrained ensemble is used to investigate historical emissions due to LULCC (ELUC) and their sensitivity to model parametrization. We find a global ELUC estimate of 158 (108, 211) PgC (median and 90 % confidence interval) between 1800 and 2016. We compare ELUC to other estimates both globally and regionally. Spatial patterns are investigated and estimates of ELUC of the 10 countries with the largest contribution to the flux over the historical period are reported. We consider model versions with and without additional land-use processes (shifting cultivation and wood harvest) and find that the difference in global ELUC is on the same order of magnitude as parameter-induced uncertainty and in some cases could potentially even be offset with appropriate parameter choice.
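Generating such a perturbed parameter ensemble with Latin hypercube sampling is straightforward with recent SciPy, as sketched below; the parameter names and ranges are placeholders rather than actual LPX-Bern parameters.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical subset of uncertain DGVM parameters with plausible ranges
names = ["photosynthesis_scale", "wood_turnover_yr", "soil_resp_q10", "fire_return_yr"]
lower = np.array([0.5, 20.0, 1.2, 50.0])
upper = np.array([1.5, 80.0, 3.0, 500.0])

sampler = qmc.LatinHypercube(d=len(names), seed=42)
unit = sampler.random(n=1000)              # 1000-member ensemble in the unit hypercube
ensemble = qmc.scale(unit, lower, upper)   # scale to the parameter ranges

# Each row is one parameter set to be run through the model and scored
# against the observational constraints (benchmarking system).
print(ensemble.shape)
print(np.round(ensemble[:2], 2))
```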
Pinkerton, JoAnn V; Abraham, Lucy; Bushmakin, Andrew G; Cappelleri, Joseph C; Komm, Barry S
2016-10-01
This study characterizes and quantifies the relationship of vasomotor symptoms (VMS) of menopause with menopause-specific quality of life (MSQOL) and sleep parameters to help predict treatment outcomes and inform treatment decision-making. Data were derived from a 12-week randomized, double-blind, placebo-controlled phase 3 trial that evaluated effects of two doses of conjugated estrogens/bazedoxifene on VMS in nonhysterectomized postmenopausal women (N = 318, mean age = 53.39) experiencing at least seven moderate to severe hot flushes (HFs) per day or at least 50 per week. Repeated measures models were used to determine relationships between HF frequency and severity and outcomes on the Menopause-Specific Quality of Life questionnaire and the Medical Outcomes Study sleep scale. Sensitivity analyses were performed to check assumptions of linearity between VMS and outcomes. Frequency and severity of HFs showed approximately linear relationships with MSQOL and sleep parameters. Sensitivity analyses supported assumptions of linearity. The largest changes associated with a reduction of five HFs and a 0.5-point decrease in severity occurred in the Menopause-Specific Quality of Life vasomotor functioning domain (0.78 for number of HFs and 0.98 for severity) and the Medical Outcomes Study sleep disturbance (7.38 and 4.86) and sleep adequacy (-5.60 and -4.66) domains and the two overall sleep problems indices (SPI: 5.17 and 3.63; SPII: 5.82 and 3.83). Frequency and severity of HFs have an approximately linear relationship with MSQOL and sleep parameters-that is, improvements in HFs are associated with improvements in MSQOL and sleep. Such relationships may enable clinicians to predict changes in sleep and MSQOL expected from various VMS treatments.
Miyabara, Renata; Berg, Karsten; Kraemer, Jan F; Baltatu, Ovidiu C; Wessel, Niels; Campos, Luciana A
2017-01-01
Objective: The aim of this study was to identify the most sensitive heart rate and blood pressure variability (HRV and BPV) parameters from a given set of well-known methods for the quantification of cardiovascular autonomic function after several autonomic blockades. Methods: Cardiovascular sympathetic and parasympathetic functions were studied in freely moving rats following peripheral muscarinic (methylatropine), β1-adrenergic (metoprolol), muscarinic + β1-adrenergic, α1-adrenergic (prazosin), and ganglionic (hexamethonium) blockades. Time domain, frequency domain and symbolic dynamics measures for each of HRV and BPV were classified through paired Wilcoxon tests for all autonomic drugs separately. In order to select those variables that have a high relevance to, and stable influence on, our target measurements (HRV, BPV), we used Fisher's method to combine the p-values of the multiple tests. Results: This analysis led to the following best set of cardiovascular variability parameters: the mean normal beat-to-beat interval/value (HRV/BPV: meanNN), the coefficient of variation (cvNN = standard deviation over meanNN) and the root mean square of successive differences (RMSSD) from the time domain analysis. In the frequency domain analysis the very-low-frequency (VLF) component was selected. From symbolic dynamics, Shannon entropy of the word distribution (FWSHANNON) as well as POLVAR3, the non-linear parameter to detect intermittently decreased variability, showed the best ability to discriminate between the different autonomic blockades. Conclusion: Through a complex comparative analysis of HRV and BPV measures altered by a set of autonomic drugs, we identified the most sensitive set of informative cardiovascular variability indexes able to pick up the modifications imposed by the autonomic challenges. These indexes may help to increase our understanding of cardiovascular sympathetic and parasympathetic functions in translational studies of experimental diseases.
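Fisher's method for combining p-values, as used above to rank the variability indexes, can be sketched in a few lines; the p-values here are illustrative, not results from the study.

```python
import numpy as np
from scipy import stats

# Illustrative p-values of one HRV index (e.g., meanNN) from the paired Wilcoxon
# tests across the different autonomic blockades
pvals = np.array([0.003, 0.04, 0.012, 0.20, 0.008])

# Fisher's method: X = -2 * sum(ln p) follows a chi-square with 2k degrees of freedom
statistic = -2.0 * np.sum(np.log(pvals))
combined_p = stats.chi2.sf(statistic, df=2 * len(pvals))
print(statistic, combined_p)

# Equivalent one-liner available in SciPy
print(stats.combine_pvalues(pvals, method="fisher"))
```

Indexes whose combined p-value is small across all blockades are the ones with a high and stable influence, which is the selection criterion described in the abstract.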
NASA Astrophysics Data System (ADS)
Thober, S.; Cuntz, M.; Mai, J.; Samaniego, L. E.; Clark, M. P.; Branch, O.; Wulfmeyer, V.; Attinger, S.
2016-12-01
Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The agility of the models to react to different meteorological conditions is artificially constrained by having hard-coded parameters in their equations. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options in addition to the 71 standard parameters. We performed a Sobol' global sensitivity analysis to variations of the standard and hard-coded parameters. The sensitivities of the hydrologic output fluxes latent heat and total runoff, their component fluxes, as well as photosynthesis and sensible heat were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Latent heat and total runoff show very similar sensitivities towards standard and hard-coded parameters. They are sensitive to both soil and plant parameters, which means that model calibrations of hydrologic or land surface models should take both soil and plant parameters into account. Sensible and latent heat exhibit almost the same sensitivities so that calibration or sensitivity analysis can be performed with either of the two. Photosynthesis has almost the same sensitivities as transpiration, which are different from the sensitivities of latent heat. Including photosynthesis and latent heat in model calibration might therefore be beneficial. Surface runoff is sensitive to almost all hard-coded snow parameters. These sensitivities get, however, diminished in total runoff. It is thus recommended to include the most sensitive hard-coded model parameters that were exposed in this study when calibrating Noah-MP.
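A Sobol' sensitivity analysis of the kind described can be sketched with the SALib package; here a toy surrogate function stands in for a Noah-MP output flux, and the parameter names and bounds are invented for illustration.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Toy surrogate standing in for an output flux (e.g., latent heat), with three
# illustrative parameters: a soil-surface-resistance factor, a plant parameter,
# and a snow parameter that barely matters for this flux.
problem = {
    "num_vars": 3,
    "names": ["rsurf_factor", "vcmax_scale", "snow_emissivity"],
    "bounds": [[1.0, 10.0], [0.5, 1.5], [0.9, 1.0]],
}

def surrogate(X):
    return np.exp(-X[:, 0] / 5.0) * 400.0 * X[:, 1] + 0.5 * X[:, 2]

X = saltelli.sample(problem, 1024)       # Saltelli sampling for Sobol' indices
Y = surrogate(X)
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], np.round(Si["S1"], 3))))   # first-order indices
print(dict(zip(problem["names"], np.round(Si["ST"], 3))))   # total-order indices
```

In the actual study the evaluated function is the land surface model itself, run over the twelve catchments, with both the standard and the exposed hard-coded parameters included in the problem definition.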
NASA Astrophysics Data System (ADS)
Albano, R.; Sole, A.; Mancusi, L.; Cantisani, A.; Perrone, A.
2017-12-01
The considerable increase in flood damages over the past decades has shifted the attention in Europe from protection against floods to managing flood risks. In this context, the assessment of expected damages represents crucial information within the overall flood risk management process. The present paper proposes an open source software package, called FloodRisk, that is able to operatively support stakeholders in decision making with a what-if approach by carrying out rapid assessments of flood consequences in terms of direct economic damage and loss of human lives. The evaluation of damage scenarios through the GIS software proposed here is essential for cost-benefit or multi-criteria analysis of risk mitigation alternatives. However, considering that the quantitative assessment of flood damage scenarios is characterized by intrinsic uncertainty, a scheme has been developed to identify and quantify the role of the input parameters in the total uncertainty of flood loss model application in urban areas with mild terrain and complex topography. Using the concept of parallel models, the contribution of different modules and input parameters to the total uncertainty is quantified. The results of the present case study exhibit a high epistemic uncertainty in the damage estimation module and, in particular, in the type and form of the damage functions used, which have been adapted and transferred from different geographic and socio-economic contexts because no depth-damage functions have been specifically developed for Italy. Considering that uncertainty and sensitivity depend considerably on local characteristics, the epistemic uncertainty associated with the risk estimate is reduced by introducing additional information into the risk analysis. In light of the results obtained, the need to produce and disseminate (open) data for developing micro-scale vulnerability curves is evident. Moreover, there is an urgent need to push forward research into the implementation of methods and models for the assimilation of uncertainties in decision-making processes.
Uncertainty in temperature response of current consumption-based emissions estimates
NASA Astrophysics Data System (ADS)
Karstensen, J.; Peters, G. P.; Andrew, R. M.
2014-09-01
Several studies have connected emissions of greenhouse gases to economic and trade data to quantify the causal chain from consumption to emissions and climate change. These studies usually combine data and models originating from different sources, making it difficult to estimate uncertainties in the end results. We estimate uncertainties in economic data, multi-pollutant emission statistics and metric parameters, and use Monte Carlo analysis to quantify contributions to uncertainty and to determine how uncertainty propagates to estimates of global temperature change from regional and sectoral territorial- and consumption-based emissions for the year 2007. We find that the uncertainties are sensitive to the emission allocations, the mix of pollutants included, the metric and its time horizon, and the level of aggregation of the results. Uncertainties in the final results are largely dominated by the climate sensitivity and the parameters associated with the warming effects of CO2. The economic data have a relatively small impact on uncertainty at the global and national level, while much higher uncertainties are found at the sectoral level. Our results suggest that consumption-based national emissions are not significantly more uncertain than the corresponding production-based emissions, since the largest uncertainties are due to the metric and emissions, which affect both perspectives equally. The two perspectives exhibit different sectoral uncertainties, due to changes in pollutant composition. We find global sectoral consumption uncertainties in the range of ±9-±27% using the global temperature potential with a 50 year time horizon, with metric uncertainties dominating. National level uncertainties are similar in both perspectives due to the dominance of CO2 over other pollutants. The consumption emissions of the top 10 emitting regions have a broad uncertainty range of ±9-±25%, with metric and emissions uncertainties contributing similarly. The absolute global temperature potential with a 50 year time horizon has much higher uncertainties, with considerable uncertainty overlap for regions and sectors, indicating that the ranking of countries is uncertain.
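The Monte Carlo propagation step can be sketched as below for a single sector: emissions and metric values are sampled from assumed distributions and multiplied into temperature contributions, whose percentile spread gives the reported uncertainty range. The emission level, metric value, distribution shapes and spreads are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Illustrative 2007 consumption-based CO2 emissions of one sector (GtC) with an
# assumed relative uncertainty, and an assumed GTP-50 metric with its own uncertainty.
emission = 0.8 * rng.lognormal(mean=0.0, sigma=0.08, size=n)       # ~8 % emissions uncertainty
gtp50_per_gtc = 1.7e-3 * rng.normal(loc=1.0, scale=0.15, size=n)   # K per GtC, ~15 % metric uncertainty

delta_T = emission * gtp50_per_gtc             # temperature contribution (K) after 50 years
lo, med, hi = np.percentile(delta_T, [5, 50, 95])
print(f"dT = {med:.2e} K  (5-95 %: {lo:.2e} .. {hi:.2e}),  ~ +/-{100 * (hi - lo) / (2 * med):.0f} %")
```

Repeating the sampling with one input held fixed at its central value isolates that input's contribution to the total variance, which is how the relative importance of economic, emission and metric uncertainties is attributed.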
NASA Astrophysics Data System (ADS)
Cioaca, Alexandru
A deep scientific understanding of complex physical systems, such as the atmosphere, can be achieved neither by direct measurements nor by numerical simulations alone. Data assimilation is a rigorous procedure to fuse information from a priori knowledge of the system state, the physical laws governing the evolution of the system, and real measurements, all with associated error statistics. Data assimilation produces best (a posteriori) estimates of model states and parameter values, and results in considerably improved computer simulations. The acquisition and use of observations in data assimilation raises several important scientific questions related to optimal sensor network design, quantification of data impact, pruning redundant data, and identifying the most beneficial additional observations. These questions originate in operational data assimilation practice, and have started to attract considerable interest in the recent past. This dissertation advances the state of knowledge in four dimensional variational (4D-Var) data assimilation by developing, implementing, and validating a novel computational framework for estimating observation impact and for optimizing sensor networks. The framework builds on the powerful methodologies of second-order adjoint modeling and the 4D-Var sensitivity equations. Efficient computational approaches for quantifying the observation impact include matrix free linear algebra algorithms and low-rank approximations of the sensitivities to observations. The sensor network configuration problem is formulated as a meta-optimization problem. Best values for parameters such as sensor location are obtained by optimizing a performance criterion, subject to the constraint posed by the 4D-Var optimization. Tractable computational solutions to this "optimization-constrained" optimization problem are provided. The results of this work can be directly applied to the deployment of intelligent sensors and adaptive observations, as well as to reducing the operating costs of measuring networks, while preserving their ability to capture the essential features of the system under consideration.
Hydrologic sensitivity of headwater catchments to climate and landscape variability
NASA Astrophysics Data System (ADS)
Kelleher, Christa; Wagener, Thorsten; McGlynn, Brian; Nippgen, Fabian; Jencso, Kelsey
2013-04-01
Headwater streams cumulatively represent an extensive portion of the United States stream network, yet remain largely unmonitored and unmapped. As such, we have limited understanding of how these systems will respond to change, knowledge that is important for preserving these unique ecosystems, the services they provide, and the biodiversity they support. We compare responses across five adjacent headwater catchments located in Tenderfoot Creek Experimental Forest in Montana, USA, to understand how local differences may affect the sensitivity of headwaters to change. We utilize global, variance-based sensitivity analysis to understand which aspects of the physical system (e.g., vegetation, topography, geology) control the variability in hydrologic behavior across these basins, and how this varies as a function of time (and therefore climate). Basin fluxes and storages, including evapotranspiration, snow water equivalent and melt, soil moisture and streamflow, are simulated using the Distributed Hydrology-Vegetation-Soil Model (DHSVM). Sensitivity analysis is applied to quantify the importance of different physical parameters to the spatial and temporal variability of different water balance components, allowing us to map similarities and differences in these controls through space and time. Our results show how catchment influences on fluxes vary across seasons (thus providing insight into transferability of knowledge in time), and how they vary across catchments with different physical characteristics (providing insight into transferability in space).
NASA Astrophysics Data System (ADS)
Lizana, A.; Foldyna, M.; Stchakovsky, M.; Georges, B.; Nicolas, D.; Garcia-Caurel, E.
2013-03-01
High sensitivity of spectroscopic ellipsometry and reflectometry for the characterization of thin films can strongly decrease when layers, typically metals, absorb a significant fraction of the light. In this paper, we propose a solution to overcome this drawback using total internal reflection ellipsometry (TIRE) and exciting a surface longitudinal wave: a plasmon-polariton. As in the attenuated total reflectance technique, TIRE exploits a minimum in the intensity of reflected transversal magnetic (TM) polarized light and enhances the sensitivity of standard methods to thicknesses of absorbing films. Samples under study were stacks of three films, ZnO : Al/Ag/ZnO : Al, deposited on glass substrates. The thickness of the silver layer varied from sample to sample. We performed measurements with a UV-visible phase-modulated ellipsometer, an IR Mueller ellipsometer and a UV-NIR reflectometer. We used the variance-covariance formalism to evaluate the sensitivity of the ellipsometric data to different parameters of the optical model. Results have shown that using TIRE doubled the sensitivity to the silver layer thickness when compared with the standard ellipsometry. Moreover, the thickness of the ZnO : Al layer below the silver layer can be reliably quantified, unlike for the fit of the standard ellipsometry data, which is limited by the absorption of the silver layer.
NASA Astrophysics Data System (ADS)
Nikolić, G. S.; Žerajić, S.; Cakić, M.
2011-10-01
Multivariate calibration is a powerful mathematical tool that can be applied in analytical chemistry when the analytical signals are highly overlapped. A method with regression by partial least squares is proposed for the simultaneous spectrophotometric determination of adrenergic vasoconstrictors in a decongestive solution containing two active components: phenylephrine hydrochloride and trimazoline hydrochloride. These sympathomimetic agents are frequently associated in pharmaceutical formulations against the common cold. The proposed method, which is simple and rapid, offers the advantages of sensitivity and a wide range of determinations without the need for extraction of the vasoconstrictors. In order to minimize the number of factors necessary to obtain the calibration matrix by multivariate calibration, different parameters were evaluated. The adequate selection of the spectral regions proved to be important for the number of factors. In order to simultaneously quantify both hydrochlorides among the excipients, the spectral region between 250 and 290 nm was selected. Recoveries for the vasoconstrictors were 98-101%. The developed method was applied to the assay of two decongestive pharmaceutical preparations.
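A partial least squares calibration of this kind can be sketched with scikit-learn; the spectra below are synthetic stand-ins for measured calibration standards, and the band positions, concentration ranges and number of factors are assumptions for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
wavelengths = np.linspace(250, 290, 81)          # selected spectral region (nm)

def band(centre, width):
    return np.exp(-((wavelengths - centre) / width) ** 2)

# Synthetic overlapped spectra of two-component calibration mixtures
conc = rng.uniform(2.0, 20.0, size=(40, 2))      # [component 1, component 2] concentrations
spectra = conc @ np.vstack([band(272, 8), band(262, 10)]) \
          + 0.002 * rng.normal(size=(40, wavelengths.size))

pls = PLSRegression(n_components=4)              # number of factors chosen by validation
pred = cross_val_predict(pls, spectra, conc, cv=5)
recovery = 100.0 * pred.mean(axis=0) / conc.mean(axis=0)
print("mean recovery (%):", np.round(recovery, 1))
```

Cross-validated recovery close to 100% for both components plays the role of the 98-101% recoveries reported in the abstract.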
Future DUNE constraints on EFT
NASA Astrophysics Data System (ADS)
Falkowski, Adam; Grilli di Cortona, Giovanni; Tabrizi, Zahra
2018-04-01
In the near future, fundamental interactions at high-energy scales may be most efficiently studied via precision measurements at low energies. A universal language to assemble and interpret precision measurements is the so-called SMEFT, which is an effective field theory (EFT) where the Standard Model (SM) Lagrangian is extended by higher-dimensional operators. In this paper we investigate the possible impact of the DUNE neutrino experiment on constraining the SMEFT. The unprecedented neutrino flux offers an opportunity to greatly improve the current limits via precision measurements of the trident production and neutrino scattering off electrons and nuclei in the DUNE near detector. We quantify the DUNE sensitivity to dimension-6 operators in the SMEFT Lagrangian, and find that in some cases operators suppressed by an O(30) TeV scale can be probed. We also compare the DUNE reach to that of future experiments involving atomic parity violation and polarization asymmetry in electron scattering, which are sensitive to an overlapping set of SMEFT parameters.
Sensitivity vector fields in time-delay coordinate embeddings: theory and experiment.
Sloboda, A R; Epureanu, B I
2013-02-01
Identifying changes in the parameters of a dynamical system can be vital in many diagnostic and sensing applications. Sensitivity vector fields (SVFs) are one way of identifying such parametric variations by quantifying their effects on the morphology of a dynamical system's attractor. In many cases, SVFs are a more effective means of identification than commonly employed modal methods. Previously, it has only been possible to construct SVFs for a given dynamical system when a full set of state variables is available. This severely restricts SVF applicability because it may be cost prohibitive, or even impossible, to measure the entire state in high-dimensional systems. Thus, the focus of this paper is constructing SVFs with only partial knowledge of the state by using time-delay coordinate embeddings. Local models are employed in which the embedded states of a neighborhood are weighted in a way referred to as embedded point cloud averaging. Application of the presented methodology to both simulated and experimental time series demonstrates its utility and reliability.
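Constructing the time-delay coordinate embedding that underlies this approach is simple in code; the signal, embedding dimension and delay below are illustrative choices rather than values from the paper.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay coordinate embedding of a scalar time series x
    with embedding dimension `dim` and delay `tau` (in samples)."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Scalar measurement from a system whose full state is not accessible
t = np.arange(0, 200, 0.05)
x = np.sin(t) + 0.3 * np.sin(2.2 * t)            # illustrative signal

attractor = delay_embed(x, dim=3, tau=15)        # reconstructed attractor points
print(attractor.shape)
# Neighborhoods of these embedded points feed the local models (embedded point
# cloud averaging) from which the sensitivity vector fields are built.
```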
A Bayesian inferential approach to quantify the transmission intensity of disease outbreak.
Kadi, Adiveppa S; Avaradi, Shivakumari R
2015-01-01
The emergence of infectious diseases like the influenza pandemic (H1N1) of 2009 has become a great concern and has posed new challenges to health authorities worldwide. To control these diseases, various studies have been developed in the field of mathematical modelling, which is a useful tool for understanding epidemiological dynamics and their dependence on social mixing patterns. We used a Bayesian approach to quantify the disease outbreak through the key epidemiological parameter, the basic reproduction number (R0), using effective contacts, defined as the sum of the products of incidence cases and the generation time distribution probabilities. We estimated R0 from daily case incidence data for pandemic influenza A/H1N1 2009 in India for the initial phase. The estimated R0 with its 95% credible interval is consistent with several other studies on the same strain. Through sensitivity analysis, our study indicates that infectiousness affects the estimate of R0. The basic reproduction number R0 provides useful information to the public health system for efforts to control the disease using mitigation strategies such as vaccination, quarantine, and so forth.
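The "effective contacts" quantity described above leads to a simple point estimator of R0; the sketch below shows that deterministic version (the paper itself embeds it in a Bayesian framework), with an invented incidence series and an assumed generation-time distribution.

```python
import numpy as np

# Illustrative daily incidence for the initial outbreak phase (cases/day)
incidence = np.array([10, 12, 13, 15, 17, 20, 23, 26, 30, 35, 40, 46], dtype=float)

# Discretized generation-time distribution (probabilities for lags 1..5 days, assumed)
w = np.array([0.1, 0.3, 0.3, 0.2, 0.1])

def effective_contacts(inc, w, t):
    """Sum over lags of past incidence times the generation-time probability."""
    lags = np.arange(1, len(w) + 1)
    past = np.array([inc[t - s] if t - s >= 0 else 0.0 for s in lags])
    return np.sum(w * past)

r_estimates = [incidence[t] / effective_contacts(incidence, w, t)
               for t in range(len(w), len(incidence))]
print(np.round(r_estimates, 2), "mean R0 ~", round(float(np.mean(r_estimates)), 2))
```

The sensitivity analysis mentioned in the abstract corresponds to repeating such an estimate under different assumed generation-time (infectiousness) distributions.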
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giraldi, M. R.; Francois, J. L.; Castro-Uriegas, D.
The purpose of this paper is to quantify the greenhouse gas (GHG) emissions associated with the hydrogen produced by the sulfur-iodine thermochemical process, coupled to a high temperature nuclear reactor, and to compare the results with other life cycle analysis (LCA) studies on hydrogen production technologies, both conventional and emerging. The LCA tool was used to quantify the impacts associated with climate change. The product system was defined by the following steps: (i) extraction and manufacturing of raw materials (upstream flows), (ii) external energy supplied to the system, (iii) nuclear power plant, and (iv) hydrogen production plant. Particular attention was focused on those processes where there was limited information from the literature about inventory data, such as TRISO fuel manufacture and the production of iodine. The results show that the electric power supplied to the hydrogen plant is a sensitive parameter for GHG emissions. When the nuclear power plant supplied the electrical power, low GHG emissions were obtained. These results improve on those reported for conventional hydrogen production methods, such as steam reforming. (authors)
Deng, Bo; Shi, Yaoyao; Yu, Tao; Kang, Chao; Zhao, Pan
2018-01-31
The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performances of the winding products. In this article, two different object values of winding products, including mechanical performance (tensile strength) and a physical property (void content), were respectively calculated. Thereafter, the paper presents an integrated methodology by combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding processing. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for winding products manufacturing.
Rezende, Vinícius Marcondes; Rivellis, Ariane Julio; Gomes, Melissa Medrano; Dörr, Felipe Augusto; Novaes, Mafalda Megumi Yoshinaga; Nardinelli, Luciana; Costa, Ariel Lais de Lima; Chamone, Dalton de Alencar Fisher; Bendit, Israel
2013-01-01
Objective The goal of this study was to monitor imatinib mesylate therapeutically in the Tumor Biology Laboratory, Department of Hematology and Hemotherapy, Hospital das Clínicas, Faculdade de Medicina, Universidade de São Paulo (USP). A simple and sensitive method to quantify imatinib and its metabolite (CGP74588) in human serum was developed and fully validated in order to monitor treatment compliance. Methods The method used to quantify these compounds in serum included protein precipitation extraction followed by instrumental analysis using high performance liquid chromatography coupled with mass spectrometry. The method was validated for several parameters, including selectivity, precision, accuracy, recovery and linearity. Results The parameters evaluated during the validation stage exhibited satisfactory results based on the Food and Drug Administration and the Brazilian Health Surveillance Agency (ANVISA) guidelines for validating bioanalytical methods. These parameters also showed a linear correlation greater than 0.99 for the concentration range between 0.500 µg/mL and 10.0 µg/mL and a total analysis time of 13 minutes per sample. This study includes results (imatinib serum concentrations) for 308 samples from patients being treated with imatinib mesylate. Conclusion The method developed in this study was successfully validated and is being efficiently used to measure imatinib concentrations in samples from chronic myeloid leukemia patients to check treatment compliance. The imatinib serum levels of patients achieving a major molecular response were significantly higher than those of patients who did not achieve this result. These results are thus consistent with published reports concerning other populations. PMID:23741187
Noël, Thierry
2012-01-01
We developed a new in vitro model for a multi-parameter characterization of the time course interaction of Candida fungal cells with J774 murine macrophages and human neutrophils, based on the use of combined microscopy, fluorometry, flow cytometry and viability assays. Using fluorochromes specific to phagocytes and yeasts, we could accurately quantify various parameters simultaneously in a single infection experiment: at the individual cell level, we measured the association of phagocytes to fungal cells and phagocyte survival, and monitored in parallel the overall phagocytosis process by measuring the part of ingested fungal cells among the total fungal biomass that changed over time. Candida albicans, C. glabrata, and C. lusitaniae were used as a proof of concept: they exhibited species-specific differences in their association rate with phagocytes. The fungal biomass uptaken by the phagocytes differed significantly according to the Candida species. The measure of the survival of fungal and immune cells during the interaction showed that C. albicans was the more aggressive yeast in vitro, destroying the vast majority of the phagocytes within five hours. All three species of Candida were able to survive and to escape macrophage phagocytosis either by the intraphagocytic yeast-to-hyphae transition (C. albicans) and the fungal cell multiplication until phagocytes burst (C. glabrata, C. lusitaniae), or by the avoidance of phagocytosis (C. lusitaniae). We demonstrated that our model was sensitive enough to quantify small variations of the parameters of the interaction. The method has been conceived to be amenable to the high-throughput screening of mutants in order to unravel the molecular mechanisms involved in the interaction between yeasts and host phagocytes. PMID:22479332
Raso, Alessandro; Vecchio, Donatella; Cappelli, Enrico; Ropolo, Monica; Poggi, Alessandro; Nozza, Paolo; Biassoni, Roberto; Mascelli, Samantha; Capra, Valeria; Kalfas, Fotios; Severi, Paolo; Frosina, Guido
2012-09-01
Previous studies have shown that tumor-driving glioma stem cells (GSC) may promote radio-resistance by constitutive activation of the DNA damage response started by the ataxia telangiectasia mutated (ATM) protein. We have investigated whether GSC may be specifically sensitized to ionizing radiation by inhibiting the DNA damage response. Two grade IV glioma cell lines (BORRU and DR177) were characterized for a number of immunocytochemical, karyotypic, proliferative and differentiative parameters. In particular, the expression of a panel of nine stem cell markers was quantified by reverse transcription-polymerase chain reaction (RT-PCR) and flow cytometry. Overall, BORRU and DR177 displayed pronounced and poor stem phenotypes, respectively. In order to improve the therapeutic efficacy of radiation on GSC, the cells were preincubated with a nontoxic concentration of the ATM inhibitors KU-55933 and KU-60019 and then irradiated. BORRU cells were sensitized to radiation and radio-mimetic chemicals by ATM inhibitors whereas DR177 were protected under the same conditions. No sensitization was observed after cell differentiation or to drugs unable to induce double-strand breaks (DSB), indicating that ATM inhibitors specifically sensitize glioma cells possessing stem phenotype to DSB-inducing agents. In conclusion, pharmacological inhibition of ATM may specifically sensitize GSC to DSB-inducing agents while sparing nonstem cells. © 2012 The Authors; Brain Pathology © 2012 International Society of Neuropathology.
Sensitivity of the lane change test as a measure of in-vehicle system demand.
Young, Kristie L; Lenné, Michael G; Williamson, Amy R
2011-05-01
The Lane Change Test (LCT) is one of the growing number of methods developed to quantify driving performance degradation brought about by the use of in-vehicle devices. Beyond its validity and reliability, for such a test to be of practical use, it must also be sensitive to the varied demands of individual tasks. The current study evaluated the ability of several recent LCT lateral control and event detection parameters to discriminate between visual-manual and cognitive surrogate In-Vehicle Information System tasks with different levels of demand. Twenty-seven participants (mean age 24.4 years) completed a PC version of the LCT while performing visual search and math problem solving tasks. A number of the lateral control metrics were found to be sensitive to task differences, but the event detection metrics were less able to discriminate between tasks. The mean deviation and lane excursion measures were able to distinguish between the visual and cognitive tasks, but were less sensitive to the different levels of task demand. The other LCT metrics examined were less sensitive to task differences. A major factor influencing the sensitivity of at least some of the LCT metrics could be the type of lane change instructions given to participants. The provision of clear and explicit lane change instructions and further refinement of its metrics will be essential for increasing the utility of the LCT as an evaluation tool. Copyright © 2010 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Martinez, G T; Rosenauer, A; De Backer, A; Verbeeck, J; Van Aert, S
2014-02-01
High angle annular dark field scanning transmission electron microscopy (HAADF STEM) images provide sample information which is sensitive to the chemical composition. The image intensities indeed scale with the mean atomic number Z. To some extent, chemically different atomic column types can therefore be visually distinguished. However, in order to quantify the atomic column composition with high accuracy and precision, model-based methods are necessary. Therefore, an empirical incoherent parametric imaging model can be used of which the unknown parameters are determined using statistical parameter estimation theory (Van Aert et al., 2009, [1]). In this paper, it will be shown how this method can be combined with frozen lattice multislice simulations in order to evolve from a relative toward an absolute quantification of the composition of single atomic columns with mixed atom types. Furthermore, the validity of the model assumptions are explored and discussed. © 2013 Published by Elsevier B.V. All rights reserved.
Normalized Polarization Ratios for the Analysis of Cell Polarity
Shimoni, Raz; Pham, Kim; Yassin, Mohammed; Ludford-Menting, Mandy J.; Gu, Min; Russell, Sarah M.
2014-01-01
The quantification and analysis of molecular localization in living cells is increasingly important for elucidating biological pathways, and new methods are rapidly emerging. The quantification of cell polarity has generated much interest recently, and ratiometric analysis of fluorescence microscopy images provides one means to quantify cell polarity. However, detection of fluorescence, and the ratiometric measurement, is likely to be sensitive to acquisition settings and image processing parameters. Using imaging of EGFP-expressing cells and computer simulations of variations in fluorescence ratios, we characterized the dependence of ratiometric measurements on processing parameters. This analysis showed that image settings alter polarization measurements; and that clustered localization is more susceptible to artifacts than homogeneous localization. To correct for such inconsistencies, we developed and validated a method for choosing the most appropriate analysis settings, and for incorporating internal controls to ensure fidelity of polarity measurements. This approach is applicable to testing polarity in all cells where the axis of polarity is known. PMID:24963926
NASA Astrophysics Data System (ADS)
Miecznik, Grzegorz; Illing, Rainer; Petroy, Shelley; Sokolik, Irina N.
2005-07-01
Linearly polarized radiation is sensitive to the microphysical properties of aerosols, namely, to the particle- size distribution and refractive index. The discriminating power of polarized radiation increases strongly with the increasing range of scattering angles and the addition of multiple wavelengths. The polarization and directionality of the Earth's reflectances (POLDER) missions demonstrate that some aerosol properties can be successfully derived from spaceborne polarimetric, multiangular measurements at two visible wavelengths. We extend the concept to analyze the retrieval capabilities of a spaceborne instrument with six polarimetric channels at 412, 445, 555, 865, 1250, and 2250 nm, measuring approximately 100 scattering angles covering a range between 50 and 150 deg. Our focus is development of an analysis methodology that can help quantify the benefits of such multiangular and multispectral polarimetric measurements. To that goal we employ a sensitivity metric approach in a framework of the principal-component analysis. The radiances and noise used to construct the sensitivity metric are calculated with the realistic solar flux for representative orbital viewing geometries, accounting for surface reflection from the ground, and statistical and calibration errors of a notional instrument. Spherical aerosol particles covering a range of representative microphysical properties (effective radius, effective variance, real and imaginary parts of the refractive index, single-scattering albedo) are considered in the calculations. We find that there is a limiting threshold for the effective size (approximately 0.7 μm), below which the weak scattering intensity results in a decreased signal-to-noise ratio and minimal polarization sensitivity, precluding reliable aerosol retrievals. For such small particles, close to the Rayleigh scattering limit, the total intensity provides a much stronger aerosol signature than the linear polarization, inspiring retrieval when the combined signals of intensities and the polarization fraction are used. We also find a strong correlation between aerosol parameters, in particular between the effective size and the variance, which forces one to simultaneously retrieve at least these two parameters.
NASA Astrophysics Data System (ADS)
Stippich, Christian; Krob, Florian; Glasmacher, Ulrich Anton; Hackspacher, Peter Christian
2017-04-01
The aim of the research is to quantify the long-term evolution of the western South Atlantic passive continental margin (SAPCM) in SE-Brazil. Excellent onshore outcrop conditions and extensive pre-rift to post-rift archives between São Paulo and Laguna allow a high-precision quantification of exhumation and rock uplift rates, the influencing physical parameters, long-term acting forces, and process-response systems. The research integrates published (Karl et al., 2013) and partly published thermochronological data from Brazil, and tests recently published concepts on the causes of long-term landscape and lithospheric evolution in southern Brazil. Six distinct lithospheric blocks (Laguna, Florianópolis, Curitiba, Ilha Comprida, Peruibe and Santos), which are separated by fracture zones (Karl et al., 2013), are characterized by individual thermochronological age spectra. Furthermore, the thermal evolution derived by numerical modeling indicates variable post-rift exhumation histories of these blocks. In this context, we will provide information on the causes for the complex exhumation history of the Florianópolis and adjacent blocks. Following up on our latest publication (Braun et al., 2016) regarding the effect of variability in rock thermal conductivity on exhumation rate estimates, we performed a sensitivity analysis to quantify the effect of a differentiated lithospheric crust on the thermal evolution of the Florianópolis block versus exhumation rates estimated from modelling a lithospheric uniform crustal block. The long-term landscape evolution models with process rates were computed with the software code PECUBE (Braun, 2003; Braun et al., 2012). By testing model solutions obtained for a multidimensional parameter space against the real thermochronological and geomorphological data set, the most likely combinations of parameters, values, and rates can be constrained. References: Braun, J., 2003. Pecube: A new finite element code to solve the 3D heat transport equation including the effects of a time-varying, finite amplitude surface topography. Computers and Geosciences, v.29, pp.787-794. Braun, J., Stippich, C., Glasmacher, U. A., 2016. The effect of variability in rock thermal conductivity on exhumation rate estimates from thermochronological data. Tectonophysics, v.690, pp.288-297. Braun, J., van der Beek, P., Valla, P., Robert, X., Herman, F., Glotzbach, C., Pedersen, V., Perry, C., Simon-Labric, T., Prigent, C., 2012. Quantifying rates of landscape evolution and tectonic processes by thermochronology and numerical modeling of crustal heat transport using PECUBE. Tectonophysics, v.524-525, pp.1-28. Karl, M., Glasmacher, U.A., Kollenz, S., Franco-Magalhaes, A.O.B., Stockli, D.F., Hackspacher, P., 2013. Evolution of the South Atlantic passive continental margin in southern Brazil derived from zircon and apatite (U-Th-Sm)/He and fission-track data. Tectonophysics, v.604, pp.224-244.
Buller, G; Lutman, M E
1998-08-01
The increasing use of transiently evoked otoacoustic emissions (TEOAE) in large neonatal hearing screening programmes makes a standardized method of response classification desirable. Until now methods have been either subjective or based on arbitrary response characteristics. This study takes an expert system approach to standardize the subjective judgements of an experienced scorer. The method that is developed comprises three stages. First, it transforms TEOAEs from waveforms in the time domain into a simplified parameter set. Second, the parameter set is classified by an artificial neural network that has been taught on a large database of TEOAE waveforms and corresponding expert scores. Third, additional fuzzy logic rules automatically detect probable artefacts in the waveforms and synchronized spontaneous emission components. In this way, the knowledge of the experienced scorer is encapsulated in the expert system software and thereafter can be accessed by non-experts. Teaching and evaluation of the neural network were based on TEOAEs from a database totalling 2190 neonatal hearing screening tests. The database was divided into learning and test groups with 820 and 1370 waveforms, respectively. From each recorded waveform a set of 12 parameters was calculated, representing static and dynamic signal properties. The artificial network was taught with parameter sets of only the learning group. Reproduction of the human scorer classification by the neural net in the learning group showed a sensitivity for detecting screen fails of 99.3% (299 from 301 failed results on subjective scoring) and a specificity for detecting screen passes of 81.1% (421 of 519 pass results). To quantify the post hoc performance of the net (generalization), the test group was then presented to the network input. Sensitivity was 99.4% (474 from 477) and specificity was 87.3% (780 from 893). To check the efficiency of the classification method, a second learning group was selected out of the previous test group, and the previous learning group was used as the test group. Repeating the learning and test procedures yielded 99.3% sensitivity and 80.7% specificity for reproduction, and 99.4% sensitivity and 86.7% specificity for generalization. In all respects, performance was better than for a previously optimized method based simply on cross-correlation between replicate non-linear waveforms. It is concluded that classification methods based on neural networks show promise for application to large neonatal screening programmes utilizing TEOAEs.
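The reported sensitivity and specificity figures follow directly from the confusion counts quoted above; a short Python check (using only the counts given in the abstract) reproduces them:

    def sens_spec(true_pos, total_fail, true_neg, total_pass):
        """Sensitivity = detected screen fails / all fails; specificity = detected passes / all passes."""
        return true_pos / total_fail, true_neg / total_pass

    # learning-group reproduction and test-group generalization, counts as quoted in the abstract
    for label, counts in {"reproduction": (299, 301, 421, 519),
                          "generalization": (474, 477, 780, 893)}.items():
        se, sp = sens_spec(*counts)
        print(f"{label}: sensitivity {se:.1%}, specificity {sp:.1%}")

This prints 99.3%/81.1% and 99.4%/87.3%, matching the values stated in the abstract.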
NASA Astrophysics Data System (ADS)
Harper, E. B.; Stella, J. C.; Fremier, A. K.
2009-12-01
Fremont cottonwood (Populus fremontii) is an important component of semi-arid riparian ecosystems throughout western North America, but its populations are in decline due to flow regulation. Achieving a balance between human resource needs and riparian ecosystem function requires a mechanistic understanding of the multiple geomorphic and biological factors affecting tree recruitment and survival, including the timing and magnitude of river flows, and the concomitant influence on suitable habitat creation and mortality from scour and sedimentation burial. Despite a great deal of empirical research on some components of the system, such as factors affecting cottonwood recruitment, other key components are less studied. Yet understanding the relative influence of the full suite of physical and life-history drivers is critical to modeling whole-population dynamics under changing environmental conditions. We addressed these issues for the Fremont cottonwood population along the Sacramento River, CA using a sensitivity analysis approach to quantify the effect of parameter uncertainty on the outcomes of a patch-based, dynamic population model. Using a broad range of plausible values for 15 model parameters that represent key physical, biological and climatic components of the ecosystem, we ran 1,000 population simulations, a subset of the 14.3 million possible combinations of parameter estimates, to predict the frequency of patch colonization and the total forest habitat expected under current hydrologic conditions after 175 years. Results indicate that Fremont cottonwood populations are highly sensitive to the interactions among flow regime, sedimentation rate and the depth of the capillary fringe (Fig. 1). Estimates of long-term floodplain sedimentation rate would substantially improve model accuracy. Spatial variation in sediment texture was also important to the extent that it determines the depth of the capillary fringe, which regulates the availability of water for germination and adult tree growth. Our sensitivity analyses suggest that models of future scenarios should incorporate regional climate change projections because changes in temperature and the timing and volume of precipitation affect sensitive aspects of the system, including the timing of seed release and spring snowmelt runoff. Figure 1. The relative effects on model predictions of uncertainty around each parameter included in the patch-based population model for Fremont cottonwood.
NASA Astrophysics Data System (ADS)
Cargill, Allison A.; Neil, Kathrine M.; Hondred, John A.; McLamore, Eric S.; Claussen, Jonathan C.
2016-05-01
Enhanced interest in wearable biosensor technology over the past decade is directly related to the increasing prevalence of diabetes and the associated requirement of daily blood glucose monitoring. In this work we investigate the platinum-carbon transduction element used in traditional first-generation glucose biosensors which rely on the concentration of hydrogen peroxide produced by the glucose-glucose oxidase binding scheme. We electrodeposit platinum nanoparticles on a commercially-available screen printed carbon electrode by stepping an applied current between 0 and 7.12 mA/cm2 for a varying number of cycles. Next, we examine the trends in deposition and the effect that the number of deposition cycles has on the sensitivity of electrochemical glucose sensing. Results from this work indicate that applying platinum nanoparticles to screen printed carbon via electrodeposition from a metal salt solution improves overall biosensor sensitivity. This work also pinpoints the amount of platinum (i.e., number of deposition cycles) that maximizes biosensor sensitivity in an effort to minimize the use of the precious metals, viz., platinum, in electrode fabrication. In summary, this work quantifies the relationship between platinum electrodeposition and sensor performance, which is crucial in designing and producing cost-effective sensors.
Predicting a contact's sensitivity to initial conditions using metrics of frictional coupling
Flicek, Robert C.; Hills, David A.; Brake, Matthew Robert W.
2016-09-29
This paper presents a method for predicting how sensitive a frictional contact’s steady-state behavior is to its initial conditions. Previous research has proven that if a contact is uncoupled, i.e. if slip displacements do not influence the contact pressure distribution, then its steady-state response is independent of initial conditions, but if the contact is coupled, the steady-state response depends on initial conditions. In this paper, two metrics for quantifying coupling in discrete frictional systems are examined. These metrics suggest that coupling is dominated by material dissimilarity due to Dundurs’ composite material parameter β when β ≥ 0.2, but geometric mismatch becomes the dominant source of coupling for smaller values of β. Based on a large set of numerical simulations with different contact geometries, material combinations, and friction coefficients, a contact’s sensitivity to initial conditions is found to be correlated with the product of the coupling metric and the friction coefficient. For cyclic shear loading, this correlation is maintained for simulations with different contact geometries, material combinations, and friction coefficients. Furthermore, for cyclic bulk loading, the correlation is only maintained when the contact edge angle is held constant.
Ichinokawa, Momoko; Okamura, Hiroshi; Watanabe, Chikako; Kawabata, Atsushi; Oozeki, Yoshioki
2015-09-01
Restricting human access to a specific wildlife species, community, or ecosystem, i.e., input control, is one of the most popular tools to control human impacts for natural resource management and wildlife conservation. However, quantitative evaluations of input control are generally difficult, because it is unclear how much human impacts can actually be reduced by the control. We present a model framework to quantify the effectiveness of input control using day closures to reduce actual fishing impact by considering the observed fishery dynamics. The model framework was applied to the management of the Pacific stock of the chub mackerel (Scomber japonicus) fishery, in which fishing was suspended for one day following any day when the total mackerel catch exceeded a threshold level. We evaluated the management measure according to the following steps: (1) we fitted the daily observed catch and fishing effort data to a generalized linear model (GLM) or generalized autoregressive state-space model (GASSM), (2) we conducted population dynamics simulations based on annual catches randomly generated from the parameters estimated in the first step, (3) we quantified the effectiveness of day closures by comparing the results of two simulation scenarios with and without day closures, and (4) we conducted additional simulations based on different sets of explanatory variables and statistical models (sensitivity analysis). In the first step, we found that the GASSM explained the observed data far better than the simple GLM. The model parameterized with the estimates from the GASSM demonstrated that the day closures implemented from 2004 to 2009 would have decreased exploitation fractions by ~10% every year and increased the 2009 stock biomass by 37-46% (median), relative to the values without day closures. The sensitivity analysis revealed that the effectiveness of day closures was particularly influenced by autoregressive processes in the fishery data and by positive relationships between fishing effort and total biomass. Those results indicated the importance of human behavioral dynamics under input control in quantifying the conservation benefit of natural resource management and the applicability of our model framework to the evaluation of the input controls that are actually implemented.
Wang, Yutang; Liu, Yuanyuan; Xiao, Chunxia; Liu, Laping; Hao, Miao; Wang, Jianguo; Liu, Xuebo
2014-06-01
This study established a new method for quantitative and qualitative determination of certain components in black rice wine, a traditional Chinese brewed wine. Specifically, we combined solid-phase extraction and high-performance liquid chromatography (HPLC) with triple quadrupole mass spectrometry (MS/MS) to determine 8 phenolic acids, 3 flavonols, and 4 anthocyanins in black rice wine. First, we cleaned samples with OASIS HLB cartridges and optimized the extraction parameters. Next, we performed separation on a SHIM-PACK XR-ODS column (I.D. 3.0 mm × 75 mm, 2.2 μm particle size) with a gradient elution of 50% aqueous acetonitrile (V/V) and water, both containing 0.2% formic acid. We used multiple-reaction monitoring scanning for quantification, with switching of the electrospray ion source polarity between positive and negative modes in a single chromatographic run. All 15 phenolic compounds were properly detected within 38 min under the optimized conditions. Limits of detection ranged from 0.008 to 0.030 mg/L, and average recoveries ranged from 60.8 to 103.1% with relative standard deviation ≤8.6%. We validated the method and found it to be sensitive and reliable for quantifying phenolic compounds in rice wine matrices. In summary, this study developed a new HPLC-MS/MS method for the simultaneous determination of 15 bioactive components in black rice wine. © 2014 Institute of Food Technologists®
Dynamically Coupled Food-web and Hydrodynamic Modeling with ADH-CASM
NASA Astrophysics Data System (ADS)
Piercy, C.; Swannack, T. M.
2012-12-01
Oysters and freshwater mussels are "ecological engineers," modifying the local water quality by filtering zooplankton and other suspended particulate matter from the water column and flow hydraulics by impinging on the near-bed flow environment. The success of sessile, benthic invertebrates such as oysters depends on environmental factors including but not limited to temperature, salinity, and flow regime. Typically food-web and other types of ecological models use flow and water quality data as direct input without regard to the feedback between the ecosystem and the physical environment. The USACE-ERDC has developed a coupled hydrodynamic-ecological modeling approach that dynamically couples a 2-D hydrodynamic and constituent transport model, Adaptive Hydraulics (ADH), with a bioenergetics food-web model, the Comprehensive Aquatics Systems Model (CASM), which captures the dynamic feedback between aquatic ecological systems and the environment. We present modeling results from restored oyster reefs in the Great Wicomico River on the western shore of the Chesapeake Bay, which quantify ecosystem services such as the influence of the benthic ecosystem on water quality. Preliminary results indicate that while the influence of oyster reefs on bulk flow dynamics is limited due to the localized influence of oyster reefs, large reefs and the associated benthic ecosystem can create measurable changes in the concentrations of nitrogen, phosphorus, and carbon in the areas around reefs. We also present a sensitivity analysis to quantify the relative sensitivity of the coupled ADH-CASM model to both hydrodynamic and ecological parameter choice.
Impacts of urbanisation on urban-rural water cycle: a China case study
NASA Astrophysics Data System (ADS)
Wang, Mingna; Singh, Shailesh Kumar; Zhang, Jun-e.; Khu, Soon Thiam
2016-04-01
Urbanization, which essentially creates more impervious surface, is an inevitable part of modern societal development throughout the world. It produces several changes in the natural hydrological cycle by adding new processes. A better understanding of the impacts of urbanization will allow policy makers to balance development and environmental sustainability needs. It also helps underdeveloped countries make strategic decisions in their development process. The objective of this study is to understand and quantify the sensitivity of the urban-rural water cycle to urbanisation. A coupled hydrological model, MODCYCLE, was set up to simulate the effect of changes in landuse on daily streamflow and groundwater and applied to the Tianjin municipality, a rapidly urbanising mega-city on the east coast of China. The model uses landuse, land cover, soil, meteorological and climatic data to represent important parameters in the catchment. The fraction of impervious surface was used as a surrogate to quantify the degree of landuse change. In this work, we analysed the water cycle process under the current urbanization situation in Tianjin. A number of different future development scenarios based on increasing urbanisation intensity are explored. The results show that the expansion of urban areas has a great influence on flow generation and on ET, and that surface runoff is most sensitive to urbanisation. The results of this scenario-based study of future urbanisation effects on the hydrological system will help planners and managers take sound decisions regarding sustainable development.
Hafian, Hilal; Venteo, Lydie; Sukhanova, Alyona; Nabiev, Igor; Lefevre, Benoît; Pluot, Michel
2004-06-01
Human DNA topoisomerase I (topo I) is the molecular target of the camptothecin group of anticancer drugs. Laboratory studies have shown that the cellular response to topo I-targeted drugs depends on the topo I expression and DNA replication rate and the apoptotic pathway activity. In this study, we tested potential indicators of the sensitivity of topo I-targeted drugs in 36 cases of oral squamous cell carcinoma (OSCC). Formalin-fixed, paraffin-embedded tissue sections were immunostained with monoclonal antibodies against Ki-67, p53, and topo I, and with polyclonal antibodies against DNA topoisomerase II-alpha (topo II-alpha). These markers were also tested in 18 epithelial hyperplastic lesions and 18 mild dysplasias. Immunostaining was quantified by the percentage of stained nuclei in each sample (the labeling index); 200 immunoreactive epithelial nuclei were counted per case for each antibody. The results support the possibility of using topo II-alpha staining for assessing the proliferative activity. High expression of topo II-alpha and topo I in OSCCs suggests that they may serve as potential indicators of sensitivity to topo I inhibitors. However, the apoptotic pathway assessed by p53 immunostaining was found to be uninformative. Analysis of the relationship between immunohistochemical results and clinical and pathologic parameters (the T and N stages and differentiation) showed that only the differentiation parameter correlated with the topo I expression rate. Thus, significant increase in the topo I expression in the poorly differentiated OSCCs suggests their higher sensitivity to drug treatment.
Liu, G.; Van der Mark, E. J.; Verberk, J. Q. J. C.; Van Dijk, J. C.
2013-01-01
The objective of this study was to evaluate the application of flow cytometry total cell counts (TCCs) as a parameter to assess microbial growth in drinking water distribution systems and to determine the relationships between different parameters describing the biostability of treated water. A one-year sampling program was carried out in two distribution systems in The Netherlands. Results demonstrated that, in both systems, the biomass differences measured by ATP were not significant. TCC differences were also not significant in treatment plant 1, but decreased slightly in treatment plant 2. TCC values were found to be higher at temperatures above 15°C than at temperatures below 15°C. The correlation study of parameters describing biostability found no relationship among TCC, heterotrophic plate counts, and Aeromonas. Also, no relationship was found between TCC and ATP. Some correlation was found between the subgroup of high nucleic acid (HNA) content bacteria and ATP (R2 = 0.63). Overall, the results demonstrated that TCC is a valuable parameter to assess drinking water biological quality and regrowth; it can directly and sensitively quantify biomass, detect small changes, and can be used to determine the subgroup of active HNA bacteria that are related to ATP. PMID:23819117
Inter-Individual Variability in High-Throughput Risk ...
We incorporate realistic human variability into an open-source high-throughput (HT) toxicokinetics (TK) modeling framework for use in a next-generation risk prioritization approach. Risk prioritization involves rapid triage of thousands of environmental chemicals, most of which have little or no existing TK data. Chemicals are prioritized based on model estimates of hazard and exposure, to decide which chemicals should be first in line for further study. Hazard may be estimated with in vitro HT screening assays, e.g., U.S. EPA’s ToxCast program. Bioactive ToxCast concentrations can be extrapolated to doses that produce equivalent concentrations in body tissues using a reverse TK approach in which generic TK models are parameterized with 1) chemical-specific parameters derived from in vitro measurements and predicted from chemical structure; and 2) with physiological parameters for a virtual population. Here we draw physiological parameters from realistic estimates of distributions of demographic and anthropometric quantities in the modern U.S. population, based on the most recent CDC NHANES data. A Monte Carlo approach, accounting for the correlation structure in physiological parameters, is used to estimate ToxCast equivalent doses for the most sensitive portion of the population. To quantify risk, ToxCast equivalent doses are compared to estimates of exposure rates based on Bayesian inferences drawn from NHANES urinary analyte biomonitoring data. The inclusion
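A schematic of the Monte Carlo step described above is sketched below in Python. This is not the EPA httk implementation: the physiological parameter means, the covariance matrix, the linear steady-state expression, and the bioactive concentration are all invented for illustration. Correlated physiological parameters are sampled, a steady-state plasma concentration per unit dose is computed for each virtual individual, and a protective equivalent dose is taken at a sensitive population quantile by reverse dosimetry.

    import numpy as np

    rng = np.random.default_rng(42)

    # hypothetical means and covariance for [body weight (kg), liver blood flow (L/h), GFR (L/h)]
    mean = np.array([80.0, 90.0, 6.5])
    cov = np.array([[225.0, 45.0, 4.0],
                    [45.0, 100.0, 2.0],
                    [4.0,   2.0,  1.0]])

    pop = rng.multivariate_normal(mean, cov, size=10_000)   # virtual population
    bw, q_liver, gfr = pop.T

    # toy steady-state concentration per unit oral dose; NOT a validated TK model
    css_per_dose = 1.0 / (0.02 * q_liver + 0.05 * gfr)

    bioactive_conc = 3.0                      # illustrative in vitro bioactive concentration
    equiv_dose = bioactive_conc / css_per_dose

    # dose protective of the most sensitive 5% of the virtual population
    print("5th-percentile equivalent dose:", np.quantile(equiv_dose, 0.05))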
The physical and biological basis of quantitative parameters derived from diffusion MRI
2012-01-01
Diffusion magnetic resonance imaging is a quantitative imaging technique that measures the underlying molecular diffusion of protons. Diffusion-weighted imaging (DWI) quantifies the apparent diffusion coefficient (ADC), which was first used to detect early ischemic stroke. However, this does not take into account the directional dependence of diffusion seen in biological systems (anisotropy). Diffusion tensor imaging (DTI) provides a mathematical model of diffusion anisotropy and is widely used. Parameters, including fractional anisotropy (FA), mean diffusivity (MD), and parallel and perpendicular diffusivity, can be derived to provide sensitive, but non-specific, measures of altered tissue structure. They are typically assessed in clinical studies by voxel-based or region-of-interest based analyses. The increasing recognition of the limitations of the diffusion tensor model has led to more complex multi-compartment models such as CHARMED, AxCaliber or NODDI being developed to estimate microstructural parameters including axonal diameter, axonal density and fiber orientations. However, these are not yet in routine clinical use due to lengthy acquisition times. In this review, I discuss how molecular diffusion may be measured using diffusion MRI, the biological and physical bases for the parameters derived from DWI and DTI, how these are used in clinical studies and the prospect of more complex tissue models providing helpful micro-structural information. PMID:23289085
NASA Astrophysics Data System (ADS)
Bennett, Katrina E.; Urrego Blanco, Jorge R.; Jonko, Alexandra; Bohn, Theodore J.; Atchley, Adam L.; Urban, Nathan M.; Middleton, Richard S.
2018-01-01
The Colorado River Basin is a fundamentally important river for society, ecology, and energy in the United States. Streamflow estimates are often provided using modeling tools which rely on uncertain parameters; sensitivity analysis can help determine which parameters impact model results. Despite the fact that simulated flows respond to changing climate and vegetation in the basin, parameter sensitivity of the simulations under climate change has rarely been considered. In this study, we conduct a global sensitivity analysis to relate changes in runoff, evapotranspiration, snow water equivalent, and soil moisture to model parameters in the Variable Infiltration Capacity (VIC) hydrologic model. We combine global sensitivity analysis with a space-filling Latin Hypercube Sampling of the model parameter space and statistical emulation of the VIC model to examine sensitivities to uncertainties in 46 model parameters following a variance-based approach. We find that snow-dominated regions are much more sensitive to uncertainties in VIC parameters. Although baseflow and runoff changes respond to parameters used in previous sensitivity studies, we discover new key parameter sensitivities. For instance, changes in runoff and evapotranspiration are sensitive to albedo, while changes in snow water equivalent are sensitive to canopy fraction and Leaf Area Index (LAI) in the VIC model. It is critical for improved modeling to narrow uncertainty in these parameters through improved observations and field studies. This is important because LAI and albedo are anticipated to change under future climate and narrowing uncertainty is paramount to advance our application of models such as VIC for water resource management.
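The space-filling sampling step described above can be illustrated with a minimal sketch using scipy's quasi-Monte Carlo module; the parameter bounds are placeholders and the actual VIC parameter list, statistical emulator, and variance decomposition are not reproduced here:

    import numpy as np
    from scipy.stats import qmc

    n_params = 46                       # number of VIC parameters varied in the study
    lower = np.zeros(n_params)          # placeholder lower bounds (study-specific in practice)
    upper = np.ones(n_params)           # placeholder upper bounds

    sampler = qmc.LatinHypercube(d=n_params, seed=1)
    unit_sample = sampler.random(n=2000)            # space-filling design in the unit hypercube
    design = qmc.scale(unit_sample, lower, upper)   # rescale to physical parameter ranges

    # 'design' would then drive model runs (or a statistical emulator trained on a subset of runs),
    # and variance-based sensitivity indices would be computed from the emulated outputs
    print(design.shape)   # (2000, 46)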
Effect of microstructure on the elasto-viscoplastic deformation of dual phase titanium structures
NASA Astrophysics Data System (ADS)
Ozturk, Tugce; Rollett, Anthony D.
2018-02-01
The present study is devoted to the creation of a process-structure-property database for dual phase titanium alloys, through a synthetic microstructure generation method and a mesh-free fast Fourier transform based micromechanical model that operates on a discretized image of the microstructure. A sensitivity analysis is performed as a precursor to determine the statistically representative volume element size for creating 3D synthetic microstructures based on additively manufactured Ti-6Al-4V characteristics, which are further modified to expand the database for features of interest, e.g., lath thickness. Sets of titanium hardening parameters are extracted from the literature, and the relative effect of the chosen microstructural features is quantified through comparisons of average and local field distributions.
Acoustic-gravity waves in atmospheric and oceanic waveguides.
Godin, Oleg A
2012-08-01
A theory of guided propagation of sound in layered, moving fluids is extended to include acoustic-gravity waves (AGWs) in waveguides with piecewise continuous parameters. The orthogonality of AGW normal modes is established in moving and motionless media. A perturbation theory is developed to quantify the relative significance of the gravity and fluid compressibility as well as sensitivity of the normal modes to variations in sound speed, flow velocity, and density profiles and in boundary conditions. Phase and group speeds of the normal modes are found to have certain universal properties which are valid for waveguides with arbitrary stratification. The Lamb wave is shown to be the only AGW normal mode that can propagate without dispersion in a layered medium.
Visualizing Chemical Interaction Dynamics of Confined DNA Molecules
NASA Astrophysics Data System (ADS)
Henkin, Gilead; Berard, Daniel; Stabile, Frank; Leslie, Sabrina
We present a novel nanofluidic approach to controllably introducing reagent molecules to interact with confined biopolymers and visualizing the reaction dynamics in real time. By dynamically deforming a flow cell using CLiC (Convex Lens-induced Confinement) microscopy, we are able to tune reaction chamber dimensions from micrometer to nanometer scales. We apply this gentle deformation to load and extend DNA polymers within embedded nanotopographies and visualize their interactions with other molecules in solution. Quantifying the change in configuration of polymers within embedded nanotopographies in response to binding/unbinding of reagent molecules provides new insights into their consequent change in physical properties. CLiC technology enables an ultra-sensitive, massively parallel biochemical analysis platform which can access a broader range of interaction parameters than existing devices.
Normal Stresses, Contraction, and Stiffening in Sheared Elastic Networks
NASA Astrophysics Data System (ADS)
Baumgarten, Karsten; Tighe, Brian P.
2018-04-01
When elastic solids are sheared, a nonlinear effect named after Poynting gives rise to normal stresses or changes in volume. We provide a novel relation between the Poynting effect and the microscopic Grüneisen parameter, which quantifies how stretching shifts vibrational modes. By applying this relation to random spring networks, a minimal model for, e.g., biopolymer gels and solid foams, we find that networks contract or develop tension because they vibrate faster when stretched. The amplitude of the Poynting effect is sensitive to the network's linear elastic moduli, which can be tuned via its preparation protocol and connectivity. Finally, we show that the Poynting effect can be used to predict the finite strain scale where the material stiffens under shear.
Towards simplification of hydrologic modeling: Identification of dominant processes
Markstrom, Steven; Hay, Lauren E.; Clark, Martyn P.
2016-01-01
The Precipitation–Runoff Modeling System (PRMS), a distributed-parameter hydrologic model, has been applied to the conterminous US (CONUS). Parameter sensitivity analysis was used to identify: (1) the sensitive input parameters and (2) particular model output variables that could be associated with the dominant hydrologic process(es). Sensitivity values of 35 PRMS calibration parameters were computed using the Fourier amplitude sensitivity test procedure on 110,000 independent hydrologically based spatial modeling units covering the CONUS and then summarized by process (snowmelt, surface runoff, infiltration, soil moisture, evapotranspiration, interflow, baseflow, and runoff) and model performance statistic (mean, coefficient of variation, and autoregressive lag 1). Identified parameters and processes provide insight into model performance at the location of each unit and allow the modeler to identify the most dominant process on the basis of which processes are associated with the most sensitive parameters. The results of this study indicate that: (1) the choice of performance statistic and output variables has a strong influence on parameter sensitivity, (2) the apparent model complexity to the modeler can be reduced by focusing on those processes that are associated with sensitive parameters and disregarding those that are not, (3) different processes require different numbers of parameters for simulation, and (4) some sensitive parameters influence only one hydrologic process, while others may influence many.
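For readers unfamiliar with the Fourier amplitude sensitivity test (FAST), the sketch below shows how such indices are typically computed with the SALib package for a toy model; the three-parameter problem, its names, ranges, and the stand-in output are purely illustrative, and the actual PRMS parameter set and the 110,000 modeling units of the study are not reproduced:

    import numpy as np
    from SALib.sample import fast_sampler
    from SALib.analyze import fast

    # illustrative three-parameter problem (names and ranges are placeholders)
    problem = {
        "num_vars": 3,
        "names": ["snow_melt_coeff", "soil_moist_max", "gw_routing_coeff"],
        "bounds": [[1.0, 8.0], [0.5, 10.0], [0.001, 0.5]],
    }

    X = fast_sampler.sample(problem, 1000)            # FAST design
    Y = X[:, 0] ** 2 + 3.0 * X[:, 1] + 0.1 * X[:, 2]  # stand-in for a model output statistic

    Si = fast.analyze(problem, Y)
    for name, s1 in zip(problem["names"], Si["S1"]):
        print(f"{name}: first-order FAST index = {s1:.2f}")

In the study, indices of this kind were computed per modeling unit and per output statistic, which is what allows the dominant process to be mapped across the CONUS.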
NASA Astrophysics Data System (ADS)
Laasanen, Mikko S.; Saarakkala, Simo; Töyräs, Juha; Rieppo, Jarno; Jurvelin, Jukka S.
2005-07-01
Previous quantitative 2D-ultrasound imaging studies have demonstrated that the ultrasound reflection measurement of the articular cartilage surface sensitively detects degradation of the collagen network, whereas digestion of cartilage proteoglycans has no significant effect on the ultrasound reflection. In this study, the first aim was to characterize the ability of quantitative 2D-ultrasound imaging to detect site-specific differences in ultrasound reflection and backscattering properties of the cartilage surface and cartilage-bone interface in visually healthy bovine knees (n = 30). As a second aim, we studied factors controlling the ultrasound reflection properties of an intact cartilage surface. The ultrasound reflection coefficient was determined in time (R) and frequency domains (IRC) at the medial femoral condyle, lateral patello-femoral groove, medial tibial plateau and patella using a 20 MHz ultrasound imaging instrument. Furthermore, cartilage surface roughness was quantified by calculating the ultrasound roughness index (URI). The superficial collagen content of the cartilage was determined using an FT-IRIS technique. A significant site-dependent variation was shown in cartilage thickness, ultrasound reflection parameters, URI and superficial collagen content. As compared to R and IRC, URI was a more sensitive parameter in detecting differences between the measurement sites. Ultrasound reflection parameters were not significantly related to superficial collagen content, whereas the correlation between R and URI was high. Ultrasound reflection at the cartilage-bone interface showed insignificant site-dependent variation. The current results suggest that ultrasound reflection from the intact cartilage surface is mainly dependent on the cartilage surface roughness and that the collagen content has a less significant role.
NASA Astrophysics Data System (ADS)
Zhu, Yueying; Alexandre Wang, Qiuping; Li, Wei; Cai, Xu
2017-09-01
The formation of continuous opinion dynamics is investigated based on a virtual gambling mechanism where agents fight for a limited resource. We propose a model with agents holding opinions between -1 and 1. Agents are segregated into two cliques according to the sign of their opinions. Local communication happens only when the opinion distance between corresponding agents is no larger than a pre-defined confidence threshold. Theoretical analysis regarding special cases provides a deep understanding of the roles of both the resource allocation parameter and the confidence threshold in the formation of opinion dynamics. For a sparse network, the evolution of opinion dynamics is negligible in the region of low confidence threshold when mindless agents are absent. Numerical results also imply that, in the presence of economic agents, a high confidence threshold is required for apparent clustering of agents in opinion. Moreover, a consensus state is generated only when the following three conditions are satisfied simultaneously: mindless agents are absent, the resource is concentrated in one clique, and the confidence threshold tends to a critical value of 1.25 + 2/k_a (for k_a > 8/3, where k_a is the average number of friends of individual agents). For a fixed confidence threshold and resource allocation parameter, the most chaotic steady state of the dynamics happens when the fraction of mindless agents is about 0.7. It is also demonstrated that economic agents are more likely to win at gambling, compared to mindless ones. Finally, the importance of the three involved parameters in establishing the uncertainty of the model response is quantified in terms of Latin hypercube sampling-based sensitivity analysis.
Hu, Yida; Ahmad, Salahuddin; Ali, Imad
2012-01-01
With the increasing popularity and complexity of intensity-modulated radiation therapy (IMRT) delivery modalities, including regular and arc therapies, there is a growing challenge in validating the accuracy of dose distributions. Gafchromic films have superior characteristics for dose verification over other conventional dosimeters. In order to optimize the use of Gafchromic films in clinical IMRT quality assurance procedures, the scanning parameters of EBT and EBT2 films with a flatbed scanner were investigated. The effects of several parameters including scanning position, orientation, uniformity, film sensitivity and optical density (OD) growth after irradiation were quantified. The profiles of the EBT and EBT2 films had a noise level of 0.6% and 0.7%, respectively. Considerable orientation dependence was observed, and the scanner value difference between landscape and portrait modes was about 12% and 10% for EBT and EBT2 films, respectively. The highest response sensitivity was observed using digitized red color images of the EBT2 film scanned in landscape mode. The total system non-uniformity, composed of contributions from the film and the scanner, was less than 1.7%. OD variations showed that the EBT gray-scale response grew more slowly but reached a higher growth value of 15%, compared with 12% for the EBT2 gray scale, at long times (480 hours) post-irradiation. The EBT film using the red color channel showed the smallest growth, with OD increasing by up to 3% within 3 days after irradiation and taking one week to stabilize.
Impulse oscillometry: a measure for airway obstruction.
Vink, Geraldine R; Arets, Hubertus G M; van der Laag, Johan; van der Ent, Cornelis K
2003-03-01
The impulse oscillometry system (IOS) was introduced as a new technique to assess airflow obstruction in patients who are not able to perform forced breathing maneuvers, e.g., subjects with cerebral palsy or severe mental retardation, and young children. This study evaluates the sensitivity and specificity of IOS parameters to quantify changes in airflow obstruction in comparison with forced expiratory volume in the first second (FEV1) and peak expiratory flow (PEF) measurements. Measurements of FEV1, PEF, and resistance (R) and reactance (X) at frequencies of 5-35 Hz were performed in 19 children with asthma before, during, and after methacholine challenge and subsequent bronchodilatation. All parameters changed significantly during the tests. Values of R5 and R10 correlated with FEV1 (r = -0.71 and -0.73, respectively, P < 0.001), as did values of X5 and X10 (r = 0.52 and 0.57, respectively, P < 0.01). Changes in R preceded changes in PEF and FEV1 during methacholine challenge. The area under the receiver operating characteristic (ROC) curve to predict a 15% fall in FEV1 showed better sensitivity and specificity for R5 (area under the curve, 0.85) compared to PEF (0.79) or R10 (0.73). We conclude that IOS parameters can easily be used as an indirect measure of airflow obstruction. This might be helpful in patients who are not able to perform forced breathing maneuvers. In individual subjects, R values measured at 5 Hz proved to be superior to PEF measurements in the detection of a 15% fall in FEV1. Copyright 2003 Wiley-Liss, Inc.
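The receiver operating characteristic analysis reported here can, in principle, be reproduced with standard tools; the sketch below uses synthetic stand-in data (not the study's measurements) to show how an area under the ROC curve for R5 as a predictor of a 15% fall in FEV1 would be computed:

    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    rng = np.random.default_rng(3)

    # synthetic stand-in data: 1 = a >=15% fall in FEV1 occurred at this measurement, 0 = it did not
    fev1_fall_15 = rng.integers(0, 2, size=100)
    # synthetic R5 values, loosely higher when obstruction (and hence the FEV1 fall) is present
    r5 = 0.6 + 0.3 * fev1_fall_15 + rng.normal(0, 0.2, size=100)

    auc = roc_auc_score(fev1_fall_15, r5)          # area under the ROC curve
    fpr, tpr, thresholds = roc_curve(fev1_fall_15, r5)
    print(f"AUC for R5 predicting a 15% FEV1 fall: {auc:.2f}")

In the study this AUC was 0.85 for R5, compared with 0.79 for PEF and 0.73 for R10.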
An imaging-based computational model for simulating angiogenesis and tumour oxygenation dynamics
NASA Astrophysics Data System (ADS)
Adhikarla, Vikram; Jeraj, Robert
2016-05-01
Tumour growth, angiogenesis and oxygenation vary substantially among tumours and significantly impact their treatment outcome. Imaging provides a unique means of investigating these tumour-specific characteristics. Here we propose a computational model to simulate tumour-specific oxygenation changes based on molecular imaging data. Tumour oxygenation in the model is reflected by the perfused vessel density. Tumour growth depends on its doubling time (T_d) and the imaged proliferation. The perfused vessel density recruitment rate depends on the perfused vessel density around the tumour (sMVD_tissue) and the maximum VEGF concentration for complete vessel dysfunctionality (VEGF_max). The model parameters were benchmarked to reproduce the dynamics of tumour oxygenation over its entire lifecycle, which is the most challenging test. Tumour oxygenation dynamics were quantified using the peak pO2 (pO2_peak) and the time to peak pO2 (t_peak). Sensitivity of tumour oxygenation to model parameters was assessed by changing each parameter by 20%. t_peak was found to be more sensitive to the tumour cell line related doubling time (~30%) than to the tissue vasculature density (~10%). On the other hand, pO2_peak was found to be similarly influenced by the above tumour- and vasculature-associated parameters (~30-40%). Interestingly, both pO2_peak and t_peak were only marginally affected by VEGF_max (~5%). The development of a poorly oxygenated (hypoxic) core with tumour growth increased VEGF accumulation, thus disrupting vessel perfusion and further increasing hypoxia with time. The model, with its benchmarked parameters, is applied to hypoxia imaging data obtained using a [64Cu]Cu-ATSM PET scan of a mouse tumour, and the temporal development of the vasculature and hypoxia maps is shown. The work underscores the importance of using tumour-specific input for analysing tumour evolution. An extended model incorporating therapeutic effects can serve as a powerful tool for analysing tumour response to anti-angiogenic therapies.
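The one-at-a-time sensitivity test described above (perturbing each parameter by 20% and recording the relative change in pO2_peak and t_peak) can be expressed compactly. The sketch below assumes a generic model function and a toy parameter set, since the full imaging-based oxygenation simulator is not reproduced here:

    def relative_sensitivity(model, baseline, outputs=("pO2_peak", "t_peak"), delta=0.2):
        """One-at-a-time sensitivity: % change in each output for a +20% change in each parameter."""
        ref = model(**baseline)
        result = {}
        for name, value in baseline.items():
            perturbed = dict(baseline, **{name: value * (1.0 + delta)})
            out = model(**perturbed)
            result[name] = {k: 100.0 * (out[k] - ref[k]) / ref[k] for k in outputs}
        return result

    # toy stand-in for the oxygenation model (NOT the imaging-based simulator of the paper)
    def toy_model(T_d, sMVD_tissue, VEGF_max):
        return {"pO2_peak": 40.0 * sMVD_tissue / (1.0 + 0.1 * VEGF_max),
                "t_peak": 5.0 * T_d / sMVD_tissue}

    print(relative_sensitivity(toy_model, {"T_d": 3.0, "sMVD_tissue": 0.5, "VEGF_max": 10.0}))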
Juszczak, Grzegorz R; Lisowski, Paweł; Sliwa, Adam T; Swiergiel, Artur H
2008-10-20
In behavioral pharmacology, two problems are encountered when quantifying animal behavior: 1) reproducibility of the results across laboratories, especially in the case of manual scoring of animal behavior; and 2) the presence of different behavioral idiosyncrasies, common in genetically different animals, that mask or mimic the effects of the experimental treatments. This study aimed to develop an automated method enabling simultaneous assessment of the duration of immobility in mice and the depth of body submersion during swimming by means of a computer-assisted video analysis system (EthoVision from Noldus). We tested and compared parameters of immobility based either on the speed of an object (animal) movement or on the percentage change in the object's area between consecutive video frames. We also examined the effects of an erosion-dilation filtering procedure on the results obtained with both parameters of immobility. Finally, we proposed an automated method enabling assessment of the depth of body submersion that reflects swimming performance. It was found that both parameters of immobility were sensitive to the effect of an antidepressant, desipramine, and that they yielded similar results when applied to mice that are good swimmers. The speed parameter was, however, more sensitive and more reliable because it depended less on random noise in the video image. Also, it was established that applying the erosion-dilation filtering procedure increased the reliability of both parameters of immobility. In the case of mice that were poor swimmers, the assessed duration of immobility differed depending on the chosen parameter, thus resulting in the presence or lack of differences between two lines of mice that differed in swimming performance. These results substantiate the need for assessing swimming performance when the duration of immobility in the forced swim test (FST) is compared in lines that differ in their swimming "styles". Testing swimming performance can also be important in studies investigating the effects of swim stress on other behavioral or physiological parameters because poor swimming abilities displayed by some lines can increase the severity of swim stress, masking the between-line differences or the main treatment effects.
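A minimal sketch of the two immobility criteria compared in the study, given a sequence of binary object masks already segmented from the video, is shown below; the tracking pipeline, filtering, and threshold values are illustrative assumptions, not the EthoVision settings used by the authors:

    import numpy as np

    def immobility_fraction(masks, fps, area_change_thresh=5.0, speed_thresh=2.0):
        """Fraction of frame transitions scored immobile by (a) % area change and (b) centroid speed."""
        masks = np.asarray(masks, dtype=bool)            # shape: (frames, height, width)
        areas = masks.sum(axis=(1, 2)).astype(float)
        ys, xs = np.indices(masks.shape[1:])
        cy = (masks * ys).sum(axis=(1, 2)) / areas       # centroid rows
        cx = (masks * xs).sum(axis=(1, 2)) / areas       # centroid columns

        area_change = 100.0 * np.abs(np.diff(areas)) / areas[:-1]   # % area change between frames
        speed = np.hypot(np.diff(cy), np.diff(cx)) * fps            # pixels per second

        return {"area_based": np.mean(area_change < area_change_thresh),
                "speed_based": np.mean(speed < speed_thresh)}

    # toy example: a nearly static 'mouse' blob over 10 frames
    frames = np.zeros((10, 50, 50), dtype=bool)
    frames[:, 20:30, 20:30] = True
    print(immobility_fraction(frames, fps=25))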
NASA Astrophysics Data System (ADS)
Khorashadi Zadeh, Farkhondeh; Nossent, Jiri; van Griensven, Ann; Bauwens, Willy
2017-04-01
Parameter estimation is a major concern in hydrological modeling, which may limit the use of complex simulators with a large number of parameters. To support the selection of parameters to include in or exclude from the calibration process, Global Sensitivity Analysis (GSA) is widely applied in modeling practice. Based on the results of GSA, the influential and the non-influential parameters are identified (i.e. parameter screening). Nevertheless, the choice of the screening threshold below which parameters are considered non-influential is a critical issue, which has recently received more attention in the GSA literature. In theory, the sensitivity index of a non-influential parameter has a value of zero. However, since numerical approximations, rather than analytical solutions, are utilized in GSA methods to calculate the sensitivity indices, small but non-zero values may be obtained for the indices of non-influential parameters. In order to assess the threshold that identifies non-influential parameters in GSA methods, we propose to calculate the sensitivity index of a "dummy parameter". This dummy parameter has no influence on the model output, but will have a non-zero sensitivity index, representing the error due to the numerical approximation. Hence, the parameters whose indices are above the sensitivity index of the dummy parameter can be classified as influential, whereas the parameters whose indices are below this index are within the range of the numerical error and should be considered as non-influential. To demonstrate the effectiveness of the proposed "dummy parameter approach", 26 parameters of a Soil and Water Assessment Tool (SWAT) model are selected to be analyzed and screened, using the variance-based Sobol' and moment-independent PAWN methods. The sensitivity index of the dummy parameter is calculated from sampled data, without changing the model equations. Moreover, the calculation does not even require additional model evaluations for the Sobol' method. A formal statistical test validates these parameter screening results. Based on the dummy parameter screening, 11 model parameters are identified as influential. Therefore, it can be denoted that the "dummy parameter approach" can facilitate the parameter screening process and provide guidance for GSA users to define a screening threshold, with only limited additional resources. Key words: Parameter screening, Global sensitivity analysis, Dummy parameter, Variance-based method, Moment-independent method
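The "dummy parameter approach" can be sketched with the SALib package: a parameter that the model never reads is appended to the problem definition, and its estimated Sobol' index serves as the screening threshold. The toy model, parameter names, and ranges below are placeholders, not the SWAT setup of the study:

    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {
        "num_vars": 4,
        "names": ["cn2", "esco", "gw_delay", "dummy"],     # 'dummy' is never used by the model
        "bounds": [[35, 98], [0.0, 1.0], [0.0, 500.0], [0.0, 1.0]],
    }

    X = saltelli.sample(problem, 1024)                     # Sobol' design (includes the dummy column)

    def toy_model(x):
        cn2, esco, gw_delay, _dummy = x                    # the dummy column is ignored on purpose
        return 0.02 * cn2 + 2.0 * esco + 0.001 * gw_delay

    Y = np.apply_along_axis(toy_model, 1, X)
    Si = sobol.analyze(problem, Y)

    threshold = Si["ST"][problem["names"].index("dummy")]  # numerical-error level of the estimator
    for name, st in zip(problem["names"], Si["ST"]):
        flag = "influential" if st > threshold else "non-influential"
        print(f"{name}: ST = {st:.3f} ({flag})")

Because the dummy column is part of the same sample matrix, its index reflects exactly the numerical approximation error of the estimator, which is what makes it a natural screening threshold.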
Model-based cross-correlation search for gravitational waves from Scorpius X-1
NASA Astrophysics Data System (ADS)
Whelan, John T.; Sundaresan, Santosh; Zhang, Yuanhao; Peiris, Prabath
2015-05-01
We consider the cross-correlation search for periodic gravitational waves and its potential application to the low-mass x-ray binary Sco X-1. This method coherently combines data not only from different detectors at the same time, but also data taken at different times from the same or different detectors. By adjusting the maximum allowed time offset between a pair of data segments to be coherently combined, one can tune the method to trade off sensitivity and computing costs. In particular, the detectable signal amplitude scales as the inverse fourth root of this coherence time. The improvement in amplitude sensitivity for a search with a maximum time offset of one hour, compared with a directed stochastic background search with 0.25-Hz-wide bins, is about a factor of 5.4. We show that a search of one year of data from the Advanced LIGO and Advanced Virgo detectors with a coherence time of one hour would be able to detect gravitational waves from Sco X-1 at the level predicted by torque balance over a range of signal frequencies from 30 to 300 Hz; if the coherence time could be increased to ten hours, the range would be 20 to 500 Hz. In addition, we consider several technical aspects of the cross-correlation method: We quantify the effects of spectral leakage and show that nearly rectangular windows still lead to the most sensitive search. We produce an explicit parameter-space metric for the cross-correlation search, in general, and as applied to a neutron star in a circular binary system. We consider the effects of using a signal template averaged over unknown amplitude parameters: The quantity to which the search is sensitive is a given function of the intrinsic signal amplitude and the inclination of the neutron-star rotation axis to the line of sight, and the peak of the expected detection statistic is systematically offset from the true signal parameters. Finally, we describe the potential loss of signal-to-noise ratio due to unmodeled effects such as signal phase acceleration within the Fourier transform time scale and gradual evolution of the spin frequency.
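Since the detectable amplitude scales as the inverse fourth root of the coherence time, the gain from lengthening the maximum allowed time offset can be checked with a one-line calculation (a back-of-the-envelope illustration of the quoted scaling, not a substitute for the full sensitivity estimates in the paper):

    # detectable amplitude h_min scales as T_coh**(-1/4), so the gain from lengthening
    # the coherence time from T1 to T2 is (T2 / T1)**(1/4)
    T1, T2 = 1.0, 10.0               # hours
    gain = (T2 / T1) ** 0.25
    print(f"amplitude sensitivity improves by a factor of {gain:.2f}")   # ~1.78

This is the trade-off the authors describe: longer coherence times buy sensitivity at the fourth-root rate, at the price of rapidly growing computing cost.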
Comparative Sensitivity Analysis of Muscle Activation Dynamics
Günther, Michael; Götz, Thomas
2015-01-01
We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative for a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties. Other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach to treating initial conditions as parameters and to calculating second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method for identifying particularly low sensitivities to detect superfluous parameters. An experimenter could use it for identifying particularly high sensitivities to improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Other than Zajac's model, Hatze's model can, however, reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation. PMID:26417379
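For readers who want to reproduce this kind of first-order sensitivity analysis for an ODE model, a generic finite-difference sketch is shown below. It uses a simple first-order activation ODE of the Zajac type as a stand-in; the actual Hatze and Zajac equations, parameter values, and the second-order and global analyses of the paper are not reproduced:

    import numpy as np
    from scipy.integrate import solve_ivp

    def activation_ode(t, a, tau, beta, u):
        """Zajac-style first-order activation dynamics (illustrative form only)."""
        return (u - a * (beta + (1.0 - beta) * u)) / tau

    def solve(params, t_eval, a0=0.01):
        sol = solve_ivp(activation_ode, (t_eval[0], t_eval[-1]), [a0],
                        t_eval=t_eval, args=tuple(params.values()), rtol=1e-8)
        return sol.y[0]

    def sensitivities(params, t_eval, rel_step=1e-4):
        """d a(t) / d p_i, approximated by central finite differences."""
        base = solve(params, t_eval)
        sens = {}
        for name, value in params.items():
            h = rel_step * max(abs(value), 1e-8)
            up = solve(dict(params, **{name: value + h}), t_eval)
            down = solve(dict(params, **{name: value - h}), t_eval)
            sens[name] = (up - down) / (2.0 * h)
        return base, sens

    t = np.linspace(0.0, 0.5, 101)
    a, S = sensitivities({"tau": 0.015, "beta": 0.6, "u": 1.0}, t)
    print({k: float(np.max(np.abs(v))) for k, v in S.items()})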
Water use at pulverized coal power plants with postcombustion carbon capture and storage.
Zhai, Haibo; Rubin, Edward S; Versteeg, Peter L
2011-03-15
Coal-fired power plants account for nearly 50% of U.S. electricity supply and about a third of U.S. emissions of CO(2), the major greenhouse gas (GHG) associated with global climate change. Thermal power plants also account for 39% of all freshwater withdrawals in the U.S. To reduce GHG emissions from coal-fired plants, postcombustion carbon capture and storage (CCS) systems are receiving considerable attention. Current commercial amine-based capture systems require water for cooling and other operations that add to power plant water requirements. This paper characterizes and quantifies water use at coal-burning power plants with and without CCS and investigates key parameters that influence water consumption. Analytical models are presented to quantify water use for major unit operations. Case study results show that, for power plants with conventional wet cooling towers, approximately 80% of total plant water withdrawals and 86% of plant water consumption is for cooling. The addition of an amine-based CCS system would approximately double the consumptive water use of the plant. Replacing wet towers with air-cooled condensers for dry cooling would reduce plant water use by about 80% (without CCS) to about 40% (with CCS). However, the cooling system capital cost would approximately triple, although costs are highly dependent on site-specific characteristics. The potential for water use reductions with CCS is explored via sensitivity analyses of plant efficiency and other key design parameters that affect water resource management for the electric power industry.
Optimizing financial effects of HIE: a multi-party linear programming approach.
Sridhar, Srikrishna; Brennan, Patricia Flatley; Wright, Stephen J; Robinson, Stephen M
2012-01-01
To describe an analytical framework for quantifying the societal savings and financial consequences of a health information exchange (HIE), and to demonstrate its use in designing pricing policies for sustainable HIEs. We developed a linear programming model to (1) quantify the financial worth of HIE information to each of its participating institutions and (2) evaluate three HIE pricing policies: fixed-rate annual, charge per visit, and charge per look-up. We considered three desired outcomes of HIE-related emergency care (modeled as parameters): preventing unrequired hospitalizations, reducing duplicate tests, and avoiding emergency department (ED) visits. We applied this framework to 4639 ED encounters over a 12-month period in three large EDs in Milwaukee, Wisconsin, using Medicare/Medicaid claims data, public reports of hospital admissions, published payer mix data, and use data from a not-for-profit regional HIE. For this HIE, data accesses produced net financial gains for all providers and payers. Gains, due to HIE, were more significant for providers with more health maintenance organizations patients. Reducing unrequired hospitalizations and avoiding repeat ED visits were responsible for more than 70% of the savings. The results showed that fixed annual subscriptions can sustain this HIE, while ensuring financial gains to all participants. Sensitivity analysis revealed that the results were robust to uncertainties in modeling parameters. Our specific HIE pricing recommendations depend on the unique characteristics of this study population. However, our main contribution is the modeling approach, which is broadly applicable to other populations.
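The multi-party linear program itself is not given in the abstract; the toy sketch below (invented numbers and a deliberately simplified objective) only illustrates the general shape of such a model with scipy: choose per-institution subscription prices that cover the HIE's operating cost while leaving each participant a fixed share of its HIE-derived savings.

    import numpy as np
    from scipy.optimize import linprog

    savings = np.array([120_000.0, 80_000.0, 50_000.0])   # hypothetical annual savings per participant
    operating_cost = 90_000.0                             # hypothetical annual HIE operating cost

    # decision variables: annual subscription price charged to each participant
    c = np.ones(3)                                        # minimize total fees collected
    A_ub = [-np.ones(3)]                                  # -sum(p) <= -cost  <=>  sum(p) >= cost
    b_ub = [-operating_cost]
    bounds = [(0.0, 0.5 * s) for s in savings]            # each participant keeps >= 50% of its savings

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    print(res.x, res.x.sum())

The study's actual model values HIE data accesses per encounter type and payer mix and then compares fixed-rate, per-visit, and per-look-up pricing, which requires a richer constraint set than this illustration.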
Zhou, Bin; Zhao, Bin
2014-01-01
It is difficult to evaluate and compare interventions for reducing exposure to air pollutants, including polycyclic aromatic hydrocarbons (PAHs), a widely found air pollutant in both indoor and outdoor air. This study presents the first application of the Monte Carlo population exposure assessment model to quantify the effects of different intervention strategies on inhalation exposure to PAHs and the associated lung cancer risk. The method was applied to the population in Beijing, China, in the year 2006. Several intervention strategies were designed and studied, including atmospheric cleaning, smoking prohibition indoors, use of clean fuel for cooking, enhancing ventilation while cooking and use of indoor cleaners. Their performances were quantified by population attributable fraction (PAF) and potential impact fraction (PIF) of lung cancer risk, and the changes in indoor PAH concentrations and annual inhalation doses were also calculated and compared. The results showed that atmospheric cleaning and use of indoor cleaners were the two most effective interventions. The sensitivity analysis showed that several input parameters had major influence on the modeled PAH inhalation exposure and the rankings of different interventions. The ranking was reasonably robust for the remaining majority of parameters. The method itself can be extended to other pollutants and in different places. It enables the quantitative comparison of different intervention strategies and would benefit intervention design and relevant policy making. PMID:24416436
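Population attributable fraction (PAF) and potential impact fraction (PIF) have standard definitions; the short sketch below (with made-up risk numbers) shows how they would be computed once the exposure model has produced baseline and post-intervention lung-cancer risk estimates:

    def paf(risk_baseline, risk_unexposed):
        """Population attributable fraction: share of baseline risk attributable to the exposure."""
        return (risk_baseline - risk_unexposed) / risk_baseline

    def pif(risk_baseline, risk_after_intervention):
        """Potential impact fraction: share of baseline risk removed by a given intervention."""
        return (risk_baseline - risk_after_intervention) / risk_baseline

    # made-up lifetime lung-cancer risks per 100,000 for illustration only
    print(paf(risk_baseline=85.0, risk_unexposed=60.0))            # ~0.29
    print(pif(risk_baseline=85.0, risk_after_intervention=70.0))   # ~0.18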
Modeling Real-Time Human-Automation Collaborative Scheduling of Unmanned Vehicles
2013-06-01
...that they can only take into account those quantifiable variables, parameters, objectives, and constraints identified in the design stages that were deemed to be critical. Previous... increased training and operating costs (Haddal & Gertler, 2010) and challenges in meeting the ever-increasing demand for more UV operations (U.S. Air...
Impacts of climate change on water quantity and quality in Rhineland-Palatinate/Germany
NASA Astrophysics Data System (ADS)
Casper, M. C.; Grigoryan, G. V.
2009-04-01
The Ministry of the Environment of Rhineland-Palatinate, Germany, launched an interdisciplinary research project dealing with "climate and land use change in Rhineland-Palatinate" (KlimLandRP). The aim of KlimLandRP is to specify adaptation strategies and to find current research gaps. The University of Trier/Germany undertakes the task of quantifying the impact of climate change on the hydrological cycle and on water quality. In the first phase of the project (2008/2009) the models STOFFBILANZ and WaSiM-ETH are applied. WETTREG projections (2050/2100) and new high-resolution CCLM (2015-2024) projections for Rhineland-Palatinate are used to indicate the spectrum of climate change. Possible land use scenarios for agricultural regions are furthermore adopted. Using STOFFBILANZ it is possible to get approximate spatial information about the present and future distribution of the water, nitrate and phosphorus balances in Rhineland-Palatinate and to identify sensitive regions. Based on these results, regions whose water resources are vulnerable are identified and adaptations are proposed. With the application of WaSiM-ETH the impact of climate change on the water balance of forest sites is quantified. The relation between climate parameters and tree growth indices is applied in forest management planning, particularly for forest site mapping. In the future, the rainfall-runoff model LARSIM will also be applied to quantify the impacts of climate change on the hydrological cycle of mesoscale catchment basins.
NASA Astrophysics Data System (ADS)
Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Branch, Oliver; Attinger, Sabine; Thober, Stephan
2016-09-01
Land surface models incorporate a large number of process descriptions, containing a multitude of parameters. These parameters are typically read from tabulated input files. Some of these parameters might be fixed numbers in the computer code though, which hinders model agility during calibration. Here we identified 139 hard-coded parameters in the model code of the Noah land surface model with multiple process options (Noah-MP). We performed a Sobol' global sensitivity analysis of Noah-MP for a specific set of process options, which includes 42 out of the 71 standard parameters and 75 out of the 139 hard-coded parameters. The sensitivities of the hydrologic output fluxes latent heat and total runoff as well as their component fluxes were evaluated at 12 catchments within the United States with very different hydrometeorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its applicable standard parameters (i.e., Sobol' indexes above 1%). The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for direct evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities because of their tight coupling via the water balance. A calibration of Noah-MP against either of these fluxes should therefore give comparable results. Moreover, these fluxes are sensitive to both plant and soil parameters. Calibrating only soil parameters, for example, thus limits the ability to derive realistic model parameters. It is thus recommended to include the most sensitive hard-coded model parameters that were exposed in this study when calibrating Noah-MP.
NASA Astrophysics Data System (ADS)
Arnold, R. T.; Troost, Christian; Berger, Thomas
2015-01-01
Irrigation with surface water enables Chilean agricultural producers to generate one of the country's most important economic exports. The Chilean water code established tradable water rights as a mechanism to allocate water amongst farmers and other water-use sectors. It remains contested whether this mechanism is effective and many authors have raised equity concerns regarding its impact on water users. For example, speculative hoarding of water rights in expectation of their increasing value has been described. This paper demonstrates how farmers can hoard water rights as a risk management strategy for variable water supply, for example, due to the cycles of El Niño or as a consequence of climate change. While farmers with insufficient water rights can rely on unclaimed water during conditions of normal water availability, drought years disproportionately reduce their supply of irrigation water and thereby farm profitability. This study uses a simulation model that consists of a hydrological balance model component and a multiagent farm decision and production component. Both model components are parameterized with empirical data, while uncertain parameters are calibrated. The study demonstrates a thorough quantification of parameter uncertainty, using global sensitivity analysis and multiple behavioral parameter scenarios.
Modeling and measuring the visual detection of ecologically relevant motion by an Anolis lizard.
Pallus, Adam C; Fleishman, Leo J; Castonguay, Philip M
2010-01-01
Motion in the visual periphery of lizards, and other animals, often causes a shift of visual attention toward the moving object. This behavioral response must be more responsive to relevant motion (predators, prey, conspecifics) than to irrelevant motion (windblown vegetation). Early stages of visual motion detection rely on simple local circuits known as elementary motion detectors (EMDs). We presented a computer model consisting of a grid of correlation-type EMDs, with videos of natural motion patterns, including prey, predators and windblown vegetation. We systematically varied the model parameters and quantified the relative response to the different classes of motion. We carried out behavioral experiments with the lizard Anolis sagrei and determined that their visual response could be modeled with a grid of correlation-type EMDs with a spacing parameter of 0.3 degrees visual angle, and a time constant of 0.1 s. The model with these parameters gave substantially stronger responses to relevant motion patterns than to windblown vegetation under equivalent conditions. However, the model is sensitive to local contrast and viewer-object distance. Therefore, additional neural processing is probably required for the visual system to reliably distinguish relevant from irrelevant motion under a full range of natural conditions.
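A minimal sketch of a correlation-type (Hassenstein-Reichardt) elementary motion detector of the kind described above: two neighbouring inputs are each low-pass filtered to provide delayed copies, cross-multiplied, and subtracted to give a direction-selective output. The 0.3-degree spacing and 0.1 s time constant reported above enter as the receptor offset and filter time constant; the drifting-grating stimulus and all other settings are invented for illustration.

```python
# Minimal correlation-type (Hassenstein-Reichardt) elementary motion detector.
# Each receptor signal is delayed by a first-order low-pass filter (time
# constant tau) and cross-correlated with the other receptor; subtracting the
# two mirror-symmetric half-detectors yields a direction-selective response.
import numpy as np

def emd_response(x1, x2, dt, tau=0.1):
    """x1, x2: luminance time series at two receptors; returns EMD output."""
    alpha = dt / (tau + dt)          # discrete first-order low-pass coefficient
    d1 = np.zeros_like(x1)
    d2 = np.zeros_like(x2)
    for t in range(1, len(x1)):      # delayed (filtered) copies of each input
        d1[t] = d1[t - 1] + alpha * (x1[t] - d1[t - 1])
        d2[t] = d2[t - 1] + alpha * (x2[t] - d2[t - 1])
    return d1 * x2 - d2 * x1         # opponent multiplication

# Invented stimulus: a sinusoidal grating drifting across receptors 0.3 deg apart
dt, spacing_deg, speed_deg_s, wavelength_deg = 0.001, 0.3, 5.0, 2.0
t = np.arange(0.0, 2.0, dt)
x1 = np.sin(2 * np.pi * (speed_deg_s * t) / wavelength_deg)                # receptor 1
x2 = np.sin(2 * np.pi * (speed_deg_s * t - spacing_deg) / wavelength_deg)  # receptor 2, offset
print("mean response (sign encodes direction):", emd_response(x1, x2, dt).mean())
```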
Generalized correlation integral vectors: A distance concept for chaotic dynamical systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haario, Heikki, E-mail: heikki.haario@lut.fi; Kalachev, Leonid, E-mail: KalachevL@mso.umt.edu; Hakkarainen, Janne
2015-06-15
Several concepts of fractal dimension have been developed to characterise properties of attractors of chaotic dynamical systems. Numerical approximations of them must be calculated by finite samples of simulated trajectories. In principle, the quantities should not depend on the choice of the trajectory, as long as it provides properly distributed samples of the underlying attractor. In practice, however, the trajectories are sensitive with respect to varying initial values, small changes of the model parameters, to the choice of a solver, numeric tolerances, etc. The purpose of this paper is to present a statistically sound approach to quantify this variability. We modify the concept of correlation integral to produce a vector that summarises the variability at all selected scales. The distribution of this stochastic vector can be estimated, and it provides a statistical distance concept between trajectories. Here, we demonstrate the use of the distance for the purpose of estimating model parameters of a chaotic dynamic model. The methodology is illustrated using computational examples for the Lorenz 63 and Lorenz 95 systems, together with a framework for Markov chain Monte Carlo sampling to produce posterior distributions of model parameters.
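The classical correlation sum underlying such correlation-integral vectors can be sketched as follows; this is a generic implementation on a toy trajectory, not the authors' code, and the dimension estimate from the log-log slope is only a rough illustration.

```python
# Generic correlation-sum sketch: C(r) = 2/(N(N-1)) * #{(i<j): ||x_i - x_j|| < r}.
# Evaluating C(r) on a vector of radii gives a multi-scale summary vector of
# the kind described above; the trajectory here is a toy 3-D time series.
import numpy as np
from scipy.spatial.distance import pdist

def correlation_sum(traj, radii):
    d = pdist(traj)                          # all pairwise distances
    return np.array([(d < r).mean() for r in radii])

rng = np.random.default_rng(1)
traj = rng.standard_normal((2000, 3))        # placeholder for a simulated trajectory
radii = np.logspace(-1, 0.5, 12)
C = correlation_sum(traj, radii)

# rough correlation-dimension estimate from the log-log slope
slope = np.polyfit(np.log(radii), np.log(C), 1)[0]
print("correlation-sum vector:", np.round(C, 4))
print("estimated correlation dimension:", round(slope, 2))
```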
Validation of Storm Water Management Model Storm Control Measures Modules
NASA Astrophysics Data System (ADS)
Simon, M. A.; Platz, M. C.
2017-12-01
EPA's Storm Water Management Model (SWMM) is a computational code heavily relied upon by industry for the simulation of wastewater and stormwater infrastructure performance. Many municipalities are relying on SWMM results to design multi-billion-dollar, multi-decade infrastructure upgrades. Since the 1970s, EPA and others have developed five major releases, the most recent ones containing storm control measures modules for green infrastructure. The main objective of this study was to quantify the accuracy with which SWMM v5.1.10 simulates the hydrologic activity of previously monitored low impact developments. Model performance was evaluated with a mathematical comparison of outflow hydrographs and total outflow volumes, using empirical data and a multi-event, multi-objective calibration method. The calibration methodology utilized PEST++ Version 3, a parameter estimation tool, which aided in the selection of unmeasured hydrologic parameters. From the validation study and sensitivity analysis, several model improvements were identified to advance SWMM LID Module performance for permeable pavements, infiltration units and green roofs, and these were performed and reported herein. Overall, it was determined that SWMM can successfully simulate low impact development controls given accurate model confirmation, parameter measurement, and model calibration.
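One plausible form of the multi-event, multi-objective comparison described above is a weighted combination of Nash-Sutcliffe efficiency on each event hydrograph and a relative total-volume error; the exact objective used with PEST++ may differ, so the weights, metrics, and toy data below are assumptions for illustration.

```python
# Hypothetical multi-event, multi-objective calibration metric for comparing
# simulated and observed LID outflow hydrographs: Nash-Sutcliffe efficiency
# (NSE) per event plus a relative total-volume error, averaged over events.
import numpy as np

def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def volume_error(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return abs(np.sum(sim) - np.sum(obs)) / np.sum(obs)

def multi_event_objective(events, w_nse=0.7, w_vol=0.3):
    """events: list of (observed, simulated) outflow hydrograph pairs.
    Returns a penalty to minimize (0 is a perfect fit)."""
    scores = [w_nse * (1.0 - nse(o, s)) + w_vol * volume_error(o, s) for o, s in events]
    return float(np.mean(scores))

# toy usage with two invented storm events
obs1, sim1 = [0, 2, 5, 3, 1, 0], [0, 1.8, 5.4, 2.9, 1.2, 0.1]
obs2, sim2 = [0, 1, 4, 6, 2, 0], [0, 1.1, 3.5, 6.3, 2.4, 0.2]
print("objective:", multi_event_objective([(obs1, sim1), (obs2, sim2)]))
```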
NASA Astrophysics Data System (ADS)
Christoffersen, B. O.; Xu, C.; Koven, C.; Fisher, R.; Knox, R. G.; Kueppers, L. M.; Chambers, J. Q.; McDowell, N.
2017-12-01
Recent syntheses of variation in woody plant traits have emphasized how hydraulic traits - those related to the acquisition, transport and retention of water across roots, stems and leaves - are coordinated along a limited set of dimensions or sequence of responses (Reich 2014, Bartlett et al. 2016). However, in many hydraulic trait-trait relationships, there is considerable residual variation, despite the fact that many bivariate relationships are statistically significant. In other instances, such as the relationship between root-stem-leaf vulnerability to embolism, data are so limited that testing the trait coordination hypothesis is not yet possible. The impacts on plant hydraulic function of competing hypotheses regarding trait coordination (or the lack thereof) and residual trait variation have not yet been comprehensively tested and thus remain unknown. We addressed this knowledge gap with a parameter sensitivity analysis using a plant hydraulics model in which all parameters are biologically-interpretable and measurable plant hydraulic traits, as embedded within a size- and demographically-structured ecosystem model, the `Functionally Assembled Terrestrial Ecosystem Simulator' (FATES). We focused on tropical forests, where co-existing species have been observed to possess large variability in their hydraulic traits. Assembling 10 distinct datasets of hydraulic traits of stomata, leaves, stems, and roots, we determined the best-fit theoretical distribution for each trait and quantified interspecific (between-species) trait-trait coordination in tropical forests as a rank correlation matrix. We imputed missing correlations with values based on competing hypotheses of trait coordination, such as coordinated shifts in embolism vulnerability from roots to shoots (the hydraulic fuse hypothesis). Based on the Fourier Amplitude Sensitivity Test and our correlation matrix, we generated thousands of parameter sets for an ensemble of hydraulics model simulations at a tropical forest site in central Amazonia. We explore the sensitivity of simulated leaf water potential and stem sap flux in the context of hypotheses of trait-trait coordination and their associated uncertainties.
Osman, Alaa G M; Mekkawy, Imam A; Verreth, Johan; Wuertz, Sven; Kloas, Werner; Kirschbaum, Frank
2008-12-01
Increasing lead contamination in Egyptian ecosystems and high lead concentrations in food items have raised concern for human health and stimulated studies on monitoring ecotoxicological impact of lead-caused genotoxicity. In this work, the alkaline comet assay was modified for monitoring DNA strand breakage in sensitive early life stages of the African catfish Clarias gariepinus. Following exposure to 100, 300, and 500 microg/L lead nitrate, DNA strand breakage was quantified in embryos at 30, 48, 96, 144, and 168 h post-fertilization (PFS). For quantitative analysis, four commonly used parameters (tail % DNA, %TDNA; head % DNA, %HDNA; tail length, TL; tail moment, TM) were analyzed in 96 nuclei (in triplicates) at each sampling point. The parameter %TDNA revealed the highest resolution and lowest variation. A strong correlation between lead concentration, time of exposure, and DNA strand breakage was observed. Here, genotoxicity detected by comet assay preceded the manifested malformations assessed with conventional histology. Qualitative evaluation was carried out using five categories: undamaged (%TDNA ≤ 10%), low damaged (10% < %TDNA ≤ 25%), median damaged (25% < %TDNA ≤ 50%), highly damaged (50% < %TDNA ≤ 75%), and extremely damaged (%TDNA > 75%) nuclei, confirming a dose- and time-dependent shift towards increased frequencies of highly and extremely damaged nuclei. A protective capacity provided by the hardened chorion is an interesting finding of this study, as DNA damage in the prehatching stages 30 h-PFS and 48 h-PFS was low in all treatments (qualitative and quantitative analyses). These results clearly show that the comet assay is a sensitive tool for the detection of genotoxicity in vulnerable early life stages of the African catfish and is a method more sensitive than histological parameters for monitoring genotoxic effects. 2008 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Viswanath, Satish; Toth, Robert; Rusu, Mirabela; Sperling, Dan; Lepor, Herbert; Futterer, Jurgen; Madabhushi, Anant
2013-03-01
Laser interstitial thermal therapy (LITT) has recently shown great promise as a treatment strategy for localized, focal, low-grade, organ-confined prostate cancer (CaP). Additionally, LITT is compatible with multi-parametric magnetic resonance imaging (MP-MRI) which in turn enables (1) high resolution, accurate localization of ablation zones on in vivo MP-MRI prior to LITT, and (2) real-time monitoring of temperature changes in vivo via MR thermometry during LITT. In spite of rapidly increasing interest in the use of LITT for treating low grade, focal CaP, very little is known about treatment-related changes following LITT. There is thus a clear need for studying post-LITT changes via MP-MRI and consequently to attempt to (1) quantitatively identify MP-MRI markers predictive of favorable treatment response and longer-term patient outcome, and (2) identify which MP-MRI markers are most sensitive to post-LITT changes in the prostate. In this work, we present the first attempt at examining focal treatment-related changes on a per-voxel basis (high resolution) via quantitative evaluation of MR parameters pre- and post-LITT. A retrospective cohort of MP-MRI data comprising both pre- and post-LITT T2-weighted (T2w) and diffusion-weighted (DWI) acquisitions was considered, where DWI MRI yielded an Apparent Diffusion Coefficient (ADC) map. A spatially constrained affine registration scheme was implemented to first bring T2w and ADC images into alignment within each of the pre- and post-LITT acquisitions, following which the pre- and post-LITT acquisitions were aligned. Pre- and post-LITT MR parameters (T2w intensity, ADC value) were then standardized to a uniform scale (to correct for intensity drift) and then quantified via the raw intensity values as well as via texture features derived from T2w MRI. In order to quantify imaging changes as a result of LITT, absolute differences were calculated between the normalized pre- and post-LITT MRI parameters. Quantitatively combining the ADC and T2w MRI parameters enabled construction of an integrated MP-MRI difference map that was highly indicative of changes specific to the LITT ablation zone. Preliminary quantitative comparison of the changes in different MR parameters indicated that T2w texture may be highly sensitive as well as specific in identifying changes within the ablation zone pre- and post-LITT. Visual evaluation of the differences in T2w texture features pre- and post-LITT also appeared to provide an indication of LITT-related effects such as edema. Our preliminary results thus indicate great potential for non-invasive MP-MRI imaging markers for determining focal treatment related changes, and hence long- and short-term patient outcome.
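A minimal numpy sketch of the per-voxel workflow described above (after registration): standardize pre- and post-LITT parameter maps to a common scale to correct intensity drift, take absolute differences, and combine them into an integrated multi-parametric difference map. The z-score standardization and equal-weight combination here are illustrative assumptions, not necessarily the scheme used in the study.

```python
# Illustrative per-voxel difference mapping for co-registered pre/post-LITT
# parameter maps (T2w intensity, ADC). Standardization and equal weighting
# are assumptions for the sketch, not the study's exact scheme.
import numpy as np

def standardize(img, mask):
    vals = img[mask]
    return (img - vals.mean()) / vals.std()     # z-score within the region of interest

def mp_difference_map(pre_maps, post_maps, mask, weights=None):
    """pre_maps/post_maps: dict of parameter name -> registered array."""
    weights = weights or {k: 1.0 / len(pre_maps) for k in pre_maps}
    diff = np.zeros_like(next(iter(pre_maps.values())), dtype=float)
    for k in pre_maps:
        d = np.abs(standardize(post_maps[k], mask) - standardize(pre_maps[k], mask))
        diff += weights[k] * d
    return np.where(mask, diff, 0.0)

# toy 2D example with random "T2w" and "ADC" maps
rng = np.random.default_rng(2)
shape = (64, 64)
mask = np.ones(shape, bool)
pre  = {"T2w": rng.normal(100, 10, shape), "ADC": rng.normal(1.2, 0.1, shape)}
post = {"T2w": rng.normal(90, 12, shape),  "ADC": rng.normal(1.0, 0.1, shape)}
print("mean integrated difference:", mp_difference_map(pre, post, mask).mean())
```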
Dresch, Jacqueline M; Liu, Xiaozhou; Arnosti, David N; Ay, Ahmet
2010-10-24
Quantitative models of gene expression generate parameter values that can shed light on biological features such as transcription factor activity, cooperativity, and local effects of repressors. An important element in such investigations is sensitivity analysis, which determines how strongly a model's output reacts to variations in parameter values. Parameters of low sensitivity may not be accurately estimated, leading to unwarranted conclusions. Low sensitivity may reflect the nature of the biological data, or it may be a result of the model structure. Here, we focus on the analysis of thermodynamic models, which have been used extensively to analyze gene transcription. Extracted parameter values have been interpreted biologically, but until now little attention has been given to parameter sensitivity in this context. We apply local and global sensitivity analyses to two recent transcriptional models to determine the sensitivity of individual parameters. We show that in one case, values for repressor efficiencies are very sensitive, while values for protein cooperativities are not, and provide insights into why these differential sensitivities stem from both biological effects and the structure of the applied models. In a second case, we demonstrate that parameters that were thought to prove the system's dependence on activator-activator cooperativity are relatively insensitive. We show that there are numerous parameter sets that do not satisfy the relationships proffered as the optimal solutions, indicating that structural differences between the two types of transcriptional enhancers analyzed may not be as simple as altered activator cooperativity. Our results emphasize the need for sensitivity analysis to examine model construction and forms of biological data used for modeling transcriptional processes, in order to determine the significance of estimated parameter values for thermodynamic models. Knowledge of parameter sensitivities can provide the necessary context to determine how modeling results should be interpreted in biological systems.
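A generic local (one-at-a-time) sensitivity sketch of the kind referred to above: the normalized sensitivity coefficient S_i = (p_i / y) * dy/dp_i estimated by central finite differences around the fitted parameter values. The model function and parameter values are placeholders, not the thermodynamic models analyzed in the study.

```python
# Generic local sensitivity analysis: normalized coefficients
# S_i = (p_i / y) * dy/dp_i, estimated by central finite differences.
# The 'model' here is a stand-in for a fitted transcriptional model.
import numpy as np

def model(p):
    # placeholder output, e.g. predicted expression level from two parameters
    repressor_eff, coop = p
    return 1.0 / (1.0 + repressor_eff * (1.0 + coop) ** 2)

def normalized_sensitivities(f, p, rel_step=1e-4):
    p = np.asarray(p, float)
    y0 = f(p)
    S = np.zeros_like(p)
    for i in range(len(p)):
        h = rel_step * max(abs(p[i]), 1e-12)
        hi, lo = p.copy(), p.copy()
        hi[i] += h
        lo[i] -= h
        S[i] = (p[i] / y0) * (f(hi) - f(lo)) / (2.0 * h)
    return S

p_fit = [0.8, 2.5]   # hypothetical fitted repressor efficiency and cooperativity
print("normalized sensitivities:", normalized_sensitivities(model, p_fit))
```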
Modeling and Bayesian parameter estimation for shape memory alloy bending actuators
NASA Astrophysics Data System (ADS)
Crews, John H.; Smith, Ralph C.
2012-04-01
In this paper, we employ a homogenized energy model (HEM) for shape memory alloy (SMA) bending actuators. Additionally, we utilize a Bayesian method for quantifying parameter uncertainty. The system consists of a SMA wire attached to a flexible beam. As the actuator is heated, the beam bends, providing endoscopic motion. The model parameters are fit to experimental data using an ordinary least-squares approach. The uncertainty in the fit model parameters is then quantified using Markov Chain Monte Carlo (MCMC) methods. The MCMC algorithm provides bounds on the parameters, which will ultimately be used in robust control algorithms. One purpose of the paper is to test the feasibility of the Random Walk Metropolis algorithm, the MCMC method used here.
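A compact Random Walk Metropolis sketch of the kind used to quantify parameter uncertainty after a least-squares fit: a Gaussian random-walk proposal, a Gaussian likelihood built from the residual variance, and acceptance via the Metropolis ratio. The forward model, data, noise level, and proposal scale below are invented stand-ins, not the homogenized energy model of the SMA actuator.

```python
# Minimal Random Walk Metropolis sampler for posterior parameter uncertainty
# around a least-squares fit; the forward model and data are invented stand-ins.
import numpy as np

rng = np.random.default_rng(3)

def forward(params, t):
    a, b = params                      # placeholder 2-parameter model
    return a * (1.0 - np.exp(-b * t))

t = np.linspace(0.0, 5.0, 50)
true = np.array([2.0, 1.3])
data = forward(true, t) + rng.normal(0.0, 0.05, t.size)   # synthetic observations
sigma2 = 0.05 ** 2                                        # assumed noise variance

def log_post(params):
    if np.any(params <= 0.0):          # flat prior on positive parameters
        return -np.inf
    r = data - forward(params, t)
    return -0.5 * np.sum(r * r) / sigma2

n_iter, step = 20_000, np.array([0.02, 0.02])
chain = np.empty((n_iter, 2))
current = np.array([1.5, 1.0])
lp = log_post(current)
accepted = 0
for k in range(n_iter):
    proposal = current + step * rng.standard_normal(2)     # random-walk proposal
    lp_prop = log_post(proposal)
    if np.log(rng.random()) < lp_prop - lp:                # Metropolis acceptance
        current, lp = proposal, lp_prop
        accepted += 1
    chain[k] = current

burn = n_iter // 4
print("acceptance rate:", accepted / n_iter)
print("posterior mean:", chain[burn:].mean(axis=0))
print("95% bounds:", np.percentile(chain[burn:], [2.5, 97.5], axis=0))
```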
Li, Yi Zhe; Zhang, Ting Long; Liu, Qiu Yu; Li, Ying
2018-01-01
Ecological process models are powerful tools for studying terrestrial ecosystem water and carbon cycles. However, these models contain many parameters, and whether reasonable values are adopted for them has an important impact on the simulation results. In the past, the sensitivity and optimization of model parameters have been analyzed and discussed in many studies, but the temporal and spatial heterogeneity of the optimal parameter values has received less attention. In this paper, the BIOME-BGC model was used as an example. In evergreen broad-leaved forest, deciduous broad-leaved forest and C3 grassland, the sensitive parameters of the model were selected by constructing a sensitivity judgment index, with two experimental sites selected under each vegetation type. The objective function was constructed by using the simulated annealing algorithm combined with flux data to obtain the monthly optimal values of the sensitive parameters at each site. We then constructed a temporal heterogeneity judgment index, a spatial heterogeneity judgment index and a temporal-spatial heterogeneity judgment index to quantitatively analyze the temporal and spatial heterogeneity of the optimal values of the sensitive parameters. The results showed that the sensitivity of BIOME-BGC model parameters differed among vegetation types, but the selected sensitive parameters were mostly consistent. The optimal values of the sensitive parameters mostly presented temporal and spatial heterogeneity to different degrees, varying with vegetation type. The sensitive parameters related to vegetation physiology and ecology had relatively little temporal and spatial heterogeneity, while those related to environment and phenology generally had larger temporal and spatial heterogeneity. In addition, the temporal heterogeneity of the optimal values of the sensitive parameters showed a significant linear correlation with the spatial heterogeneity under the three vegetation types. According to the temporal and spatial heterogeneity of the optimal values, the parameters of the BIOME-BGC model can be classified in order to adopt different parameter strategies in practical applications. These conclusions help to better understand the parameters and optimal values of ecological process models, and provide a reference for obtaining reasonable parameter values in model applications.
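A minimal simulated-annealing sketch of the parameter-optimization step described above: perturb the sensitive parameters, evaluate a misfit against flux observations, and accept worse solutions with a temperature-dependent probability. The forward model, observations, bounds, and cooling schedule are invented placeholders, not BIOME-BGC or the study's settings.

```python
# Minimal simulated-annealing loop for estimating sensitive model parameters
# against flux observations; the forward model and data are invented stand-ins.
import numpy as np

rng = np.random.default_rng(4)

def simulate_flux(params):
    a, b = params                         # placeholder for a BIOME-BGC-like model
    return a * np.sin(np.linspace(0, np.pi, 30)) + b

obs = simulate_flux((3.0, 0.5)) + rng.normal(0, 0.1, 30)  # synthetic "flux" observations
cost = lambda p: float(np.mean((simulate_flux(p) - obs) ** 2))

bounds = np.array([[0.1, 10.0], [0.0, 2.0]])
x = bounds.mean(axis=1)                   # initial guess at mid-range
fx = cost(x)
T, cooling = 1.0, 0.995                   # initial temperature and geometric cooling

for it in range(5000):
    cand = x + 0.1 * (bounds[:, 1] - bounds[:, 0]) * rng.standard_normal(2)
    cand = np.clip(cand, bounds[:, 0], bounds[:, 1])
    fc = cost(cand)
    if fc < fx or rng.random() < np.exp(-(fc - fx) / T):   # accept worse moves with prob e^(-dE/T)
        x, fx = cand, fc
    T *= cooling

print("estimated parameters:", np.round(x, 3), "cost:", round(fx, 5))
```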
NASA Astrophysics Data System (ADS)
Hameed, M.; Demirel, M. C.; Moradkhani, H.
2015-12-01
The Global Sensitivity Analysis (GSA) approach helps identify the effectiveness of model parameters or inputs and thus provides essential information about model performance. In this study, the effects of the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters, forcing data, and initial conditions are analysed by using two GSA methods: Sobol' and the Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three different periods: one-year, four-year, and seven-year. Four factors are considered and evaluated by using the two sensitivity analysis methods: the simulation length, parameter range, model initial conditions, and the reliability of the global sensitivity analysis methods. The reliability of the sensitivity analysis results is compared based on 1) the agreement between the two sensitivity analysis methods (Sobol' and FAST) in terms of highlighting the same parameters or input as the most influential and 2) how the methods cohere in ranking these sensitive parameters under the same conditions (sub-basins and simulation length). The results show coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, it is found that the FAST method is sufficient to evaluate the main effects of the model parameters and inputs. Another conclusion of this study is that the smaller the parameter or initial-condition ranges, the more consistent and coherent the results of the two sensitivity analysis methods.
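A sketch of how Sobol' and FAST indices might be computed with the SALib package for a placeholder model standing in for SAC-SMA; parameter names and ranges are invented, and the module layout follows SALib 1.4-style imports, which may differ in other versions.

```python
# Illustrative Sobol' and FAST sensitivity analysis with SALib for a toy model
# standing in for a hydrologic model; parameter names/ranges are invented.
import numpy as np
from SALib.sample import saltelli, fast_sampler
from SALib.analyze import sobol, fast

problem = {
    "num_vars": 3,
    "names": ["UZTWM", "LZTWM", "PCTIM"],          # hypothetical SAC-SMA-like parameters
    "bounds": [[10.0, 150.0], [10.0, 500.0], [0.0, 0.1]],
}

def toy_model(X):
    # stand-in for a scalar hydrologic response (e.g. mean simulated runoff)
    return X[:, 0] * 0.01 + np.sqrt(X[:, 1]) * 0.05 + 50.0 * X[:, 2] ** 2

# Sobol' indices from Saltelli sampling
Xs = saltelli.sample(problem, 1024)
Si_sobol = sobol.analyze(problem, toy_model(Xs))

# FAST indices
Xf = fast_sampler.sample(problem, 1024)
Si_fast = fast.analyze(problem, toy_model(Xf))

print("Sobol' total-order:", np.round(Si_sobol["ST"], 3))
print("FAST   total-order:", np.round(Si_fast["ST"], 3))
```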
X-Ray Spectro-Polarimetry with Photoelectric Polarimeters
NASA Technical Reports Server (NTRS)
Strohmayer, T. E.
2017-01-01
We derive a generalization of forward fitting for X-ray spectroscopy to include linear polarization of X-ray sources, appropriate for the anticipated next generation of space-based photoelectric polarimeters. We show that the inclusion of polarization sensitivity requires joint fitting to three observed spectra, one for each of the Stokes parameters, I(E), U(E), and Q(E). The equations for Stokes I(E) (the total intensity spectrum) are identical to the familiar case with no polarization sensitivity, for which the model-predicted spectrum is obtained by a convolution of the source spectrum, F(E), with the familiar energy response function, A(E)R(E,E'), where A(E) and R(E,E') are the effective area and energy redistribution matrix, respectively. In addition to the energy spectrum, the two new relations for U(E) and Q(E) include the source polarization fraction and position angle versus energy, a(E) and ψ0(E), respectively, and the model-predicted spectra for these relations are obtained by a convolution with the modulated energy response function, μ(E)A(E)R(E,E'), where μ(E) is the energy-dependent modulation fraction that quantifies a polarimeter's angular response to 100% polarized radiation. We present results of simulations with response parameters appropriate for the proposed PRAXyS Small Explorer observatory to illustrate the procedures and methods, and we discuss some aspects of photoelectric polarimeters with relevance to understanding their calibration and operation.
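A schematic rendering of the forward-folding relations described above; the notation is reconstructed here and normalization conventions may differ from the paper's exact expressions.

```latex
% Schematic forward-folding relations for spectro-polarimetric fitting.
% F(E): source photon spectrum; a(E), psi_0(E): polarization fraction and angle;
% A(E): effective area; R(E',E): energy redistribution; mu(E): modulation fraction.
\begin{align}
  I(E') &= \int A(E)\, R(E',E)\, F(E)\, dE, \\
  Q(E') &= \int \mu(E)\, A(E)\, R(E',E)\, a(E)\cos\!\big(2\psi_0(E)\big)\, F(E)\, dE, \\
  U(E') &= \int \mu(E)\, A(E)\, R(E',E)\, a(E)\sin\!\big(2\psi_0(E)\big)\, F(E)\, dE.
\end{align}
```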
Study on fibre laser machining quality of plain woven CFRP laminates
NASA Astrophysics Data System (ADS)
Li, Maojun; Li, Shuo; Yang, Xujing; Zhang, Yi; Liang, Zhichao
2018-03-01
Laser cutting is suitable for large-scale and high-efficiency production with relatively high cutting speed, while laser machining of CFRP composites is challenging because of severe thermal damage arising from the constituents' differing material properties and sensitivity to heat. In this paper, the surface morphology produced when cutting plain woven carbon fibre-reinforced plastics (CFRP) with a fibre laser and the influence of cutting parameters on machined quality were investigated. A full factorial experimental design was employed involving three variable factors: laser pulse frequency at three levels, together with laser power and cutting speed at two levels. Heat-affected zone (HAZ), kerf depth and kerf angle were quantified to understand the interactions with cutting parameters. Observations of the machined surface were analysed with respect to various damage types using optical microscopy and scanning electron microscopy (SEM), including HAZ, matrix recession, fibre protrusion, striations, fibre-end swelling, collapses, cavities and delamination. Based on ANOVA analysis, it was found that both cutting speed and laser power were significant factors for HAZ and kerf depth, while laser power was the only significant factor for kerf angle. In addition, HAZ and kerf depth showed similar sensitivity to the pulse energy and energy per unit length, whereas the kerf angle showed the opposite trend. This paper presents the feasibility and experimental results of cutting CFRP laminates using a fibre laser, which is potentially an efficient, high-quality process to promote the development of CFRPs.
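A small sketch of the full factorial design implied above (3 pulse-frequency levels x 2 power levels x 2 speed levels = 12 runs); the numeric levels are placeholders, not the study's actual settings.

```python
# Full factorial design for a laser-cutting experiment of the kind described:
# pulse frequency at 3 levels, laser power and cutting speed at 2 levels each.
# Level values are placeholders, not the actual experimental settings.
from itertools import product

frequency_kHz = [20, 50, 80]
power_W       = [200, 400]
speed_mm_s    = [5, 10]

runs = [
    {"run": i + 1, "frequency_kHz": f, "power_W": p, "speed_mm_s": v}
    for i, (f, p, v) in enumerate(product(frequency_kHz, power_W, speed_mm_s))
]

print(f"{len(runs)} runs in the full factorial design")  # 3 * 2 * 2 = 12
for r in runs[:3]:
    print(r)
```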
Pradeep, A R; Martande, Santosh S; Singh, Sonender Pal; Suke, Deepak Kumar; Raju, Arjun P; Naik, Savitha B
2014-04-01
The aim of the present study was to evaluate the levels and correlation of human S100A12 and high-sensitivity C-reactive protein (hs-CRP) in gingival crevicular fluid (GCF) and serum in chronic periodontitis (CP) subjects with and without type 2 diabetes mellitus (DM). A total of 44 subjects were divided into three groups: group 1 had 10 periodontally healthy subjects, group 2 consisted of 17 CP subjects and group 3 had 17 type 2 DM subjects with CP. GCF and serum levels of human S100A12 and hs-CRP were quantified using enzyme-linked immunosorbent assay and immunoturbidimetric analysis, respectively. The clinical outcomes evaluated were gingival index, probing depth and clinical attachment level and the correlations of the two inflammatory mediators with clinical parameters were evaluated. Both human S100A12 and hs-CRP levels increased from group 1 to group 2 to group 3. The GCF and serum values of both these inflammatory mediators correlated positively with each other and with the periodontal parameters evaluated (p < 0.05). Human S100A12 and hs-CRP can be considered as possible GCF and serum markers of inflammatory activity in CP and DM.
Perry, Joe N; Devos, Yann; Arpaia, Salvatore; Bartsch, Detlef; Ehlert, Christina; Gathmann, Achim; Hails, Rosemary S; Hendriksen, Niels B; Kiss, Jozsef; Messéan, Antoine; Mestdagh, Sylvie; Neemann, Gerd; Nuti, Marco; Sweet, Jeremy B; Tebbe, Christoph C
2012-01-01
In farmland biodiversity, a potential risk to the larvae of non-target Lepidoptera from genetically modified (GM) Bt-maize expressing insecticidal Cry1 proteins is the ingestion of harmful amounts of pollen deposited on their host plants. A previous mathematical model of exposure quantified this risk for Cry1Ab protein. We extend this model to quantify the risk for sensitive species exposed to pollen containing Cry1F protein from maize event 1507 and to provide recommendations for management to mitigate this risk. A 14-parameter mathematical model integrating small- and large-scale exposure was used to estimate the larval mortality of hypothetical species with a range of sensitivities, and under a range of simulated mitigation measures consisting of non-Bt maize strips of different widths placed around the field edge. The greatest source of variability in estimated mortality was species sensitivity. Before allowance for effects of large-scale exposure, with moderate within-crop host-plant density and with no mitigation, estimated mortality locally was <10% for species of average sensitivity. For the worst-case extreme sensitivity considered, estimated mortality locally was 99·6% with no mitigation, although this estimate was reduced to below 40% with mitigation of 24-m-wide strips of non-Bt maize. For highly sensitive species, a 12-m-wide strip reduced estimated local mortality under 1·5%, when within-crop host-plant density was zero. Allowance for large-scale exposure effects would reduce these estimates of local mortality by a highly variable amount, but typically of the order of 50-fold. Mitigation efficacy depended critically on assumed within-crop host-plant density; if this could be assumed negligible, then the estimated effect of mitigation would reduce local mortality below 1% even for very highly sensitive species. Synthesis and applications. Mitigation measures of risks of Bt-maize to sensitive larvae of non-target lepidopteran species can be effective, but depend on host-plant densities which are in turn affected by weed-management regimes. We discuss the relevance for management of maize events where cry1F is combined (stacked) with a herbicide-tolerance trait. This exemplifies how interactions between biota may occur when different traits are stacked irrespective of interactions between the proteins themselves and highlights the importance of accounting for crop management in the assessment of the ecological impact of GM plants. PMID:22496596
Experiments and simulation for 6061-T6 aluminum alloy resistance spot welded lap joints
NASA Astrophysics Data System (ADS)
Florea, Radu Stefanel
This comprehensive study is the first to quantify the fatigue performance, failure loads, and microstructure of resistance spot welding (RSW) in 6061-T6 aluminum (Al) alloy according to welding parameters and process sensitivity. The extensive experimental, theoretical and simulated analyses will provide a framework to optimize the welding of lightweight structures for more fuel-efficient automotive and military applications. The research was executed in four primary components. The first section involved using electron back scatter diffraction (EBSD) scanning, tensile testing, laser beam profilometry (LBP) measurements, and optical microscopy (OM) images to experimentally investigate failure loads and deformation of the Al-alloy resistance spot welded joints. Three welding conditions, as well as nugget and microstructure characteristics, were quantified according to predefined process parameters. Quasi-static tensile tests were used to characterize the failure loads in specimens based upon these same process parameters. Profilometer results showed that increasing the applied welding current deepened the weld imprints. The EBSD scans revealed the strong dependence of grain size and crystallographic orientation on the process parameters. For the second section, the fatigue behavior of the RSW'ed joints was experimentally investigated. The process optimization included consideration of the forces, currents, and times for both the main weld and post-heating. Load control cyclic tests were conducted on single weld lap-shear joint coupons to characterize the fatigue behavior in spot welded specimens. Results demonstrate that welding parameters do indeed significantly affect the microstructure and fatigue performance for these welds. In the third section, residual strains of the resistance spot welded joints were measured in three directions, denoted in-plane longitudinal, in-plane transverse, and normal, and captured in the fusion zone, heat-affected zone and base metal of the joints. Neutron diffraction results showed residual stresses in the weld are approximately 40% lower than the yield strength of the parent material, with maximum variation occurring in the vertical position of the specimen because of the orientation of electrode clamping forces that produce a non-uniform solidification pattern. In the final section, a theoretical continuum modeling framework for 6061-T6 aluminum resistance spot welded joints is presented.
Fast computation of derivative based sensitivities of PSHA models via algorithmic differentiation
NASA Astrophysics Data System (ADS)
Leövey, Hernan; Molkenthin, Christian; Scherbaum, Frank; Griewank, Andreas; Kuehn, Nicolas; Stafford, Peter
2015-04-01
Probabilistic seismic hazard analysis (PSHA) is the preferred tool for estimation of potential ground-shaking hazard due to future earthquakes at a site of interest. A modern PSHA represents a complex framework which combines different models with possibly many inputs. Sensitivity analysis is a valuable tool for quantifying changes of a model output as inputs are perturbed, identifying critical input parameters and obtaining insight into the model behavior. Differential sensitivity analysis relies on calculating first-order partial derivatives of the model output with respect to its inputs. Moreover, derivative based global sensitivity measures (Sobol' & Kucherenko '09) can be practically used to detect non-essential inputs of the models, thus restricting the focus of attention to a possibly much smaller set of inputs. Nevertheless, obtaining first-order partial derivatives of complex models with traditional approaches can be very challenging, and usually increases the computational complexity linearly with the number of inputs appearing in the models. In this study we show how Algorithmic Differentiation (AD) tools can be used in a complex framework such as PSHA to successfully estimate derivative based sensitivities, as is the case in various other domains such as meteorology or aerodynamics, without significant increase in the computational complexity required for the original computations. First we demonstrate the feasibility of the AD methodology by comparing AD-derived sensitivities to analytically derived sensitivities for a basic case of PSHA using a simple ground-motion prediction equation. In a second step, we derive sensitivities via AD for a more complex PSHA study using a ground motion attenuation relation based on a stochastic method to simulate strong motion. The presented approach is general enough to accommodate more advanced PSHA studies of higher complexity.
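A minimal forward-mode algorithmic-differentiation sketch using dual numbers, illustrating why derivatives come at roughly the cost of the original evaluation; production AD tools (operator-overloading or source-transformation packages) generalize this idea to full model codes. The example function is a placeholder, not an actual ground-motion prediction equation or PSHA model.

```python
# Minimal forward-mode algorithmic differentiation with dual numbers:
# each value carries its derivative, so df/dx is computed alongside f(x)
# with only a constant-factor overhead per primal operation.
import math

class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def dual_exp(x):
    # chain rule: d/dx exp(u) = exp(u) * u'
    return Dual(math.exp(x.val), math.exp(x.val) * x.der)

# placeholder "ground-motion-like" function of magnitude m: f(m) = exp(a + b*m)
def f(m):
    a, b = 1.2, 0.8
    return dual_exp(Dual(a) + Dual(b) * m)

m = Dual(6.0, 1.0)          # seed the derivative dm/dm = 1
y = f(m)
print("f(6.0) =", y.val, " df/dm =", y.der)   # df/dm equals b * f(m) here
```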
Nestorov, I A; Aarons, L J; Rowland, M
1997-08-01
Sensitivity analysis studies the effects of the inherent variability and uncertainty in model parameters on the model outputs and may be a useful tool at all stages of the pharmacokinetic modeling process. The present study examined the sensitivity of a whole-body physiologically based pharmacokinetic (PBPK) model for the distribution kinetics of nine 5-n-alkyl-5-ethyl barbituric acids in arterial blood and 14 tissues (lung, liver, kidney, stomach, pancreas, spleen, gut, muscle, adipose, skin, bone, heart, brain, testes) after i.v. bolus administration to rats. The aims were to obtain new insights into the model used, to rank the model parameters involved according to their impact on the model outputs and to study the changes in the sensitivity induced by the increase in the lipophilicity of the homologues on ascending the series. Two approaches for sensitivity analysis have been implemented. The first, based on the Matrix Perturbation Theory, uses a sensitivity index defined as the normalized sensitivity of the 2-norm of the model compartmental matrix to perturbations in its entries. The second approach uses the traditional definition of the normalized sensitivity function as the relative change in a model state (a tissue concentration) corresponding to a relative change in a model parameter. Autosensitivity has been defined as sensitivity of a state to any of its parameters; cross-sensitivity as the sensitivity of a state to any other states' parameters. Using the two approaches, the sensitivity of representative tissue concentrations (lung, liver, kidney, stomach, gut, adipose, heart, and brain) to the following model parameters: tissue-to-unbound plasma partition coefficients, tissue blood flows, unbound renal and intrinsic hepatic clearance, permeability surface area product of the brain, have been analyzed. Both the tissues and the parameters were ranked according to their sensitivity and impact. The following general conclusions were drawn: (i) the overall sensitivity of the system to all parameters involved is small due to the weak connectivity of the system structure; (ii) the time course of both the auto- and cross-sensitivity functions for all tissues depends on the dynamics of the tissues themselves, e.g., the higher the perfusion of a tissue, the higher are both its cross-sensitivity to other tissues' parameters and the cross-sensitivities of other tissues to its parameters; and (iii) with a few exceptions, there is not a marked influence of the lipophilicity of the homologues on either the pattern or the values of the sensitivity functions. The estimates of the sensitivity and the subsequent tissue and parameter rankings may be extended to other drugs, sharing the same common structure of the whole body PBPK model, and having similar model parameters. Results show also that the computationally simple Matrix Perturbation Analysis should be used only when an initial idea about the sensitivity of a system is required. If comprehensive information regarding the sensitivity is needed, the numerically expensive Direct Sensitivity Analysis should be used.
Identification and quantification of pathogenic helminth eggs using a digital image system.
Jiménez, B; Maya, C; Velásquez, G; Torner, F; Arambula, F; Barrios, J A; Velasco, M
2016-07-01
A system was developed to identify and quantify up to seven species of helminth eggs (Ascaris lumbricoides -fertile and unfertile eggs-, Trichuris trichiura, Toxocara canis, Taenia saginata, Hymenolepis nana, Hymenolepis diminuta, and Schistosoma mansoni) in wastewater using different image processing tools and pattern recognition algorithms. The system was developed in three stages. Version one was used to explore the viability of the concept of identifying helminth eggs through an image processing system, while versions 2 and 3 were used to improve its efficiency. The system development was based on the analysis of different properties of helminth eggs in order to discriminate them from other objects in samples processed using the conventional United States Environmental Protection Agency (US EPA) technique to quantify helminth eggs. The system was tested, in its three stages, considering two parameters: specificity (capacity to discriminate between species of helminth eggs and other objects) and sensitivity (capacity to correctly classify and identify the different species of helminth eggs). The final version showed a specificity of 99% while the sensitivity varied between 80 and 90%, depending on the total suspended solids content of the wastewater samples. To achieve such values in samples with total suspended solids (TSS) above 150 mg/L, it is recommended to dilute the concentrated sediment just before taking the images under the microscope. The system allows the helminth eggs most commonly found in wastewater to be reliably and uniformly detected and quantified. In addition, it provides the total number of eggs as well as the individual number by species, and for Ascaris lumbricoides it differentiates whether or not the egg is fertile. The system only requires basically trained technicians to prepare the samples, as for visual identification there is no need for highly trained personnel. The time required to analyze each image is less than a minute. This system could be used in central analytical laboratories providing a remote analysis service. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Quantifying hypoxia in human cancers using static PET imaging.
Taylor, Edward; Yeung, Ivan; Keller, Harald; Wouters, Bradley G; Milosevic, Michael; Hedley, David W; Jaffray, David A
2016-11-21
Compared to FDG, the signal of 18F-labelled hypoxia-sensitive tracers in tumours is low. This means that in addition to the presence of hypoxic cells, transport properties contribute significantly to the uptake signal in static PET images. This sensitivity to transport must be minimized in order for static PET to provide a reliable standard for hypoxia quantification. A dynamic compartmental model based on a reaction-diffusion formalism was developed to interpret tracer pharmacokinetics and applied to static images of FAZA in twenty patients with pancreatic cancer. We use our model to identify tumour properties-well-perfused without substantial necrosis or partitioning-for which static PET images can reliably quantify hypoxia. Normalizing the measured activity in a tumour voxel by the value in blood leads to a reduction in the sensitivity to variations in 'inter-corporal' transport properties-blood volume and clearance rate-as well as imaging study protocols. Normalization thus enhances the correlation between static PET images and the FAZA binding rate K3, a quantity which quantifies hypoxia in a biologically significant way. The ratio of FAZA uptake in spinal muscle and blood can vary substantially across patients due to long muscle equilibration times. Normalized static PET images of hypoxia-sensitive tracers can reliably quantify hypoxia for homogeneously well-perfused tumours with minimal tissue partitioning. The ideal normalizing reference tissue is blood, either drawn from the patient before PET scanning or imaged using PET. If blood is not available, uniform, homogeneously well-perfused muscle can be used. For tumours that are not homogeneously well-perfused or for which partitioning is significant, only an analysis of dynamic PET scans can reliably quantify hypoxia.
Waveform inversion for orthorhombic anisotropy with P waves: feasibility and resolution
NASA Astrophysics Data System (ADS)
Kazei, Vladimir; Alkhalifah, Tariq
2018-05-01
Various parametrizations have been suggested to simplify inversions of first arrivals, or P waves, in orthorhombic anisotropic media, but the number and type of retrievable parameters have not been decisively determined. We show that only six parameters can be retrieved from the dynamic linearized inversion of P waves. These parameters are different from the six parameters needed to describe the kinematics of P waves. Reflection-based radiation patterns from the P-P scattered waves are remapped into the spectral domain to allow for our resolution analysis based on the effective angle of illumination concept. Singular value decomposition of the spectral sensitivities from various azimuths, offset coverage scenarios and data bandwidths allows us to quantify the resolution of different parametrizations, taking into account the signal-to-noise ratio in a given experiment. According to our singular value analysis, when the primary goal of inversion is determining the velocity of the P waves, gradually adding anisotropy of lower orders (isotropic, vertically transversally isotropic and orthorhombic) in hierarchical parametrization is the best choice. Hierarchical parametrization reduces the trade-off between the parameters and makes gradual introduction of lower anisotropy orders straightforward. When all the anisotropic parameters affecting P-wave propagation need to be retrieved simultaneously, the classic parametrization of orthorhombic medium with elastic stiffness matrix coefficients and density is a better choice for inversion. We provide estimates of the number and set of parameters that can be retrieved from surface seismic data in different acquisition scenarios. To set up an inversion process, the singular values determine the number of parameters that can be inverted and the resolution matrices from the parametrizations can be used to ascertain the set of parameters that can be resolved.
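A generic numpy sketch of the resolution analysis described above: form a sensitivity (Jacobian) matrix for a candidate parametrization, take its singular value decomposition, and count singular values above a noise-dependent threshold to estimate how many parameter combinations are resolvable. The Jacobian here is a random stand-in for the remapped radiation-pattern sensitivities, and the threshold rule is an illustrative assumption.

```python
# Generic SVD-based resolution analysis: the number of singular values above a
# noise-dependent threshold indicates how many parameter combinations of a
# given parametrization are resolvable from the data.
import numpy as np

rng = np.random.default_rng(5)
n_data, n_params = 200, 9            # e.g. 9 parameters describing an orthorhombic medium
# stand-in sensitivity matrix with rapidly decaying column influence
J = rng.standard_normal((n_data, n_params)) @ np.diag(10.0 ** -np.arange(n_params))

U, s, Vt = np.linalg.svd(J, full_matrices=False)
signal_to_noise = 100.0
resolvable = int(np.sum(s / s[0] > 1.0 / signal_to_noise))

print("normalized singular values:", np.round(s / s[0], 4))
print(f"resolvable parameter combinations at S/N={signal_to_noise:.0f}:", resolvable)
# rows of Vt[:resolvable] give the resolvable directions in parameter space
```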
Design and development of pH-responsive HSPC:C12H25-PAA chimeric liposomes.
Naziris, Nikolaos; Pippa, Natassa; Meristoudi, Anastasia; Pispas, Stergios; Demetzos, Costas
2017-06-01
The application of stimuli-responsive medical practices has emerged, in which pH-sensitive liposomes figure prominently. This study investigates the impact of the incorporation of different amounts of pH-sensitive polymer, C 12 H 25 -PAA (poly(acrylic acid) with a hydrophobic end group) in l-α-phosphatidylcholine, hydrogenated (Soy) (HSPC) phospholipidic bilayers, with respect to biomimicry and functionality. PAA is a poly(carboxylic acid) molecule, classified as a pH-sensitive polymer, whose pH-sensitivity is attributed to its regulative -COOH groups, which are protonated under acidic pH (pKa ∼4.2). Our concern was to fully characterize, in a biophysical and thermodynamical manner, the mixed nanoassemblies arising from the combination of the two biomaterials. At first, we quantified the physicochemical characteristics and physical stability of the prepared chimeric nanosystems. Then, we studied their thermotropic behavior, through measurement of thermodynamical parameters, using Differential Scanning Calorimetry (DSC). Finally, the loading and release of indomethacin (IND) were evaluated, as well as the physicochemical properties and stability of the nanocarriers incorporating it. As expected, thermodynamical findings are in line with physicochemical results and also explain the loading and release profiles of IND. The novelty of this investigation is the utilization of these pH-sensitive chimeric advanced Drug Delivery nano Systems (aDDnSs) in targeted drug delivery which relies entirely on the biophysics and thermodynamics between such designs and the physiological membranes and environment of living organisms.
NASA Astrophysics Data System (ADS)
Wang, Daosheng; Cao, Anzhou; Zhang, Jicai; Fan, Daidu; Liu, Yongzhi; Zhang, Yue
2018-06-01
Based on the theory of inverse problems, a three-dimensional sigma-coordinate cohesive sediment transport model with the adjoint data assimilation is developed. In this model, the physical processes of cohesive sediment transport, including deposition, erosion and advection-diffusion, are parameterized by corresponding model parameters. These parameters are usually poorly known and have traditionally been assigned empirically. By assimilating observations into the model, the model parameters can be estimated using the adjoint method; meanwhile, the data misfit between model results and observations can be decreased. The model developed in this work contains numerous parameters; therefore, it is necessary to investigate the parameter sensitivity of the model, which is assessed by calculating a relative sensitivity function and the gradient of the cost function with respect to each parameter. The results of parameter sensitivity analysis indicate that the model is sensitive to the initial conditions, inflow open boundary conditions, suspended sediment settling velocity and resuspension rate, while the model is insensitive to horizontal and vertical diffusivity coefficients. A detailed explanation of the pattern of sensitivity analysis is also given. In ideal twin experiments, constant parameters are estimated by assimilating 'pseudo' observations. The results show that the sensitive parameters are estimated more easily than the insensitive parameters. The conclusions of this work can provide guidance for the practical applications of this model to simulate sediment transport in the study area.
Parameter Uncertainty on AGCM-simulated Tropical Cyclones
NASA Astrophysics Data System (ADS)
He, F.
2015-12-01
This work studies the parameter uncertainty on tropical cyclone (TC) simulations in Atmospheric General Circulation Models (AGCMs) using the Reed-Jablonowski TC test case, illustrated in the Community Atmosphere Model (CAM). It examines the impact from 24 parameters across the physical parameterization schemes that represent the convection, turbulence, precipitation and cloud processes in AGCMs. The one-at-a-time (OAT) sensitivity analysis method first quantifies their relative importance on TC simulations and identifies the key parameters to the six different TC characteristics: intensity, precipitation, longwave cloud radiative forcing (LWCF), shortwave cloud radiative forcing (SWCF), cloud liquid water path (LWP) and ice water path (IWP). Then, 8 physical parameters are chosen and perturbed using the Latin-Hypercube Sampling (LHS) method. The comparison between the OAT ensemble run and the LHS ensemble run shows that the simulated TC intensity is mainly affected by the parcel fractional mass entrainment rate in the Zhang-McFarlane (ZM) deep convection scheme. The nonlinear interactive effect among different physical parameters is negligible on simulated TC intensity. In contrast, this nonlinear interactive effect plays a significant role in other simulated tropical cyclone characteristics (precipitation, LWCF, SWCF, LWP and IWP) and greatly enlarges their simulated uncertainties. The statistical emulator Extended Multivariate Adaptive Regression Splines (EMARS) is applied to characterize the response functions for the nonlinear effect. Last, we find that the intensity uncertainty caused by physical parameters is comparable in degree to the uncertainty caused by model structure (e.g. grid) and initial conditions (e.g. sea surface temperature, atmospheric moisture). These findings suggest the importance of using the perturbed physics ensemble (PPE) method to revisit tropical cyclone prediction under climate change scenarios.
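A short sketch of Latin Hypercube Sampling over a handful of physics parameters using scipy's qmc module; parameter names and ranges are invented placeholders (the model would then be run once per sampled row).

```python
# Latin Hypercube Sampling of a physics-parameter space, of the kind used to
# build a perturbed-physics ensemble. Names and ranges are illustrative only.
import numpy as np
from scipy.stats import qmc

names    = ["zm_entrainment", "autoconv_threshold", "pbl_fak", "ice_fall_speed"]
l_bounds = [0.5e-3, 0.5e-3, 1.0, 0.5]
u_bounds = [5.0e-3, 5.0e-3, 20.0, 2.0]

sampler = qmc.LatinHypercube(d=len(names), seed=42)
unit_sample = sampler.random(n=64)                       # 64 members in [0, 1)^d
ensemble = qmc.scale(unit_sample, l_bounds, u_bounds)    # map to physical ranges

for i, row in enumerate(ensemble[:3]):
    print(f"member {i}:", dict(zip(names, np.round(row, 5))))
```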
Optimizing human activity patterns using global sensitivity analysis.
Fairchild, Geoffrey; Hickmann, Kyle S; Mniszewski, Susan M; Del Valle, Sara Y; Hyman, James M
2014-12-01
Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule's regularity for a population. We show how to tune an activity's regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
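A compact sample-entropy (SampEn) implementation to make the regularity statistic concrete; the "schedules" here are invented sequences of activity durations, and the standard choices m = 2 and r = 0.2 x standard deviation are assumptions, not necessarily those used in the paper.

```python
# Compact sample entropy (SampEn): -ln(A/B), where B counts pairs of length-m
# templates within tolerance r (Chebyshev distance, self-matches excluded) and
# A counts the same for length m+1. Lower SampEn indicates a more regular series.
import numpy as np

def sampen(x, m=2, r=None):
    x = np.asarray(x, float)
    r = 0.2 * x.std() if r is None else r

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        count = 0
        for i in range(len(templates) - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d <= r))
        return count

    B, A = count_matches(m), count_matches(m + 1)
    return np.inf if A == 0 or B == 0 else -np.log(A / B)  # inf: no matches found

# toy "daily activity duration" sequences: a regular one and a noisier one
regular = np.tile([8, 1, 8, 2, 5], 20).astype(float)
irregular = regular + np.random.default_rng(6).normal(0, 0.8, regular.size)
print("SampEn regular:  ", round(sampen(regular), 3))
print("SampEn irregular:", round(sampen(irregular), 3))
```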
Optimizing human activity patterns using global sensitivity analysis
Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; ...
2013-12-10
Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
Phenological sensitivity to climate across taxa and trophic levels.
Thackeray, Stephen J; Henrys, Peter A; Hemming, Deborah; Bell, James R; Botham, Marc S; Burthe, Sarah; Helaouet, Pierre; Johns, David G; Jones, Ian D; Leech, David I; Mackay, Eleanor B; Massimino, Dario; Atkinson, Sian; Bacon, Philip J; Brereton, Tom M; Carvalho, Laurence; Clutton-Brock, Tim H; Duck, Callan; Edwards, Martin; Elliott, J Malcolm; Hall, Stephen J G; Harrington, Richard; Pearce-Higgins, James W; Høye, Toke T; Kruuk, Loeske E B; Pemberton, Josephine M; Sparks, Tim H; Thompson, Paul M; White, Ian; Winfield, Ian J; Wanless, Sarah
2016-07-14
Differences in phenological responses to climate change among species can desynchronise ecological interactions and thereby threaten ecosystem function. To assess these threats, we must quantify the relative impact of climate change on species at different trophic levels. Here, we apply a Climate Sensitivity Profile approach to 10,003 terrestrial and aquatic phenological data sets, spatially matched to temperature and precipitation data, to quantify variation in climate sensitivity. The direction, magnitude and timing of climate sensitivity varied markedly among organisms within taxonomic and trophic groups. Despite this variability, we detected systematic variation in the direction and magnitude of phenological climate sensitivity. Secondary consumers showed consistently lower climate sensitivity than other groups. We used mid-century climate change projections to estimate that the timing of phenological events could change more for primary consumers than for species in other trophic levels (6.2 versus 2.5-2.9 days earlier on average), with substantial taxonomic variation (1.1-14.8 days earlier on average).
Impact of the hard-coded parameters on the hydrologic fluxes of the land surface model Noah-MP
NASA Astrophysics Data System (ADS)
Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Attinger, Sabine; Thober, Stephan
2016-04-01
Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The process descriptions contain a number of parameters that can be soil or plant type dependent and are typically read from tabulated input files. Land surface models may have, however, process descriptions that contain fixed, hard-coded numbers in the computer code, which are not identified as model parameters. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess how strongly these fixed values restrict the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options, which are mostly spatially constant values. This is in addition to the 71 standard parameters of Noah-MP, which are mostly distributed spatially through the given vegetation and soil input maps. We performed a Sobol' global sensitivity analysis of Noah-MP to variations of the standard and hard-coded parameters for a specific set of process options. Forty-two standard parameters and 75 hard-coded parameters were active with the chosen process options. The sensitivities of the hydrologic output fluxes latent heat and total runoff as well as their component fluxes were evaluated. These sensitivities were evaluated in twelve catchments of the eastern United States with very different hydro-meteorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities towards standard and hard-coded parameters in Noah-MP because of their tight coupling via the water balance. It should therefore be comparable to calibrate Noah-MP either against latent heat observations or against river runoff data. Latent heat and total runoff are sensitive to both plant and soil parameters. Calibrating only a sub-set of parameters, for example only soil parameters, thus limits the ability to derive realistic model parameters. It is thus recommended to include the most sensitive hard-coded model parameters that were exposed in this study when calibrating Noah-MP.
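A Sobol' analysis of the kind described above can be sketched with the SALib package, one commonly used tool for variance-based sensitivity analysis; the toy model, parameter names and bounds below are placeholders standing in for a Noah-MP run, not values from the study.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical problem definition standing in for Noah-MP's parameters.
problem = {
    "num_vars": 3,
    "names": ["soil_surface_resistance", "snow_albedo", "root_depth"],
    "bounds": [[10.0, 500.0], [0.4, 0.9], [0.1, 2.0]],
}

def toy_model(x):
    # Placeholder for a model run returning, e.g., mean latent heat flux.
    return 0.7 * x[0] + 20.0 * x[1] ** 2 + 5.0 * x[0] * x[2]

param_values = saltelli.sample(problem, 1024)            # Saltelli design
y = np.array([toy_model(row) for row in param_values])   # model evaluations
si = sobol.analyze(problem, y)                            # first- and total-order indices
for name, s1, st in zip(problem["names"], si["S1"], si["ST"]):
    print(f"{name}: S1={s1:.2f}, ST={st:.2f}")
```

The gap between the total-order (ST) and first-order (S1) indices indicates how much a parameter acts through interactions rather than on its own.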
Liwarska-Bizukojc, Ewa; Biernacki, Rafal
2010-10-01
In order to simulate biological wastewater treatment processes, data concerning wastewater and sludge composition, process kinetics and stoichiometry are required. Selection of the most sensitive parameters is an important step of model calibration. The aim of this work is to verify the predictability of the activated sludge model, which is implemented in BioWin software, and select its most influential kinetic and stoichiometric parameters with the help of a sensitivity analysis approach. Two different measures of sensitivity are applied: the normalised sensitivity coefficient (S(i,j)) and the mean square sensitivity measure (delta(j)(msqr)). The S(i,j) calculations show that 17 kinetic and stoichiometric parameters of the BioWin activated sludge (AS) model can be regarded as influential. Half of the influential parameters are associated with growth and decay of phosphorus accumulating organisms (PAOs). Identifying the set of most sensitive parameters should support the users of this model and initiate the development of determination procedures for those parameters that still lack them. Copyright 2010 Elsevier Ltd. All rights reserved.
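A common definition of the normalised sensitivity coefficient is the relative change in an output per relative change in a parameter; the finite-difference sketch below assumes that definition and a generic black-box model, since the abstract does not spell out the exact formula, and the toy parameters are purely illustrative.

```python
import numpy as np

def normalised_sensitivity(model, theta, j, rel_step=0.01):
    """S_ij = (theta_j / y_i) * dy_i/dtheta_j, estimated by central differences."""
    theta = np.asarray(theta, dtype=float)
    h = rel_step * theta[j]
    up, down = theta.copy(), theta.copy()
    up[j] += h
    down[j] -= h
    y0 = np.asarray(model(theta), dtype=float)
    dy_dtheta = (np.asarray(model(up)) - np.asarray(model(down))) / (2.0 * h)
    return theta[j] * dy_dtheta / y0

# Toy two-output surrogate for an activated sludge model; parameters are placeholders.
model = lambda p: np.array([p[0] * p[1] / (p[1] + 2.0), 0.5 * p[0] + p[2]])
print(normalised_sensitivity(model, [4.0, 1.5, 0.2], j=0))  # sensitivity of both outputs to parameter 0
```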
An approach to measure parameter sensitivity in watershed ...
Hydrologic responses vary spatially and temporally according to watershed characteristics. In this study, the hydrologic models that we developed earlier for the Little Miami River (LMR) and Las Vegas Wash (LVW) watersheds were used for detailed sensitivity analyses. To compare the relative sensitivities of the hydrologic parameters of these two models, we used Normalized Root Mean Square Error (NRMSE). By combining the NRMSE index with the flow duration curve analysis, we derived an approach to measure parameter sensitivities under different flow regimes. Results show that the parameters related to groundwater are highly sensitive in the LMR watershed, whereas the LVW watershed is primarily sensitive to near-surface and impervious parameters. High and medium flows are affected by most of the parameters, whereas the low-flow regime is highly sensitive to groundwater-related parameters. Moreover, our approach is found to be useful in facilitating model development and calibration. This journal article describes hydrological modeling of the effects of climate change and land-use change on stream hydrology, and elucidates the importance of hydrological model construction in generating valid modeling results.
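The NRMSE index used as the sensitivity measure here is simply the root-mean-square difference between perturbed and baseline simulated flows, normalised; the sketch below assumes normalisation by the baseline range, one common convention (the article may use another), and the flow values are made up.

```python
import numpy as np

def nrmse(perturbed, baseline):
    """Root-mean-square error between two flow series, normalised by the
    baseline range; larger values mean the parameter perturbation matters more."""
    perturbed = np.asarray(perturbed, dtype=float)
    baseline = np.asarray(baseline, dtype=float)
    rmse = np.sqrt(np.mean((perturbed - baseline) ** 2))
    return rmse / (baseline.max() - baseline.min())

q_base = np.array([12.0, 30.0, 55.0, 20.0, 8.0])   # baseline simulated flows
q_pert = np.array([13.5, 33.0, 50.0, 21.0, 9.0])   # flows with one parameter changed
print(nrmse(q_pert, q_base))
```

Computing this index separately for the high-, medium-, and low-flow portions of the flow duration curve yields the regime-specific sensitivities described in the abstract.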
Bennett, Katrina Eleanor; Urrego Blanco, Jorge Rolando; Jonko, Alexandra; ...
2017-11-20
The Colorado River basin is a fundamentally important river for society, ecology and energy in the United States. Streamflow estimates are often provided using modeling tools which rely on uncertain parameters; sensitivity analysis can help determine which parameters impact model results. Despite the fact that simulated flows respond to changing climate and vegetation in the basin, parameter sensitivity of the simulations under climate change has rarely been considered. In this study, we conduct a global sensitivity analysis to relate changes in runoff, evapotranspiration, snow water equivalent and soil moisture to model parameters in the Variable Infiltration Capacity (VIC) hydrologic model. Here, we combine global sensitivity analysis with a space-filling Latin Hypercube sampling of the model parameter space and statistical emulation of the VIC model to examine sensitivities to uncertainties in 46 model parameters following a variance-based approach.
Analysis of the sensitivity properties of a model of vector-borne bubonic plague.
Buzby, Megan; Neckels, David; Antolin, Michael F; Estep, Donald
2008-09-06
Model sensitivity is a key to evaluation of mathematical models in ecology and evolution, especially in complex models with numerous parameters. In this paper, we use some recently developed methods for sensitivity analysis to study the parameter sensitivity of a model of vector-borne bubonic plague in a rodent population proposed by Keeling & Gilligan. The new sensitivity tools are based on a variational analysis involving the adjoint equation. The new approach provides a relatively inexpensive way to obtain derivative information about model output with respect to parameters. We use this approach to determine the sensitivity of a quantity of interest (the force of infection from rats and their fleas to humans) to various model parameters, determine a region over which linearization at a specific parameter reference point is valid, develop a global picture of the output surface, and search for maxima and minima in a given region in the parameter space.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennett, Katrina Eleanor; Urrego Blanco, Jorge Rolando; Jonko, Alexandra
The Colorado River basin is a fundamentally important river for society, ecology and energy in the United States. Streamflow estimates are often provided using modeling tools which rely on uncertain parameters; sensitivity analysis can help determine which parameters impact model results. Despite the fact that simulated flows respond to changing climate and vegetation in the basin, parameter sensitivity of the simulations under climate change has rarely been considered. In this study, we conduct a global sensitivity analysis to relate changes in runoff, evapotranspiration, snow water equivalent and soil moisture to model parameters in the Variable Infiltration Capacity (VIC) hydrologic model. Here, we combine global sensitivity analysis with a space-filling Latin Hypercube sampling of the model parameter space and statistical emulation of the VIC model to examine sensitivities to uncertainties in 46 model parameters following a variance-based approach.
NASA Astrophysics Data System (ADS)
Tan, Ivy; Storelvmo, Trude
2015-04-01
Substantial improvements have been made to the cloud microphysical schemes used in the latest generation of global climate models (GCMs); however, an outstanding weakness of these schemes lies in the arbitrariness of their tuning parameters, which are also notoriously fraught with uncertainties. Despite the growing effort in improving the cloud microphysical schemes in GCMs, most of this effort has neglected to focus on improving the ability of GCMs to accurately simulate the present-day global distribution of thermodynamic phase partitioning in mixed-phase clouds. Liquid droplets and ice crystals not only influence the Earth's radiative budget and hence climate sensitivity via their contrasting optical properties, but also through the effects of their lifetimes in the atmosphere. The current study employs NCAR's CAM5.1, and uses observations of cloud phase obtained by NASA's CALIOP lidar over a 79-month period (November 2007 to June 2014) to guide the accurate simulation of the global distribution of mixed-phase clouds in 20° latitudinal bands at the -10°C, -20°C and -30°C isotherms, by adjusting six relevant cloud microphysical tuning parameters in the CAM5.1 via Quasi-Monte Carlo sampling. The parameters include those that control the Wegener-Bergeron-Findeisen (WBF) timescale for the conversion of supercooled liquid droplets to ice and snow in mixed-phase clouds, the fraction of ice nuclei that nucleate ice in the atmosphere, ice crystal sedimentation speed, and wet scavenging in stratiform and convective clouds. Using a generalized linear model for variance-based sensitivity analysis, the relative contribution of each of the six parameters is quantified to better understand the importance of their individual and two-way interaction effects on the liquid-to-ice proportion in mixed-phase clouds. Thus, the methodology implemented in the current study aims to search for the combination of cloud microphysical parameters in a GCM that produces the most accurate reproduction of observations of cloud thermodynamic phase, while simultaneously assessing the weaknesses of the parameterizations in the model. We find that the simulated proportion of liquid to ice in mixed-phase clouds is dominated by the fraction of active ice nuclei in the atmosphere and the WBF timescale. In a follow-up to this study, we apply these results to a fully-coupled GCM, CESM, and find that cloud thermodynamic phase has profound ramifications for the uncertainty associated with climate sensitivity estimates.
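One way to read "a generalized linear model for variance-based sensitivity analysis" is to regress the model output on main effects and two-way interactions of the sampled parameters and apportion the explained variance among the terms; the sketch below does this with plain least squares on a toy sample. It is only an interpretation of the approach, not the study's actual implementation, and the parameter names and coefficients are invented.

```python
import numpy as np
import itertools

rng = np.random.default_rng(1)
n, names = 500, ["wbf_timescale", "ice_nuclei_frac", "fall_speed"]
X = rng.uniform(0.0, 1.0, size=(n, len(names)))            # sampled parameter values
y = 2.0 * X[:, 1] + 1.0 * X[:, 0] + 3.0 * X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=n)

# Design matrix: intercept, main effects, and all two-way interactions.
cols, labels = [np.ones(n)], ["intercept"]
for j, name in enumerate(names):
    cols.append(X[:, j]); labels.append(name)
for j, k in itertools.combinations(range(len(names)), 2):
    cols.append(X[:, j] * X[:, k]); labels.append(f"{names[j]}x{names[k]}")
D = np.column_stack(cols)
beta, *_ = np.linalg.lstsq(D, y, rcond=None)

# Rough apportionment of fitted-output variance to each term; covariances
# between correlated terms are ignored, so the fractions need not sum to 1.
total_var = np.var(D @ beta)
for label, col, b in zip(labels[1:], cols[1:], beta[1:]):
    print(f"{label}: {np.var(b * col) / total_var:.2f}")
```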
Permutation entropy analysis of financial time series based on Hill's diversity number
NASA Astrophysics Data System (ADS)
Zhang, Yali; Shang, Pengjian
2017-12-01
In this paper the permutation entropy based on Hill's diversity number (Nn,r) is introduced as a new way to assess the complexity of a complex dynamical system such as a stock market. We test the performance of this method with simulated data. Results show that Nn,r with appropriate parameters is more sensitive to changes in the system and describes the trends of complex systems clearly. In addition, we analyze stock closing price series for six indices (three US stock indices and three Chinese stock indices) over different periods, and find that Nn,r can quantify the changes in complexity of stock market data. Moreover, Nn,r provides richer information and reveals some differences between the US and Chinese stock indices.
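A minimal sketch of the underlying quantities: extract ordinal patterns of order n from the series, form their relative frequencies, and evaluate a Hill diversity number of that distribution. The abstract does not spell out the exact role of the parameter r in Nn,r, so the code below simply exposes the Hill order as a free parameter and should be read as an illustration of the ingredients rather than the paper's measure.

```python
import numpy as np
from itertools import permutations
from collections import Counter

def ordinal_pattern_probs(x, n=3):
    """Relative frequencies of ordinal patterns of order n in a 1-D series."""
    x = np.asarray(x, dtype=float)
    counts = Counter(tuple(np.argsort(x[i:i + n])) for i in range(len(x) - n + 1))
    total = sum(counts.values())
    return np.array([counts.get(p, 0) / total for p in permutations(range(n))])

def hill_number(p, q=2.0):
    """Hill diversity number of order q of a probability vector p."""
    p = p[p > 0]
    if np.isclose(q, 1.0):
        return np.exp(-np.sum(p * np.log(p)))   # q -> 1 limit: exp(Shannon entropy)
    return np.sum(p ** q) ** (1.0 / (1.0 - q))

rng = np.random.default_rng(0)
noise = rng.normal(size=2000)
trend = np.sin(np.arange(2000) / 20.0)
print(hill_number(ordinal_pattern_probs(noise)))   # near 6: all order-3 patterns equally likely
print(hill_number(ordinal_pattern_probs(trend)))   # smaller: fewer effective patterns
```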
Ordinal patterns in epileptic brains: Analysis of intracranial EEG and simultaneous EEG-fMRI
NASA Astrophysics Data System (ADS)
Rummel, C.; Abela, E.; Hauf, M.; Wiest, R.; Schindler, K.
2013-06-01
Epileptic seizures are associated with high behavioral stereotypy of the patients. In the EEG of epilepsy patients characteristic signal patterns can be found during and between seizures. Here we use ordinal patterns to analyze EEGs of epilepsy patients and quantify the degree of signal determinism. Besides relative signal redundancy and the fraction of forbidden patterns we introduce the fraction of under-represented patterns as a new measure. Using the logistic map, parameter scans are performed to explore the sensitivity of the measures to signal determinism. Thereafter, application is made to two types of EEGs recorded in two epilepsy patients. Intracranial EEG shows pronounced determinism peaks during seizures. Finally, we demonstrate that ordinal patterns may be useful for improving analysis of non-invasive simultaneous EEG-fMRI.
Concrete thawing studied by single-point ramped imaging.
Prado, P J; Balcom, B J; Beyea, S D; Armstrong, R L; Bremner, T W
1997-12-01
A series of two-dimensional images of proton distribution in a hardened concrete sample has been obtained during the thawing process (from -50 degrees C up to 11 degrees C). The SPRITE sequence is optimal for this study given the characteristic short relaxation times of water in this porous medium (T2* < 200 μs and T1 < 3.6 ms). The relaxation parameters of the sample were determined in order to optimize the time efficiency of the sequence, permitting a 4-scan 64 x 64 acquisition in under 3 min. The image acquisition is fast on the time scale of the temperature evolution of the specimen. The frozen water distribution is quantified through a position-based study of the image contrast. A multiple point acquisition method is presented and the signal sensitivity improvement is discussed.
The effect of speaking style on a locus equation characterization of stop place of articulation.
Sussman, H M; Dalston, E; Gumbert, S
1998-01-01
Locus equations were employed to assess the phonetic stability and distinctiveness of stop place categories in reduced speech. Twenty-two speakers produced stop consonant + vowel utterances in citation and spontaneous speech. Coarticulatory increases in hypoarticulated speech were documented only for /dV/ and [gV] productions in front vowel contexts. Coarticulatory extents for /bV/ and [gV] in back vowel contexts remained stable across style changes. Discriminant analyses showed equivalent levels of correct classification across speaking styles. CV reduction was quantified by use of Euclidean distances separating stop place categories. Despite sensitivity of locus equation parameters to articulatory differences encountered in informal speech, stop place categories still maintained a clear separability when plotted in a higher-order slope x y-intercept acoustic space.
Optimal and secure measurement protocols for quantum sensor networks
NASA Astrophysics Data System (ADS)
Eldredge, Zachary; Foss-Feig, Michael; Gross, Jonathan A.; Rolston, S. L.; Gorshkov, Alexey V.
2018-04-01
Studies of quantum metrology have shown that the use of many-body entangled states can lead to an enhancement in sensitivity when compared with unentangled states. In this paper, we quantify the metrological advantage of entanglement in a setting where the measured quantity is a linear function of parameters individually coupled to each qubit. We first generalize the Heisenberg limit to the measurement of nonlocal observables in a quantum network, deriving a bound based on the multiparameter quantum Fisher information. We then propose measurement protocols that can make use of Greenberger-Horne-Zeilinger (GHZ) states or spin-squeezed states and show that in the case of GHZ states the protocol is optimal, i.e., it saturates our bound. We also identify nanoscale magnetic resonance imaging as a promising setting for this technology.
Remote sensing of vegetation canopy photosynthetic and stomatal conductance efficiencies
NASA Technical Reports Server (NTRS)
Myneni, R. B.; Ganapol, B. D.; Asrar, G.
1992-01-01
The problem of remote sensing the canopy photosynthetic and stomatal conductance efficiencies is investigated with the aid of one- and three-dimensional radiative transfer methods coupled to a semi-empirical mechanistic model of leaf photosynthesis and stomatal conductance. Desertlike vegetation is modeled as clumps of leaves randomly distributed on a bright dry soil with partial ground cover. Normalized difference vegetation index (NDVI), canopy photosynthetic (Ep), and stomatal efficiencies (Es) are calculated for various geometrical, optical, and illumination conditions. The contribution of various radiative fluxes to estimates of Ep is evaluated and the magnitude of errors in the bulk canopy formulation of problem parameters is quantified. The nature and sensitivity of the relationships of Ep and Es to NDVI are investigated, and an algorithm is proposed for use in operational remote sensing.
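The NDVI used as the remote sensing variable here has a standard definition from the near-infrared and red reflectances; a short sketch with made-up reflectance values (not values from the study) is given below.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    nir, red = np.asarray(nir, dtype=float), np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Illustrative canopy reflectances (placeholders, not values from the study).
print(ndvi(nir=0.45, red=0.08))   # dense green canopy -> NDVI near 0.7
print(ndvi(nir=0.25, red=0.20))   # sparse desert-like cover -> NDVI near 0.1
```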
Hybrid Reduced Order Modeling Algorithms for Reactor Physics Calculations
NASA Astrophysics Data System (ADS)
Bang, Youngsuk
Reduced order modeling (ROM) has been recognized as an indispensable approach when the engineering analysis requires many executions of high fidelity simulation codes. Examples of such engineering analyses in nuclear reactor core calculations, representing the focus of this dissertation, include the functionalization of the homogenized few-group cross-sections in terms of the various core conditions, e.g. burn-up, fuel enrichment, temperature, etc. This is done via assembly calculations which are executed many times to generate the required functionalization for use in the downstream core calculations. Other examples are sensitivity analysis used to determine important core attribute variations due to input parameter variations, and uncertainty quantification employed to estimate core attribute uncertainties originating from input parameter uncertainties. ROM constructs a surrogate model with quantifiable accuracy which can replace the original code for subsequent engineering analysis calculations. This is achieved by reducing the effective dimensionality of the input parameter, the state variable, or the output response spaces, by projection onto the so-called active subspaces. Confining the variations to the active subspace allows one to construct an ROM model of reduced complexity which can be solved more efficiently. This dissertation introduces a new algorithm to render reduction with the reduction errors bounded by a user-defined error tolerance, which addresses the main challenge of existing ROM techniques. Bounding the error is the key to ensuring that the constructed ROM models are robust for all possible applications. Providing such error bounds represents one of the algorithmic contributions of this dissertation to the ROM state-of-the-art. Recognizing that ROM techniques have been developed to render reduction at different levels, e.g. the input parameter space, the state space, and the response space, this dissertation offers a set of novel hybrid ROM algorithms which can be readily integrated into existing methods and offer higher computational efficiency and defendable accuracy of the reduced models. For example, the snapshots ROM algorithm is hybridized with the range finding algorithm to render reduction in the state space, e.g. the flux in reactor calculations. In another implementation, the perturbation theory used to calculate first order derivatives of responses with respect to parameters is hybridized with a forward sensitivity analysis approach to render reduction in the parameter space. Reduction at the state and parameter spaces can be combined to render further reduction at the interface between different physics codes in a multi-physics model with the accuracy quantified in a similar manner to the single physics case. Although the proposed algorithms are generic in nature, we focus here on radiation transport models used in support of the design and analysis of nuclear reactor cores. In particular, we focus on replacing the traditional assembly calculations by ROM models to facilitate the generation of homogenized cross-sections for downstream core calculations. The implication is that assembly calculations could be done instantaneously, thereby precluding the need for the expensive evaluation of the few-group cross-sections for all possible core conditions. Given the generic nature of the algorithms, we make an effort to introduce the material in a general form to allow non-nuclear engineers to benefit from this work.
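The "snapshots" style of reduction mentioned above is commonly implemented by collecting solution snapshots as columns of a matrix and truncating its singular value decomposition; the sketch below shows that generic step on synthetic data. It is not the dissertation's hybrid algorithm, and the tolerance and synthetic fields are illustrative.

```python
import numpy as np

def pod_basis(snapshots, tol=1e-6):
    """Proper orthogonal decomposition: keep enough left singular vectors to
    capture the snapshot energy up to the given relative tolerance."""
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    rank = int(np.searchsorted(energy, 1.0 - tol) + 1)
    return u[:, :rank], s

# Synthetic "flux" snapshots: smooth fields over 200 points at 30 parameter values.
x = np.linspace(0.0, 1.0, 200)
params = np.linspace(0.5, 2.0, 30)
snaps = np.column_stack([np.exp(-p * x) + 0.1 * np.sin(3 * np.pi * x * p) for p in params])

basis, s = pod_basis(snaps, tol=1e-8)
print(basis.shape)  # (200, r): a low-dimensional active subspace for the state
# Reduced coordinates for a new snapshot are obtained by projection: basis.T @ snapshot
```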
Mamou, Jonathan; Wa, Christianne A; Yee, Kenneth M P; Silverman, Ronald H; Ketterling, Jeffrey A; Sadun, Alfredo A; Sebag, J
2015-01-22
Clinical evaluation of floaters lacks quantitative assessment of vitreous structure. This study used quantitative ultrasound (QUS) to measure vitreous opacities. Since floaters reduce contrast sensitivity (CS) and quality of life (Visual Function Questionnaire [VFQ]), it is hypothesized that QUS will correlate with CS and VFQ in patients with floaters. Twenty-two eyes (22 subjects; age = 57 ± 19 years) with floaters were evaluated with Freiburg acuity contrast testing (FrACT; %Weber) and VFQ. Ultrasonography used a customized probe (15-MHz center frequency, 20-mm focal length, 7-mm aperture) with longitudinal and transverse scans taken in primary gaze and a horizontal longitudinal scan through premacular vitreous in temporal gaze. Each scan set had 100 frames of log-compressed envelope data. Within each frame, two regions of interest (ROIs) were analyzed (whole-central and posterior vitreous) to yield three parameters (energy, E; mean amplitude, M; and percentage of vitreous filled by echodensities, P50) averaged over the entire 100-frame dataset. Statistical analyses evaluated E, M, and P50 correlations with CS and VFQ. Contrast sensitivity ranged from 1.19%W (normal) to 5.59%W. All QUS parameters in two scan positions within the whole-central ROI correlated with CS (R > 0.67, P < 0.001). P50 in the nasal longitudinal position had R = 0.867 (P < 0.001). Correlations with VFQ ranged from R = 0.52 (P < 0.013) to R = 0.65 (P < 0.001). Quantitative ultrasound provides quantitative measures of vitreous echodensity that correlate with CS and VFQ, providing objective assessment of vitreous structure underlying the functional disturbances induced by floaters, useful to quantify vitreous disease severity and the response to therapy. Copyright 2015 The Association for Research in Vision and Ophthalmology, Inc.
Mamou, Jonathan; Wa, Christianne A.; Yee, Kenneth M. P.; Silverman, Ronald H.; Ketterling, Jeffrey A.; Sadun, Alfredo A.; Sebag, J.
2015-01-01
Purpose. Clinical evaluation of floaters lacks quantitative assessment of vitreous structure. This study used quantitative ultrasound (QUS) to measure vitreous opacities. Since floaters reduce contrast sensitivity (CS) and quality of life (Visual Function Questionnaire [VFQ]), it is hypothesized that QUS will correlate with CS and VFQ in patients with floaters. Methods. Twenty-two eyes (22 subjects; age = 57 ± 19 years) with floaters were evaluated with Freiburg acuity contrast testing (FrACT; %Weber) and VFQ. Ultrasonography used a customized probe (15-MHz center frequency, 20-mm focal length, 7-mm aperture) with longitudinal and transverse scans taken in primary gaze and a horizontal longitudinal scan through premacular vitreous in temporal gaze. Each scan set had 100 frames of log-compressed envelope data. Within each frame, two regions of interest (ROIs) were analyzed (whole-central and posterior vitreous) to yield three parameters (energy, E; mean amplitude, M; and percentage of vitreous filled by echodensities, P50) averaged over the entire 100-frame dataset. Statistical analyses evaluated E, M, and P50 correlations with CS and VFQ. Results. Contrast sensitivity ranged from 1.19%W (normal) to 5.59%W. All QUS parameters in two scan positions within the whole-central ROI correlated with CS (R > 0.67, P < 0.001). P50 in the nasal longitudinal position had R = 0.867 (P < 0.001). Correlations with VFQ ranged from R = 0.52 (P < 0.013) to R = 0.65 (P < 0.001). Conclusions. Quantitative ultrasound provides quantitative measures of vitreous echodensity that correlate with CS and VFQ, providing objective assessment of vitreous structure underlying the functional disturbances induced by floaters, useful to quantify vitreous disease severity and the response to therapy. PMID:25613948
Tighilet, Brahim; Péricat, David; Frelat, Alais; Cazals, Yves; Rastoldo, Guillaume; Boyer, Florent; Dumas, Olivier
2017-01-01
Vestibular disorders, by inducing significant posturo-locomotor and cognitive disorders, can significantly impair the most basic tasks of everyday life. Their precise diagnosis is essential to implement appropriate therapeutic countermeasures. Monitoring their evolution is also very important to validate or, on the contrary, to adapt the undertaken therapeutic actions. To date, the diagnosis methods of posturo-locomotor impairments are restricted to examinations that most often lack sensitivity and precision. In the present work we studied the alterations of the dynamic weight distribution in a rodent model of sudden and complete unilateral vestibular loss. We used a system of force sensors connected to a data analysis system to quantify in real time and in an automated way the weight bearing of the animal on the ground. We show here that sudden, unilateral, complete and permanent loss of the vestibular inputs causes a severe alteration of the dynamic ground weight distribution of vestibulo lesioned rodents. Characteristics of alterations in the dynamic weight distribution vary over time and follow the sequence of appearance and disappearance of the various symptoms that compose the vestibular syndrome. This study reveals for the first time that dynamic weight bearing is a very sensitive parameter for evaluating posturo-locomotor function impairment. Associated with more classical vestibular examinations, this paradigm can considerably enrich the methods for assessing and monitoring vestibular disorders. Systematic application of this type of evaluation to the dizzy or unstable patient could improve the detection of vestibular deficits and allow predicting better their impact on posture and walk. Thus it could also allow a better follow-up of the therapeutic approaches for rehabilitating gait and balance. PMID:29112981
Tighilet, Brahim; Péricat, David; Frelat, Alais; Cazals, Yves; Rastoldo, Guillaume; Boyer, Florent; Dumas, Olivier; Chabbert, Christian
2017-01-01
Vestibular disorders, by inducing significant posturo-locomotor and cognitive disorders, can significantly impair the most basic tasks of everyday life. Their precise diagnosis is essential to implement appropriate therapeutic countermeasures. Monitoring their evolution is also very important to validate or, on the contrary, to adapt the undertaken therapeutic actions. To date, the diagnosis methods of posturo-locomotor impairments are restricted to examinations that most often lack sensitivity and precision. In the present work we studied the alterations of the dynamic weight distribution in a rodent model of sudden and complete unilateral vestibular loss. We used a system of force sensors connected to a data analysis system to quantify in real time and in an automated way the weight bearing of the animal on the ground. We show here that sudden, unilateral, complete and permanent loss of the vestibular inputs causes a severe alteration of the dynamic ground weight distribution of vestibulo lesioned rodents. Characteristics of alterations in the dynamic weight distribution vary over time and follow the sequence of appearance and disappearance of the various symptoms that compose the vestibular syndrome. This study reveals for the first time that dynamic weight bearing is a very sensitive parameter for evaluating posturo-locomotor function impairment. Associated with more classical vestibular examinations, this paradigm can considerably enrich the methods for assessing and monitoring vestibular disorders. Systematic application of this type of evaluation to the dizzy or unstable patient could improve the detection of vestibular deficits and allow predicting better their impact on posture and walk. Thus it could also allow a better follow-up of the therapeutic approaches for rehabilitating gait and balance.
Zhu, Ying; Price, Oliver R; Tao, Shu; Jones, Kevin C; Sweetman, Andy J
2014-08-01
We present a new multimedia chemical fate model (SESAMe) which was developed to assess chemical fate and behaviour across China. We apply the model to quantify the influence of environmental parameters on chemical overall persistence (POV) and long-range transport potential (LRTP) in China, which has extreme diversity in environmental conditions. Sobol sensitivity analysis was used to identify the relative importance of input parameters. Physicochemical properties were identified as more influential than environmental parameters on model output. Interactive effects of environmental parameters on POV and LRTP occur mainly in combination with chemical properties. Hypothetical chemicals and emission data were used to model POV and LRTP for neutral and acidic chemicals with different KOW/DOW, vapour pressure and pKa under different precipitation, wind speed, temperature and soil organic carbon contents (fOC). Generally for POV, precipitation was more influential than the other environmental parameters, whilst temperature and wind speed did not contribute significantly to POV variation; for LRTP, wind speed was more influential than the other environmental parameters, whilst the effects of other environmental parameters relied on specific chemical properties. fOC had a slight effect on POV and LRTP, and higher fOC always increased POV and decreased LRTP. Example case studies were performed on real test chemicals using SESAMe to explore the spatial variability of model output and how environmental properties affect POV and LRTP. Dibenzofuran released to multiple media had higher POV in northwest of Xinjiang, part of Gansu, northeast of Inner Mongolia, Heilongjiang and Jilin. Benzo[a]pyrene released to the air had higher LRTP in south Xinjiang and west Inner Mongolia, whilst acenaphthene had higher LRTP in Tibet and west Inner Mongolia. TCS released into water had higher LRTP in Yellow River and Yangtze River catchments. The initial case studies demonstrated that SESAMe performed well on comparing POV and LRTP of chemicals in different regions across China in order to potentially identify the most sensitive regions. This model should not only be used to estimate POV and LRTP for screening and risk assessments of chemicals, but could potentially be used to help design chemical monitoring programmes across China in the future. Copyright © 2014 Elsevier Ltd. All rights reserved.
Uncertainty quantification and risk analyses of CO2 leakage in heterogeneous geological formations
NASA Astrophysics Data System (ADS)
Hou, Z.; Murray, C. J.; Rockhold, M. L.
2012-12-01
A stochastic sensitivity analysis framework is adopted to evaluate the impact of spatial heterogeneity in permeability on CO2 leakage risk. The leakage is defined as the total mass of CO2 moving into the overburden through the caprock-overburden interface, in both gaseous and liquid (dissolved) phases. The entropy-based framework has the ability to quantify the uncertainty associated with the input parameters in the form of prior pdfs (probability density functions). Effective sampling of the prior pdfs enables us to fully explore the parameter space and systematically evaluate the individual and combined effects of the parameters of interest on CO2 leakage risk. The parameters that are considered in the study include: mean, variance, and horizontal to vertical spatial anisotropy ratio for caprock permeability, and those same parameters for reservoir permeability. Given the sampled spatial variogram parameters, multiple realizations of permeability fields were generated using GSLIB subroutines. For each permeability field, the numerical simulator STOMP (in the water-salt-CO2-energy operational mode) is used to simulate the CO2 migration within the reservoir and caprock up to 50 years after injection. Due to the intensive computational demand, we run both the scalable simulator eSTOMP and the serial STOMP on various supercomputers. We then perform statistical analyses and summarize the relationships between the parameters of interest (mean/variance/anisotropy ratio of caprock and reservoir permeability) and CO2 leakage ratio. We also present the effects of those parameters on CO2 plume radius and reservoir injectivity. The statistical analysis provides a reduced order model that can be used to estimate the impact of heterogeneity on caprock leakage.
NASA Astrophysics Data System (ADS)
Cianci, Davio; Ross-Lonergan, Mark; Karagiorgi, Georgia; Furmanski, Andy
2017-01-01
While current and last generation neutrino experiments have vastly improved our knowledge of the three neutrino oscillation paradigm, certain anomalous experimental signatures such as the LSND and MiniBooNE anomalies have arisen which have consistently evaded a standard three neutrino explanation. One possible scenario to explain these anomalies is the addition of one or more, mostly sterile, light neutrino mass states, leading to observable oscillations associated to new frequencies at relatively short baselines. This talk will describe how Fermilab's Short Baseline Neutrino (SBN) program will be uniquely poised to test the existence of light sterile neutrinos in scenarios including one, two or three such new states. To quantify SBN's sensitivity reach, we compare the experiment's sensitivity to current, globally-allowed parameters for sterile neutrino oscillations. We also explore the possibility of including antineutrino beam running in the SBN run plan and study its impact on the potential physics reach, in particular from the perspective of new CP-violating phases which appear in these extended oscillation scenarios.
Absorption spectroscopy at the limb of small transiting exoplanets
NASA Astrophysics Data System (ADS)
Ehrenreich, D.; Lecavelier Des Etangs, A.
2005-12-01
Planetary transits are a tremendous tool to probe into exoplanet atmospheres using the light from their parent stars (from 0.2 μm to ~1 μm). The detection of atmospheric components in an extra-solar giant planet was performed using the Hubble Space Telescope (HST) with a sensitivity reaching ~10^-4 in relative absorption depth over ~1 Å-wide features (Charbonneau et al., 2002). The next step is the detection and the characterization of smaller, possibly Earth-like worlds, which will require a sensitivity of ~10^-6. Fortunately, ~0.1 μm-wide absorption bands of particular interest for small exoplanets do exist in this spectral domain. We developed a model to quantify the detectability of a variety of Earth-size planets harboring different kinds of atmospheres. Key parameters are the density of the planet and the thickness of the atmosphere. We also evaluate, as a consequence, the number of potential targets for a future space mission, and find that K stars are the best candidates. See Ehrenreich et al. (2005) for a complete description.
Simulation of gas diffusion and sorption in nanoceramic semiconductors
NASA Astrophysics Data System (ADS)
Skouras, E. D.; Burganos, V. N.; Payatakes, A. C.
1999-05-01
Gas diffusion and sorption in nanoceramic semiconductors are studied using atomistic simulation techniques and numerical results are presented for a variety of sorbate-sorbent systems. SnO2, BaTiO3, CuO, and MgO substrates are built on the computer using lattice constants and atomic parameters that have been either measured or computed by ab initio methods. The Universal force field is employed here for the description of both intramolecular and nonbonded interactions for various gas sorbates, including CH4, CO, CO2, and O2, pure and in binary mixtures. Mean residence times are determined by molecular dynamics computations, whereas the Henry constant and the isosteric heat of adsorption are estimated by a Monte Carlo technique. The effects of surface hydroxylation on the diffusion and sorption characteristics are quantified and discussed in view of their significance in practical gas sensing applications. The importance of fast diffusion on the response time of the sensitive layer and of the sorption efficiency on the overall sensitivity as well as the potential synergy of the two phenomena are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbert, Andrew J.; McDonald, Benjamin S.; Smith, Leon E.
The methods currently used by the International Atomic Energy Agency to account for nuclear materials at fuel fabrication facilities are time consuming and require in-field chemistry and operation by experts. Spectral X-ray radiography, along with advanced inverse algorithms, is an alternative inspection that could be completed noninvasively, without any in-field chemistry, with inspections of tens of seconds. The proposed inspection system and algorithms are presented here. The inverse algorithm uses total variation regularization and adaptive regularization parameter selection with the unbiased predictive risk estimator. Performance of the system is quantified with simulated X-ray inspection data and sensitivity of the output is tested against various inspection system instabilities. Material quantification from a fully-characterized inspection system is shown to be very accurate, with biases on nuclear material estimations of < 0.02%. It is shown that the results are sensitive to variations in the fuel powder sample density and detector pixel gain, which increase biases to 1%. Options to mitigate these inaccuracies are discussed.
Nakano, Yosuke; Konya, Yutaka; Taniguchi, Moyu; Fukusaki, Eiichiro
2017-01-01
d-Amino acids have recently attracted much attention in various research fields including medical, clinical and food industry due to their important biological functions that differ from l-amino acid. Most chiral amino acid separation techniques require complicated derivatization procedures in order to achieve the desirable chromatographic behavior and detectability. Thus, the aim of this research is to develop a highly sensitive analytical method for the enantioseparation of chiral amino acids without any derivatization process using liquid chromatography-tandem mass spectrometry (LC-MS/MS). By optimizing MS/MS parameters, we established a quantification method that allowed the simultaneous analysis of 18 d-amino acids with high sensitivity and reproducibility. Additionally, we applied the method to food sample (vinegar) for the validation, and successfully quantified trace levels of d-amino acids in samples. These results demonstrated the applicability and feasibility of the LC-MS/MS method as a novel, effective tool for d-amino acid measurement in various biological samples. Copyright © 2016 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
Gooseff, M.N.; Bencala, K.E.; Scott, D.T.; Runkel, R.L.; McKnight, Diane M.
2005-01-01
The transient storage model (TSM) has been widely used in studies of stream solute transport and fate, with an increasing emphasis on reactive solute transport. In this study we perform sensitivity analyses of a conservative TSM and two different reactive solute transport models (RSTM), one that includes first-order decay in the stream and the storage zone, and a second that considers sorption of a reactive solute on streambed sediments. Two previously analyzed data sets are examined with a focus on the reliability of these RSTMs in characterizing stream and storage zone solute reactions. Sensitivities of simulations to parameters within and among reaches, parameter coefficients of variation, and correlation coefficients are computed and analyzed. Our results indicate that (1) simulated values have the greatest sensitivity to parameters within the same reach, (2) simulated values are also sensitive to parameters in reaches immediately upstream and downstream (inter-reach sensitivity), (3) simulated values have decreasing sensitivity to parameters in reaches farther downstream, and (4) in-stream reactive solute data provide adequate data to resolve effective storage zone reaction parameters, given the model formulations. Simulations of reactive solutes are shown to be equally sensitive to transport parameters and effective reaction parameters of the model, evidence of the control of physical transport on reactive solute dynamics. Similar to conservative transport analysis, reactive solute simulations appear to be most sensitive to data collected during the rising and falling limb of the concentration breakthrough curve. © 2005 Elsevier Ltd. All rights reserved.
Dao, Thanh H; Hoang, Khanh Q
2008-08-01
Extracellular phosphohydrolases mediate the dephosphorylation of phosphoesters and influence bioavailability and loss of agricultural P to the environment to pose risks of impairment of sensitive aquatic ecosystems. Induction and culture of five strains of Aspergillus were conducted to develop a source of high-affinity and robust phosphohydrolases for detecting environmental P and quantifying bioactive P pools in heterogeneous environmental specimens. Enzyme stability and activity against organic P in poultry litter were evaluated in 71 samples collected across poultry-producing regions of Arkansas, Maryland, and Oklahoma in the US. Differences existed in the strains' adaptability to the fermentation medium as they showed a wide range of phytate-degrading activity. Phosphohydrolases from Aspergillus ficuum had the highest activity when the strain was cultured on a primarily chemical medium, compared to Aspergillus oryzae which preferred a wheat bran-based organic medium. Kinetic parameters of A. ficuum enzymes (Km = 210 μM; Vmax = 407 nmol s⁻¹) indicated phytic acid-degrading potential equivalent to that of commercial preparations. Purified A. ficuum phosphohydrolases effectively quantified litter bioactive P pools, showing that organic P occurred at an average of 54 (+/-14)% of total P, compared to inorganic phosphates, which averaged 41 (+/-12)%. Litter management and land application options must consider the high water-extractable and organic P concentrations and the biological availability of the organic enzyme-labile P pool. Robustness of A. ficuum enzymes and simplicity of the in situ ligand-based enzyme assay may thus increase routine assessment of litter bioactive P composition to sense for on-farm accumulation of such environmentally-sensitive P forms.
Kinematic sensitivity of robot manipulators
NASA Technical Reports Server (NTRS)
Vuskovic, Marko I.
1989-01-01
Kinematic sensitivity vectors and matrices for open-loop, n degrees-of-freedom manipulators are derived. First-order sensitivity vectors are defined as partial derivatives of the manipulator's position and orientation with respect to its geometrical parameters. The four-parameter kinematic model is considered, as well as the five-parameter model in case of nominally parallel joint axes. Sensitivity vectors are expressed in terms of coordinate axes of manipulator frames. Second-order sensitivity vectors, the partial derivatives of first-order sensitivity vectors, are also considered. It is shown that second-order sensitivity vectors can be expressed as vector products of the first-order sensitivity vectors.
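As a compact illustration of the definitions above, the first-order kinematic sensitivity vectors can be written as partial derivatives of the end-effector position p and orientation φ with respect to the geometric parameters, with second-order vectors as derivatives of the first-order ones; the notation below is a plausible rendering consistent with the abstract, not the paper's exact symbols.

```latex
% First-order sensitivity vectors with respect to a geometric parameter a_i
\mathbf{s}_{p,i} = \frac{\partial \mathbf{p}}{\partial a_i}, \qquad
\mathbf{s}_{\phi,i} = \frac{\partial \boldsymbol{\phi}}{\partial a_i},
\qquad i = 1,\dots,m

% Second-order sensitivity vectors: partial derivatives of the first-order vectors
\mathbf{s}_{p,ij} = \frac{\partial^{2} \mathbf{p}}{\partial a_i \,\partial a_j}
                  = \frac{\partial \mathbf{s}_{p,i}}{\partial a_j}
```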
NASA Astrophysics Data System (ADS)
Chang, Ailian; Sun, HongGuang; Zheng, Chunmiao; Lu, Bingqing; Lu, Chengpeng; Ma, Rui; Zhang, Yong
2018-07-01
Fractional-derivative models have been developed recently to interpret various hydrologic dynamics, such as dissolved contaminant transport in groundwater. However, they have not been applied to quantify other fluid dynamics, such as gas transport through complex geological media. This study reviewed previous gas transport experiments conducted in laboratory columns and real-world oil-gas reservoirs and found that gas dynamics exhibit typical sub-diffusive behavior characterized by heavy late-time tailing in the gas breakthrough curves (BTCs), which cannot be effectively captured by classical transport models. Numerical tests and field applications of the time fractional convection-diffusion equation (fCDE) have shown that the fCDE model can capture the observed gas BTCs including their apparent positive skewness. Sensitivity analysis further revealed that the three parameters used in the fCDE model, including the time index, the convection velocity, and the diffusion coefficient, play different roles in interpreting the delayed gas transport dynamics. In addition, the model comparison and analysis showed that the time fCDE model is efficient in application. Therefore, the time fractional-derivative models can be conveniently extended to quantify gas transport through natural geological media such as complex oil-gas reservoirs.
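For reference, a standard form of the time fractional convection-diffusion equation with the three parameters named in the abstract (time index γ, convection velocity v, diffusion coefficient D) is sketched below; the exact formulation used in the study may differ in details such as source and boundary terms, and the fractional derivative is typically interpreted in the Caputo sense, with γ < 1 producing the heavy late-time tails in the breakthrough curves.

```latex
\frac{\partial^{\gamma} C(x,t)}{\partial t^{\gamma}}
  = -\, v \,\frac{\partial C(x,t)}{\partial x}
    + D \,\frac{\partial^{2} C(x,t)}{\partial x^{2}},
\qquad 0 < \gamma \le 1
```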
Development of probabilistic emission inventories of air toxics for Jacksonville, Florida, USA.
Zhao, Yuchao; Frey, H Christopher
2004-11-01
Probabilistic emission inventories were developed for 1,3-butadiene, mercury (Hg), arsenic (As), benzene, formaldehyde, and lead for Jacksonville, FL. To quantify inter-unit variability in empirical emission factor data, the Maximum Likelihood Estimation (MLE) method or the Method of Matching Moments was used to fit parametric distributions. For data sets that contain nondetected measurements, a method based upon MLE was used for parameter estimation. To quantify the uncertainty in urban air toxic emission factors, parametric bootstrap simulation and empirical bootstrap simulation were applied to uncensored and censored data, respectively. The probabilistic emission inventories were developed based on the product of the uncertainties in the emission factors and in the activity factors. The uncertainties in the urban air toxics emission inventories range from as small as -25 to +30% for Hg to as large as -83 to +243% for As. The key sources of uncertainty in the emission inventory for each toxic are identified based upon sensitivity analysis. Typically, uncertainty in the inventory of a given pollutant can be attributed primarily to a small number of source categories. Priorities for improving the inventories and for refining the probabilistic analysis are discussed.
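The parametric bootstrap mentioned above can be sketched generically: fit a distribution to the emission factor sample, resample from the fit, refit each resample, and propagate the refitted means through the inventory product. The lognormal choice, sample values, and activity figure below are placeholders, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(7)
ef_sample = np.array([0.8, 1.3, 0.6, 2.1, 1.0, 0.9, 1.7])   # hypothetical emission factors (g/unit)
activity = 5.0e4                                             # hypothetical activity level (units/yr)

# Fit a lognormal by MLE (mean and std of the log data), then parametric bootstrap.
mu, sigma = np.log(ef_sample).mean(), np.log(ef_sample).std(ddof=1)
inventories = []
for _ in range(5000):
    resample = rng.lognormal(mu, sigma, size=ef_sample.size)    # draw from the fitted model
    mu_b, sigma_b = np.log(resample).mean(), np.log(resample).std(ddof=1)
    ef_mean_b = np.exp(mu_b + 0.5 * sigma_b ** 2)               # refitted mean emission factor
    inventories.append(ef_mean_b * activity)                    # propagate to the inventory

lo, hi = np.percentile(inventories, [2.5, 97.5])
print(f"95% uncertainty range: {lo:.3g} to {hi:.3g} g/yr")
```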
PCEMCAN - Probabilistic Ceramic Matrix Composites Analyzer: User's Guide, Version 1.0
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Mital, Subodh K.; Murthy, Pappu L. N.
1998-01-01
PCEMCAN (Probabilistic CEramic Matrix Composites ANalyzer) is an integrated computer code developed at NASA Lewis Research Center that simulates uncertainties associated with the constituent properties, manufacturing process, and geometric parameters of fiber reinforced ceramic matrix composites and quantifies their random thermomechanical behavior. The PCEMCAN code can perform deterministic as well as probabilistic analyses to predict thermomechanical properties. This User's guide details the step-by-step procedure to create the input file and update/modify the material properties database required to run the PCEMCAN computer code. An overview of the geometric conventions, micromechanical unit cell, nonlinear constitutive relationship and probabilistic simulation methodology is also provided in the manual. Fast probability integration as well as Monte-Carlo simulation methods are available for the uncertainty simulation. Various options available in the code to simulate probabilistic material properties and quantify the sensitivity of the primitive random variables have been described. Deterministic as well as probabilistic results are described using demonstration problems. For detailed theoretical description of deterministic and probabilistic analyses, the user is referred to the companion documents "Computational Simulation of Continuous Fiber-Reinforced Ceramic Matrix Composite Behavior," NASA TP-3602, 1996 and "Probabilistic Micromechanics and Macromechanics for Ceramic Matrix Composites", NASA TM 4766, June 1997.
Viswanathan, Sekarbabu; Verma, P R P; Ganesan, Muniyandithevar; Manivannan, Jeganathan
2017-07-15
Omega-3 fatty acids are clinically useful and the two marine omega-3 fatty acids eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) are prevalent in fish and fish oils. Omega-3 fatty acid formulations should undergo a rigorous regulatory step in order to obtain United States Food and Drug Administration (USFDA) approval as a prescription drug. In connection with this, in addition to quantifying EPA and DHA as fatty acids, there is a need to quantify the levels of their ethyl esters in biological samples. In this study, we make use of the reverse phase high performance liquid chromatography coupled with mass spectrometry (RP-HPLC-MS) technique for the method development. Here, we have developed a novel multiple reaction monitoring method along with optimized parameters for quantification of EPA and DHA as ethyl esters. Additionally, we attempted to validate the bio-analytical method by conducting sensitivity, selectivity, precision and accuracy batch, carryover, and matrix stability experiments. Furthermore, we also implemented our validated method for evaluating the pharmacokinetics of omega-3 fatty acid ethyl ester formulations. Copyright © 2017 Elsevier B.V. All rights reserved.
A Bayesian Inferential Approach to Quantify the Transmission Intensity of Disease Outbreak
Kadi, Adiveppa S.; Avaradi, Shivakumari R.
2015-01-01
Background. The emergence of infectious diseases such as pandemic influenza (H1N1) 2009 has become a great concern and posed new challenges to health authorities worldwide. To control these diseases, various studies have been developed in the field of mathematical modelling, which is a useful tool for understanding epidemiological dynamics and their dependence on social mixing patterns. Method. We have used a Bayesian approach to quantify the disease outbreak through the key epidemiological parameter, the basic reproduction number (R0), using effective contacts, defined as the sum of the product of incidence cases and the probability of the generation time distribution. We have estimated R0 from daily case incidence data for the initial phase of pandemic influenza A/H1N1 2009 in India. Result. The estimated R0 with 95% credible interval is consistent with several other studies on the same strain. Through sensitivity analysis, our study indicates that infectiousness affects the estimate of R0. Conclusion. The basic reproduction number R0 provides useful information to the public health system for controlling the disease using mitigation strategies such as vaccination, quarantine, and so forth. PMID:25784956
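The "effective contacts" construction described above (incidence weighted by the generation-time distribution) underlies a simple estimator of R0; the sketch below implements that idea on made-up daily counts and a made-up generation-time distribution, as one plausible reading of the abstract rather than the authors' exact Bayesian procedure.

```python
import numpy as np

def reproduction_number(incidence, gen_time_probs):
    """Ratio of total incidence to total 'effective contacts', where effective
    contacts on day t are sum_s w_s * I_{t-s}, with w the generation-time pmf."""
    incidence = np.asarray(incidence, dtype=float)
    w = np.asarray(gen_time_probs, dtype=float)
    effective = np.array([
        np.sum(w[:t] * incidence[t - 1::-1][:len(w)]) for t in range(1, len(incidence))
    ])
    return incidence[1:].sum() / effective.sum()

daily_cases = [1, 2, 3, 5, 8, 13, 20, 31, 47, 70]   # hypothetical early-phase counts
gen_time = [0.2, 0.4, 0.25, 0.1, 0.05]              # hypothetical generation-time pmf
print(reproduction_number(daily_cases, gen_time))
```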
Toward quantifying the effectiveness of water trading under uncertainty.
Luo, B; Huang, G H; Zou, Y; Yin, Y Y
2007-04-01
This paper presents a methodology for quantifying the effectiveness of water-trading under uncertainty, by developing an optimization model based on the interval-parameter two-stage stochastic program (TSP) technique. In the study, the effectiveness of a water-trading program is measured by the water volume that can be released through trading from a statistical point of view. The methodology can also deal with recourse water allocation problems generated by randomness in water availability and, at the same time, tackle uncertainties expressed as intervals in the trading system. The developed methodology was tested with a hypothetical water-trading program in an agricultural system in the Swift Current Creek watershed, Canada. Study results indicate that the methodology can effectively measure the effectiveness of a trading program through estimating the water volume being released through trading in a long-term view. A sensitivity analysis was also conducted to analyze the effects of different trading costs on the trading program. It shows that the trading efforts would become ineffective when the trading costs are too high. The case study also demonstrates that the trading program is more effective in a dry season when total water availability is in shortage.
NASA Astrophysics Data System (ADS)
Demaria, Eleonora M.; Nijssen, Bart; Wagener, Thorsten
2007-06-01
Current land surface models use increasingly complex descriptions of the processes that they represent. Increase in complexity is accompanied by an increase in the number of model parameters, many of which cannot be measured directly at large spatial scales. A Monte Carlo framework was used to evaluate the sensitivity and identifiability of ten parameters controlling surface and subsurface runoff generation in the Variable Infiltration Capacity model (VIC). Using the Monte Carlo Analysis Toolbox (MCAT), parameter sensitivities were studied for four U.S. watersheds along a hydroclimatic gradient, based on a 20-year data set developed for the Model Parameter Estimation Experiment (MOPEX). Results showed that simulated streamflows are sensitive to three parameters when evaluated with different objective functions. Sensitivity of the infiltration parameter (b) and the drainage parameter (exp) were strongly related to the hydroclimatic gradient. The placement of vegetation roots played an important role in the sensitivity of model simulations to the thickness of the second soil layer (thick2). Overparameterization was found in the base flow formulation indicating that a simplified version could be implemented. Parameter sensitivity was more strongly dictated by climatic gradients than by changes in soil properties. Results showed how a complex model can be reduced to a more parsimonious form, leading to a more identifiable model with an increased chance of successful regionalization to ungauged basins. Although parameter sensitivities are strictly valid for VIC, this model is representative of a wider class of macroscale hydrological models. Consequently, the results and methodology will have applicability to other hydrological models.
Are quantitative sensitivity analysis methods always reliable?
NASA Astrophysics Data System (ADS)
Huang, X.
2016-12-01
Physical parameterizations developed to represent subgrid-scale physical processes include various uncertain parameters, leading to large uncertainties in today's Earth System Models (ESMs). Sensitivity Analysis (SA) is an efficient approach to quantitatively determine how the uncertainty of the evaluation metric can be apportioned to each parameter. SA can also identify the most influential parameters and thereby reduce the dimensionality of the parametric space. In previous studies, SA-based approaches, such as Sobol' and Fourier amplitude sensitivity testing (FAST), divide the parameters into sensitive and insensitive groups; the former is retained while the latter is eliminated from subsequent scientific study. However, these approaches ignore the loss of interaction effects between the retained parameters and the eliminated ones, even though those interactions contribute to the total sensitivity indices. As a result, traditional SA approaches and tools may identify the wrong parameters as sensitive. In this study, we propose a dynamic global sensitivity analysis method (DGSAM), which iteratively removes the least important parameter until only two parameters remain. We use CLM-CASA, a global terrestrial model, as an example to verify our findings with sample sizes ranging from 7000 to 280000. The results show that DGSAM identifies more influential parameters, which is confirmed by parameter calibration experiments using four popular optimization methods. For example, optimization using the top three parameters selected by DGSAM achieved a 10% improvement over Sobol'-based selection, and the computational cost of calibration was reduced to 1/6 of the original. In the future, it will be necessary to explore alternative SA methods that emphasize parameter interactions.
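The iterative-elimination idea can be sketched as follows. This is not the authors' DGSAM implementation; it is a minimal illustration that uses a crude binning estimate of first-order sensitivity on a toy five-parameter function (a stand-in for an expensive model such as CLM-CASA) and repeatedly fixes the least influential parameter at a nominal value until two parameters remain.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(x):
    """Toy stand-in for an expensive terrestrial model (assumption)."""
    return np.sin(x[:, 0]) + 2.0 * x[:, 1] ** 2 + 0.1 * x[:, 2] * x[:, 3] + 0.01 * x[:, 4]

def first_order_indices(x, y, n_bins=20):
    """Crude binning estimator of first-order sensitivity: Var(E[Y|Xi]) / Var(Y)."""
    s = np.empty(x.shape[1])
    for i in range(x.shape[1]):
        bins = np.quantile(x[:, i], np.linspace(0, 1, n_bins + 1))
        idx = np.clip(np.digitize(x[:, i], bins) - 1, 0, n_bins - 1)
        cond_means = np.array([y[idx == b].mean() for b in range(n_bins)])
        s[i] = cond_means.var() / y.var()
    return s

n_params, n_samples = 5, 20000
active = list(range(n_params))
nominal = np.full(n_params, 0.5)

# Iteratively fix the least influential parameter at its nominal value
# until only two parameters remain free (the DGSAM-style stopping rule).
while len(active) > 2:
    x = np.tile(nominal, (n_samples, 1))
    x[:, active] = rng.uniform(0, 1, size=(n_samples, len(active)))
    y = toy_model(x)
    s = first_order_indices(x[:, active], y)
    dropped = active.pop(int(np.argmin(s)))
    print(f"fixing parameter {dropped}; remaining {active}, indices {np.round(s, 3)}")
```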
Sensitivity studies for a space-based methane lidar mission
NASA Astrophysics Data System (ADS)
Kiemle, C.; Quatrevalet, M.; Ehret, G.; Amediek, A.; Fix, A.; Wirth, M.
2011-10-01
Methane is the third most important greenhouse gas in the atmosphere after water vapour and carbon dioxide. A major handicap to quantifying the emissions at the Earth's surface, in order to better understand biosphere-atmosphere exchange processes and potential climate feedbacks, is the lack of accurate and global observations of methane. Space-based integrated path differential absorption (IPDA) lidar has the potential to fill this gap, and a Methane Remote Lidar Mission (MERLIN) on a small satellite in polar orbit was proposed by DLR and CNES in the frame of a German-French climate monitoring initiative. System simulations are used to identify key performance parameters and to find an advantageous instrument configuration, given the environmental, technological, and budget constraints. The sensitivity studies use representative averages of the atmospheric and surface state to estimate the measurement precision, i.e. the random uncertainty due to instrument noise. Key performance parameters for MERLIN are average laser power, telescope size, orbit height, surface reflectance, and detector noise. A modest-size lidar instrument with 0.45 W average laser power and 0.55 m telescope diameter on a 506 km orbit could provide 50-km averaged methane column measurements along the sub-satellite track with a precision of about 1% over vegetation. The use of a methane absorption trough at 1.65 μm improves the near-surface measurement sensitivity and vastly relaxes the wavelength stability requirement that was identified as one of the major technological risks in the pre-phase A studies for A-SCOPE, a space-based IPDA lidar for carbon dioxide at the European Space Agency. Minimal humidity and temperature sensitivity at this wavelength position will enable accurate measurements in tropical wetlands, key regions with largely uncertain methane emissions. In contrast to current passive remote sensors, measurements in polar regions will be possible and biases due to aerosol layers and thin ice clouds will be minimised.
NASA Technical Reports Server (NTRS)
Laymon, Charles A.; Crosson, William L.; Jackson, Thomas J.; Manu, Andrew; Tsegaye, Teferi D.; Soman, V.; Arnold, James E. (Technical Monitor)
2001-01-01
Accurate estimates of spatially heterogeneous algorithm variables and parameters are required in determining the spatial distribution of soil moisture using radiometer data from aircraft and satellites. A ground-based experiment in passive microwave remote sensing of soil moisture was conducted in Huntsville, Alabama from July 1-14, 1996 to study retrieval algorithms and their sensitivity to variable and parameter specification. With high temporal frequency observations at S and L band, we were able to observe large-scale moisture changes following irrigation and rainfall events, as well as the diurnal behavior of surface moisture among three plots: one bare, one covered with short grass, and one covered with alfalfa. The L-band emitting depth was determined to be on the order of 0-3 or 0-5 cm for moisture contents below 0.30 cubic centimeter/cubic centimeter, with an indication of a shallower emitting depth at higher moisture values. Surface moisture behavior was less apparent on the vegetated plots than on the bare plot because the moisture gradient was smaller and because of the difficulty in determining vegetation water content and estimating the vegetation b parameter. Discrepancies between remotely sensed and gravimetric soil moisture estimates on the vegetated plots point to an incomplete understanding of the requirements needed to correct for the effects of vegetation attenuation. Quantifying the uncertainty in moisture estimates is vital if applications are to utilize remotely sensed soil moisture data. Computations based only on the real part of the complex dielectric constant and/or an alternative dielectric mixing model contribute a relatively insignificant amount of uncertainty to estimates of soil moisture. Rather, the retrieval algorithm is much more sensitive to soil properties, surface roughness and biomass.
Micro-anatomical quantitative optical imaging: toward automated assessment of breast tissues.
Dobbs, Jessica L; Mueller, Jenna L; Krishnamurthy, Savitri; Shin, Dongsuk; Kuerer, Henry; Yang, Wei; Ramanujam, Nirmala; Richards-Kortum, Rebecca
2015-08-20
Pathologists currently diagnose breast lesions through histologic assessment, which requires fixation and tissue preparation. The diagnostic criteria used to classify breast lesions are qualitative and subjective, and inter-observer discordance has been shown to be a significant challenge in the diagnosis of selected breast lesions, particularly for borderline proliferative lesions. Thus, there is an opportunity to develop tools to rapidly visualize and quantitatively interpret breast tissue morphology for a variety of clinical applications. Toward this end, we acquired images of freshly excised breast tissue specimens from a total of 34 patients using confocal fluorescence microscopy and proflavine as a topical stain. We developed computerized algorithms to segment and quantify nuclear and ductal parameters that characterize breast architectural features. A total of 33 parameters were evaluated and used as input to develop a decision tree model to classify benign and malignant breast tissue. Benign features were classified in tissue specimens acquired from 30 patients and malignant features were classified in specimens from 22 patients. The decision tree model that achieved the highest accuracy for distinguishing between benign and malignant breast features used the following parameters: standard deviation of inter-nuclear distance and number of duct lumens. The model achieved 81 % sensitivity and 93 % specificity, corresponding to an area under the curve of 0.93 and an overall accuracy of 90 %. The model classified IDC and DCIS with 92 % and 96 % accuracy, respectively. The cross-validated model achieved 75 % sensitivity and 93 % specificity and an overall accuracy of 88 %. These results suggest that proflavine staining and confocal fluorescence microscopy combined with image analysis strategies to segment morphological features could potentially be used to quantitatively diagnose freshly obtained breast tissue at the point of care without the need for tissue preparation.
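A hedged sketch of the classification step: the snippet below trains a shallow decision tree on the two features named in the abstract (standard deviation of inter-nuclear distance and number of duct lumens), using simulated feature values for illustration only; in practice these features are derived from segmented confocal images.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Hypothetical morphological features per image (names from the abstract,
# values simulated here purely for illustration).
n = 200
std_internuclear = np.concatenate([rng.normal(8, 2, n // 2), rng.normal(14, 3, n // 2)])
n_duct_lumens = np.concatenate([rng.poisson(6, n // 2), rng.poisson(1, n // 2)])
X = np.column_stack([std_internuclear, n_duct_lumens])
y = np.array([0] * (n // 2) + [1] * (n // 2))  # 0 = benign, 1 = malignant

clf = DecisionTreeClassifier(max_depth=2, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```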
Optical sensing of anticoagulation status: Towards point-of-care coagulation testing
Tripathi, Markandey M.; Hajjarian, Zeinab; Van Cott, Elizabeth M.; Nadkarni, Seemantini K.
2017-01-01
Anticoagulant overdose is associated with major bleeding complications. Rapid coagulation sensing may ensure safe and accurate anticoagulant dosing and reduce bleeding risk. Here, we report the novel use of Laser Speckle Rheology (LSR) for measuring anticoagulation and haemodilution status in whole blood. In the LSR approach, blood from 12 patients and 4 swine was placed in disposable cartridges and time-varying intensity fluctuations of laser speckle patterns were measured to quantify the viscoelastic modulus during clotting. Coagulation parameters, mainly clotting time, clot progression rate (α-angle) and maximum clot stiffness (MA) were derived from the clot viscoelasticity trace and compared with standard Thromboelastography (TEG). To demonstrate the capability for anticoagulation sensing in patients, blood samples from 12 patients treated with warfarin anticoagulant were analyzed. LSR clotting time correlated with prothrombin and activated partial thromboplastin time (r = 0.57–0.77, p<0.04) and all LSR parameters demonstrated good correlation with TEG (r = 0.61–0.87, p<0.04). To further evaluate the dose-dependent sensitivity of LSR parameters, swine blood was spiked with varying concentrations of heparin, argatroban and rivaroxaban or serially diluted with saline. We observed that anticoagulant treatments prolonged LSR clotting time in a dose-dependent manner that correlated closely with TEG (r = 0.99, p<0.01). LSR angle was unaltered by anticoagulation whereas TEG angle presented dose-dependent diminution likely linked to the mechanical manipulation of the clot. In both LSR and TEG, MA was largely unaffected by anticoagulation, and LSR presented a higher sensitivity to increased haemodilution in comparison to TEG (p<0.01). Our results establish that LSR rapidly and accurately measures the response of various anticoagulants, opening the opportunity for routine anticoagulation monitoring at the point-of-care or for patient self-testing. PMID:28771571
Quantitative OCT and MRI biomarkers for the differentiation of cartilage degeneration.
Nebelung, Sven; Brill, Nicolai; Tingart, Markus; Pufe, Thomas; Kuhl, Christiane; Jahr, Holger; Truhn, Daniel
2016-04-01
To evaluate the usefulness of quantitative parameters obtained by optical coherence tomography (OCT) and magnetic resonance imaging (MRI) in the comprehensive assessment of human articular cartilage degeneration. Human osteochondral samples of variable degeneration (n = 45) were obtained from total knee replacements and assessed by MRI sequences measuring T1, T1ρ, T2 and T2* relaxivity and by OCT-based quantification of irregularity (OII, optical irregularity index), homogeneity (OHI, optical homogeneity index) and attenuation (OAI, optical attenuation index). Samples were also assessed macroscopically (Outerbridge classification) and histologically (Mankin classification) as grade-0 (Mankin scores 0-4), grade-I (scores 5-8), grade-II (scores 9-10) or grade-III (scores 11-14). After data normalisation, differences between Mankin grades and correlations between imaging parameters were assessed using ANOVA with Tukey's post-hoc test and Spearman's correlation coefficients, respectively. Sensitivities and specificities in the detection of Mankin grade-0 were calculated. Significant degeneration-related increases were found for T2 and OII and decreases for OAI, while T1, T1ρ, T2* and OHI did not reveal significant changes in relation to degeneration. A number of significant correlations between imaging parameters and histological (sub)scores were found, in particular for T2 and OII. Sensitivities and specificities in the detection of Mankin grade-0 were highest for OHI/T1 and OII/T1ρ, respectively. Quantitative OCT and MRI techniques seem to complement each other in the comprehensive assessment of cartilage degeneration. Sufficiently large structural and compositional changes in the extracellular matrix may thus be parameterized and quantified, while the detection of early degeneration remains challenging.
Sleep-Wake Evaluation from Whole-Night Non-Contact Audio Recordings of Breathing Sounds
Dafna, Eliran; Tarasiuk, Ariel; Zigel, Yaniv
2015-01-01
Study Objectives: To develop and validate a novel non-contact system for whole-night sleep evaluation using breathing sounds analysis (BSA). Design: Whole-night breathing sounds (using an ambient microphone) and polysomnography (PSG) were simultaneously collected at a sleep laboratory (mean recording time 7.1 hours). A set of acoustic features quantifying breathing patterns was developed to distinguish between sleep and wake epochs (30 sec segments). Epochs (n = 59,108 design study and n = 68,560 validation study) were classified using an AdaBoost classifier and validated epoch-by-epoch for sensitivity, specificity, positive and negative predictive values, accuracy, and Cohen's kappa. Sleep quality parameters were calculated based on the sleep/wake classifications and compared with PSG for validity. Setting: University-affiliated sleep-wake disorder center and biomedical signal processing laboratory. Patients: One hundred and fifty patients (age 54.0±14.8 years, BMI 31.6±5.5 kg/m2, m/f 97/53) referred for PSG were prospectively and consecutively recruited. The system was trained (design study) on 80 subjects; the validation study was performed blindly on the additional 70 subjects. Measurements and Results: The epoch-by-epoch accuracy rate for the validation study was 83.3%, with a sensitivity of 92.2% (sleep detected as sleep), specificity of 56.6% (wake detected as wake), and Cohen's kappa of 0.508. Comparing sleep quality parameters of BSA and PSG demonstrates average errors in sleep latency, total sleep time, wake after sleep onset, and sleep efficiency of 16.6 min, 35.8 min, 29.6 min, and 8%, respectively. Conclusions: This study provides evidence that sleep-wake activity and sleep quality parameters can be reliably estimated solely using breathing sound analysis. This study highlights the potential of this innovative approach to measure sleep in research and clinical circumstances. PMID:25710495
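A minimal sketch of the epoch-by-epoch classification and validation metrics, assuming pre-computed acoustic features per 30-s epoch (the feature values below are simulated stand-ins, not the study's breathing-sound features):

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import confusion_matrix, cohen_kappa_score

rng = np.random.default_rng(2)

# Simulated acoustic features for 30-s epochs (illustrative only).
n_epochs = 5000
X = rng.normal(size=(n_epochs, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=n_epochs) > 0).astype(int)  # 1 = sleep, 0 = wake

split = int(0.6 * n_epochs)  # train on the first portion, validate on the rest
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X[:split], y[:split])
pred = clf.predict(X[split:])

tn, fp, fn, tp = confusion_matrix(y[split:], pred).ravel()
print(f"sensitivity (sleep as sleep): {tp / (tp + fn):.2f}")
print(f"specificity (wake as wake):  {tn / (tn + fp):.2f}")
print(f"Cohen's kappa: {cohen_kappa_score(y[split:], pred):.2f}")
```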
Comparing maximum intercuspal contacts of virtual dental patients and mounted dental casts.
Delong, Ralph; Ko, Ching-Chang; Anderson, Gary C; Hodges, James S; Douglas, W H
2002-12-01
Quantitative measures of occlusal contacts are of paramount importance in the study of chewing dysfunction. A tool is needed to identify and quantify occlusal parameters without the occlusal interference caused by the technique of analysis. This laboratory simulation study compared occlusal contacts constructed from 3-dimensional images of dental casts and interocclusal records with contacts found by use of conventional methods. Dental casts of 10 completely dentate adults were mounted in a semi-adjustable Denar articulator. Maximum intercuspal contacts were marked on the casts using red film. Intercuspal records made with an experimental vinyl polysiloxane impression material recorded maximum intercuspation. Three-dimensional virtual models of the casts and interocclusal records were made using custom software and an optical scanner. Contacts were calculated between virtual casts aligned manually (CM), aligned with interocclusal records scanned seated on the mandibular casts (C1) or scanned independently (C2), and directly from virtual interocclusal records (IR). Sensitivity and specificity calculations used the marked contacts as the standard. Contact parameters were compared between method pairs. Statistical comparisons used analysis of variance and the Tukey-Kramer post hoc test (P<.05). Sensitivities (range 0.76-0.89) did not differ significantly among the 4 methods (P=.14); however, specificities (range 0.89-0.98) were significantly lower for IR (P=.0001). Contact parameters of methods CM, C1, and C2 differed significantly from those of method IR (P<.02). The ranking based on method pair comparisons was C2/C1 > CM/C1 = CM/C2 > C2/IR > CM/IR > C1/IR, where ">" means "closer than." Within the limits of this study, occlusal contacts calculated from aligned virtual casts accurately reproduce articulator contacts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Y.; Tong, C.; Trainor-Guitten, W. J.
The risk of CO2 leakage from a deep storage reservoir into a shallow aquifer through a fault is assessed and studied using physics-specific computer models. The hypothetical CO2 geological sequestration system is composed of three subsystems: a deep storage reservoir, a fault in caprock, and a shallow aquifer, which are modeled respectively by considering sub-domain-specific physics. Supercritical CO2 is injected into the reservoir subsystem with uncertain permeabilities of reservoir, caprock, and aquifer, uncertain fault location, and injection rate (as a decision variable). The simulated pressure and CO2/brine saturation are connected to the fault-leakage model as a boundary condition. CO2 and brine fluxes from the fault-leakage model at the fault outlet are then imposed in the aquifer model as a source term. Moreover, uncertainties are propagated from the deep reservoir model, to the fault-leakage model, and eventually to the geochemical model in the shallow aquifer, thus contributing to risk profiles. To quantify the uncertainties and assess leakage-relevant risk, we propose a global sampling-based method to allocate sub-dimensions of uncertain parameters to sub-models. The risk profiles are defined and related to CO2 plume development for pH value and total dissolved solids (TDS) below the EPA's Maximum Contaminant Levels (MCL) for drinking water quality. A global sensitivity analysis is conducted to select the parameters most sensitive to the risk profiles. The resulting uncertainty of pH- and TDS-defined aquifer volume, which is impacted by CO2 and brine leakage, mainly results from the uncertainty of fault permeability. Subsequently, high-resolution, reduced-order models of risk profiles are developed as functions of all the decision variables and uncertain parameters in all three subsystems.
NASA Astrophysics Data System (ADS)
Rajkumar, K. V.; Vaidyanathan, S.; Kumar, Anish; Jayakumar, T.; Raj, Baldev; Ray, K. K.
2007-05-01
The best combination of mechanical properties (yield stress and fracture toughness) of M250 maraging steel is obtained through short-term thermal aging (3-10 h) at 755 K. This is attributed to a microstructure containing precipitation of intermetallic phases in an austenite-free, low-carbon martensite matrix. An over-aged microstructure, containing reverted austenite, degrades the mechanical properties drastically. Hence, it is necessary to identify a suitable non-destructive evaluation (NDE) technique for detecting any reverted austenite unambiguously during aging. The influence of aging on microstructure, room-temperature hardness and non-destructive magnetic parameters such as coercivity (Hc), saturation magnetization (Ms) and magnetic Barkhausen emission (MBE) RMS peak voltage is studied in order to derive correlations between these parameters in aged M250 maraging steel. Hardness was found to increase with precipitation of intermetallics during initial aging and to decrease at longer durations due to austenite reversion. Among the different magnetic parameters studied, the MBE RMS peak voltage was found to be very sensitive to austenite reversion (a non-magnetic phase), as it decreased drastically upon initiation of austenite reversion. Hence, this parameter can be effectively utilized to detect and quantify the reverted austenite in maraging steel specimens. The present study clearly indicates that the combination of MBE RMS peak voltage and hardness can be used for unambiguous characterization of microstructural features of technological and practical importance (3-10 h of aging at 755 K) in M250 grade maraging steel.
NASA Astrophysics Data System (ADS)
Wang, S.; Huang, G. H.; Baetz, B. W.; Ancell, B. C.
2017-05-01
Particle filtering techniques have been receiving increasing attention from the hydrologic community due to their ability to properly estimate the model parameters and states of nonlinear and non-Gaussian systems. To facilitate a robust quantification of uncertainty in hydrologic predictions, it is necessary to explicitly examine the forward propagation and evolution of parameter uncertainties and their interactions that affect the predictive performance. This paper presents a unified probabilistic framework that merges the strengths of particle Markov chain Monte Carlo (PMCMC) and factorial polynomial chaos expansion (FPCE) algorithms to robustly quantify and reduce uncertainties in hydrologic predictions. A Gaussian anamorphosis technique is used to establish a seamless bridge between data assimilation using the PMCMC and uncertainty propagation using the FPCE through a straightforward transformation of the posterior distributions of the model parameters. The unified probabilistic framework is applied to the Xiangxi River watershed of the Three Gorges Reservoir (TGR) region in China to demonstrate its validity and applicability. Results reveal that the degree of spatial variability of soil moisture capacity is the most identifiable model parameter, with the fastest convergence through the streamflow assimilation process. The potential interaction between the spatial variability in soil moisture conditions and the maximum soil moisture capacity has the most significant effect on the performance of streamflow predictions. In addition, parameter sensitivities and interactions vary in magnitude and direction over time due to the temporal and spatial dynamics of hydrologic processes.
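A Gaussian anamorphosis can be illustrated with a simple normal-score transform; the sketch below is a generic rank-based version and is only an assumption about the specific form used in the paper.

```python
import numpy as np
from scipy.stats import norm, rankdata

def gaussian_anamorphosis(samples):
    """Normal-score transform: map samples to standard-normal scores via
    their empirical ranks (a simple form of Gaussian anamorphosis)."""
    n = len(samples)
    u = (rankdata(samples) - 0.5) / n          # empirical CDF values in (0, 1)
    return norm.ppf(u)

# Example: a skewed posterior parameter sample transformed to near-Gaussian.
rng = np.random.default_rng(7)
posterior_draws = rng.lognormal(mean=0.0, sigma=0.8, size=1000)
z = gaussian_anamorphosis(posterior_draws)
print(f"transformed mean {z.mean():.2f}, std {z.std():.2f}")  # roughly 0 and 1
```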
Machine Learning Techniques for Global Sensitivity Analysis in Climate Models
NASA Astrophysics Data System (ADS)
Safta, C.; Sargsyan, K.; Ricciuto, D. M.
2017-12-01
Climate model studies are challenged not only by the compute-intensive nature of these models but also by the high dimensionality of the input parameter space. In our previous work with the land model components (Sargsyan et al., 2014) we identified subsets of 10 to 20 parameters relevant for each QoI via Bayesian compressive sensing and variance-based decomposition. Nevertheless, the algorithms were challenged by the nonlinear input-output dependencies for some of the relevant QoIs. In this work we will explore a combination of techniques to extract the relevant parameters for each QoI and subsequently construct surrogate models with quantified uncertainty, necessary for future developments such as model calibration and prediction studies. In the first step, we will compare the skill of machine-learning models (e.g. neural networks, support vector machines) to identify the optimal number of classes in selected QoIs and construct robust multi-class classifiers that will partition the parameter space into regions with smooth input-output dependencies. These classifiers will be coupled with techniques aimed at building sparse and/or low-rank surrogate models tailored to each class. Specifically, we will explore and compare sparse learning techniques with low-rank tensor decompositions. These models will be used to identify the parameters that are important for each QoI. Surrogate accuracy requirements are higher for subsequent model calibration studies, and we will ascertain the performance of this workflow for multi-site ALM simulation ensembles.
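The classify-then-surrogate workflow can be sketched generically: an SVM partitions a toy 10-parameter space into regimes, and a sparse (lasso) surrogate is fit per regime so that nonzero coefficients flag the influential parameters. The toy model and regime structure below are assumptions for illustration; the paper's QoIs come from land-model ensembles.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import Lasso

rng = np.random.default_rng(6)

# Toy QoI with two regimes over a 10-D parameter space (assumed structure).
X = rng.uniform(-1, 1, size=(2000, 10))
regime = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
y = np.where(regime == 1, np.sin(3 * X[:, 0]) + X[:, 2], 0.5 * X[:, 3])

# Step 1: a classifier partitions the parameter space into regimes with
# smoother input-output behaviour.
clf = SVC(kernel="rbf").fit(X, regime)
labels = clf.predict(X)

# Step 2: a sparse (lasso) surrogate per regime; nonzero coefficients point
# to the parameters that matter within each class.
for c in np.unique(labels):
    surrogate = Lasso(alpha=0.01, max_iter=10000).fit(X[labels == c], y[labels == c])
    important = np.flatnonzero(np.abs(surrogate.coef_) > 0.05)
    print(f"regime {c}: influential parameters {important.tolist()}")
```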
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andre, M; University of California, San Diego, San Diego, CA; Heba, E
2015-06-15
Purpose: Nonalcoholic fatty liver disease (NAFLD) is the most common cause of chronic liver disease in the United States, affects 30% of adult Americans and may progress to more serious diseases. Liver biopsy is the standard method for diagnosing NAFLD. MRI can accurately diagnose and quantify hepatic steatosis but is expensive. Sonography with qualitative interpretation by radiologists is lower cost, more accessible but less sensitive for detection. The objective of this study, using MRI proton density fat fraction (PDFF) as reference, is to assess the accuracy for diagnosing and quantifying steatosis with two quantitative US parameters -- backscatter coefficient (BSC) and attenuation coefficient (AC) -- derived from RF signals using the calibration phantom technique. Methods: We performed a prospective, cross-sectional analysis of a cohort of adults (n=204) with NAFLD (MRI-PDFF≥5%) and without NAFLD (controls). Subjects underwent MRI-PDFF and BSC and AC US analyses of the liver on the same day. Patients were randomly assigned to training (n=102, mean age 51±17 years, mean body mass index 31±7 kg/m²) and validation (n=102, mean age 49±17 years, body mass index 30±6 kg/m²) groups; 69% of patients in each group had NAFLD. Results: BSC provided AUC 0.98 (95% CI 0.95–1.00, p<0.0001) for diagnosis of NAFLD; the optimal BSC cut-off provided sensitivity, specificity, positive and negative predictive values (PPV, NPV) of 87%, 91%, 95%, and 76%, respectively. AC provided AUC 0.89 (95% CI 0.81–0.96, p<0.0001) for diagnosis of steatosis; the optimal AC cut-off provided sensitivity, specificity, PPV, NPV of 80%, 84%, 92%, and 66%, respectively. BSC and AC both correlated significantly with MRI-PDFF (P<0.0001). Conclusion: QUS BSC and AC can accurately diagnose and quantify hepatic steatosis, using MRI-PDFF as reference. With further validation, QUS may emerge as an inexpensive, widely available tool for NAFLD assessment. General support: NIH R01 CA111289, K23-DK090303, AmerGastroAssoc Found, TF Williams Scholarship, S3000 scanner loaned by Siemens, Sucampo, JA Hartford Found, Atlantic Philanthropies Amer Gastroenterological Assoc. Agencies had no role in design/conduct of study, collection, management, analysis or interpretation of the data; preparation, review, or approval of the manuscript.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, S.; Toll, J.; Cothern, K.
1995-12-31
The authors have performed robust sensitivity studies of the physico-chemical Hudson River PCB model PCHEPM to identify the parameters and process uncertainties contributing the most to uncertainty in predictions of water column and sediment PCB concentrations over the time period 1977-1991 in one segment of the lower Hudson River. The term "robust sensitivity studies" refers to the use of several sensitivity analysis techniques to obtain a more accurate depiction of the relative importance of different sources of uncertainty. Local sensitivity analysis provided data on the sensitivity of PCB concentration estimates to small perturbations in nominal parameter values. Range sensitivity analysis provided information about the magnitude of prediction uncertainty associated with each input uncertainty. Rank correlation analysis indicated which parameters had the most dominant influence on model predictions. Factorial analysis identified important interactions among model parameters. Finally, term analysis looked at the aggregate influence of combinations of parameters representing physico-chemical processes. The authors scored the results of the local and range sensitivity and rank correlation analyses. They considered parameters that scored high on two of the three analyses to be important contributors to PCB concentration prediction uncertainty, and treated them probabilistically in simulations. They also treated probabilistically parameters identified in the factorial analysis as interacting with important parameters. The authors used the term analysis to better understand how uncertain parameters were influencing the PCB concentration predictions. The importance analysis allowed the number of parameters to be modeled probabilistically to be reduced from 16 to 5. This reduced the computational complexity of Monte Carlo simulations and, more importantly, provided a more lucid depiction of prediction uncertainty and its causes.
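The rank-correlation component of such a "robust" screening can be illustrated with Spearman correlations between Monte Carlo parameter samples and model predictions. The toy fate model and parameter ranges below are assumptions for illustration, not PCHEPM:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(5)

def toy_fate_model(params):
    """Toy stand-in for a physico-chemical fate model (assumption): predicted
    concentration as a nonlinear function of a few uncertain parameters."""
    k_partition, k_settling, k_degradation, depth = params.T
    return k_partition * np.exp(-k_degradation) / (k_settling + depth)

n = 2000
params = rng.uniform(low=[0.1, 0.01, 0.01, 1.0],
                     high=[2.0, 0.5, 1.0, 10.0],
                     size=(n, 4))
pred = toy_fate_model(params)

names = ["k_partition", "k_settling", "k_degradation", "depth"]
for i, name in enumerate(names):
    rho, _ = spearmanr(params[:, i], pred)
    print(f"{name:>14s}: Spearman rho = {rho:+.2f}")
```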
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeong, Hyunjo, E-mail: hjjeong@wku.ac.kr; Cho, Sungjong; Zhang, Shuzeng
2016-04-15
In recent studies with nonlinear Rayleigh surface waves, harmonic generation measurements have been successfully employed to characterize material damage and microstructural changes, and found to be sensitive to early stages of the damage process. A nonlinearity parameter of Rayleigh surface waves was derived and is frequently measured to quantify the level of damage. The accurate measurement of the nonlinearity parameter generally requires making corrections for beam diffraction and medium attenuation. These effects are not generally known for nonlinear Rayleigh waves, and therefore were not properly considered in most previous studies. In this paper, the nonlinearity parameter for a Rayleigh surface wave is defined from the plane wave displacement solutions. We explicitly define the attenuation and diffraction corrections for fundamental and second harmonic Rayleigh wave beams radiated from a uniform line source. Attenuation corrections are obtained from the quasilinear theory of plane Rayleigh wave equations. To obtain closed-form expressions for diffraction corrections, multi-Gaussian beam (MGB) models are employed to represent the integral solutions derived from the quasilinear theory of the full two-dimensional wave equation without parabolic approximation. Diffraction corrections are presented for a couple of transmitter-receiver geometries, and the effects of making attenuation and diffraction corrections are examined through the simulation of nonlinearity parameter determination in a solid sample.
An Embedded Sensory System for Worker Safety: Prototype Development and Evaluation
Cho, Chunhee; Park, JeeWoong
2018-01-01
At a construction site, workers mainly rely on two senses, which are sight and sound, in order to perceive their physical surroundings. However, they are often hindered by the nature of most construction sites, which are usually dynamic, loud, and complicated. To overcome these challenges, this research explored a method using an embedded sensory system that might offer construction workers an artificial sensing ability to better perceive their surroundings. This study identified three parameters (i.e., intensity, signal length, and delay between consecutive pulses) needed for tactile-based signals for the construction workers to communicate quickly. We developed a prototype system based on these parameters, conducted experimental studies to quantify and validate the sensitivity of the parameters for quick communication, and analyzed test data to reveal what was added by this method in order to perceive information from the tactile signals. The findings disclosed that the parameters of tactile-based signals and their distinguishable ranges could be perceived in a short amount of time (i.e., a fraction of a second). Further experimentation demonstrated the capability of the identified unit signals combined with a signal mapping technique to effectively deliver simple information to individuals and offer an additional sense of awareness to the surroundings. The findings of this study could serve as a basis for future research in exploring advanced tactile-based messages to overcome challenges in environments for which communication is a struggle. PMID:29662008
On the direct detection of multi-component dark matter: sensitivity studies and parameter estimation
NASA Astrophysics Data System (ADS)
Herrero-Garcia, Juan; Scaffidi, Andre; White, Martin; Williams, Anthony G.
2017-11-01
We study the case of multi-component dark matter, in particular how direct detection signals are modified in the presence of several stable weakly-interacting massive particles. Assuming a positive signal in a future direct detection experiment, stemming from two dark matter components, we study the region in parameter space where it is possible to distinguish a one-component from a two-component dark matter spectrum. First, we leave the two dark matter masses as free parameters and show that the two hypotheses can be significantly discriminated for a range of dark matter masses, with their splitting being the critical factor. We then investigate how including the effects of different interaction strengths, local densities or velocity dispersions for the two components modifies these conclusions. We also consider the case of isospin-violating couplings. In all scenarios, we show results for various types of nuclei, both for elastic spin-independent and spin-dependent interactions. Finally, assuming that the two-component hypothesis is confirmed, we quantify the accuracy with which the parameters can be extracted and discuss the different degeneracies that occur. This includes studying the case in which only a single experiment observes a signal, as well as the scenario of two signals from two different experiments, in which case the ratios of the couplings to neutrons and protons may also be extracted.
Castro Sanchez, Amparo Yovanna; Aerts, Marc; Shkedy, Ziv; Vickerman, Peter; Faggiano, Fabrizio; Salamina, Guiseppe; Hens, Niel
2013-03-01
The hepatitis C virus (HCV) and the human immunodeficiency virus (HIV) are a clear threat to public health, with high prevalences especially in high-risk groups such as injecting drug users. People with HIV infection who are also infected by HCV suffer a more rapid progression to HCV-related liver disease and have an increased risk of cirrhosis and liver cancer. Quantifying the impact of HIV and HCV co-infection is therefore of great importance. We propose a new joint mathematical model accounting for co-infection with the two viruses in the context of injecting drug users (IDUs). Statistical concepts and methods are used to assess the model from a statistical perspective, in order to gain further insight into: (i) the comparison and selection of optional model components, (ii) the unknown values of the numerous model parameters, (iii) the parameters to which the model is most 'sensitive' and (iv) the combinations or patterns of values in the high-dimensional parameter space that are most supported by the data. Data from a longitudinal study of heroin users in Italy are used to illustrate the application of the proposed joint model and its statistical assessment. The parameters associated with contact rates (sharing syringes) and the transmission rates per syringe-sharing event are shown to play a major role. Copyright © 2013 Elsevier B.V. All rights reserved.
A Gaussian Approximation Approach for Value of Information Analysis.
Jalal, Hawre; Alarid-Escudero, Fernando
2018-02-01
Most decisions are associated with uncertainty. Value of information (VOI) analysis quantifies the opportunity loss associated with choosing a suboptimal intervention based on current imperfect information. VOI can inform the value of collecting additional information, resource allocation, research prioritization, and future research designs. However, in practice, VOI remains underused due to many conceptual and computational challenges associated with its application. Expected value of sample information (EVSI) is rooted in Bayesian statistical decision theory and measures the value of information from a finite sample. The past few years have witnessed a dramatic growth in computationally efficient methods to calculate EVSI, including metamodeling. However, little research has been done to simplify the experimental data collection step inherent to all EVSI computations, especially for correlated model parameters. This article proposes a general Gaussian approximation (GA) of the traditional Bayesian updating approach based on the original work by Raiffa and Schlaifer to compute EVSI. The proposed approach uses a single probabilistic sensitivity analysis (PSA) data set and involves 2 steps: 1) a linear metamodel step to compute the EVSI on the preposterior distributions and 2) a GA step to compute the preposterior distribution of the parameters of interest. The proposed approach is efficient and can be applied for a wide range of data collection designs involving multiple non-Gaussian parameters and unbalanced study designs. Our approach is particularly useful when the parameters of an economic evaluation are correlated or interact.
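The simplest member of the VOI family, the expected value of perfect information (EVPI), can be computed directly from a single PSA sample; the sketch below shows that baseline calculation (with simulated net-benefit draws), not the proposed GA-based EVSI method itself.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical probabilistic sensitivity analysis (PSA) sample: net monetary
# benefit of two interventions across 10,000 parameter draws.
n = 10_000
nb_standard = rng.normal(loc=20_000, scale=3_000, size=n)
nb_new = rng.normal(loc=21_000, scale=5_000, size=n)
nb = np.column_stack([nb_standard, nb_new])

# Expected value of perfect information: the gap between deciding with
# perfect knowledge of the parameters (max inside the expectation) and
# deciding on current information (max of the expectations).
evpi = np.mean(nb.max(axis=1)) - nb.mean(axis=0).max()
print(f"EVPI per patient: {evpi:.0f} monetary units")
```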
NASA Astrophysics Data System (ADS)
Yan, Fang; Winijkul, Ekbordin; Bond, Tami C.; Streets, David G.
2014-04-01
Estimates of future emissions are necessary for understanding the future health of the atmosphere, designing national and international strategies for air quality control, and evaluating mitigation policies. Emission inventories are uncertain and future projections even more so, thus it is important to quantify the uncertainty inherent in emission projections. This paper is the second in a series that seeks to establish a more mechanistic understanding of future air pollutant emissions based on changes in technology. The first paper in this series (Yan et al., 2011) described a model that projects emissions based on dynamic changes of vehicle fleet, Speciated Pollutant Emission Wizard-Trend, or SPEW-Trend. In this paper, we explore the underlying uncertainties of global and regional exhaust PM emission projections from on-road vehicles in the coming decades using sensitivity analysis and Monte Carlo simulation. This work examines the emission sensitivities due to uncertainties in retirement rate, timing of emission standards, transition rate of high-emitting vehicles called “superemitters”, and emission factor degradation rate. It is concluded that global emissions are most sensitive to parameters in the retirement rate function. Monte Carlo simulations show that emission uncertainty caused by lack of knowledge about technology composition is comparable to the uncertainty demonstrated by alternative economic scenarios, especially during the period 2010-2030.
Boundary overlap for medical image segmentation evaluation
NASA Astrophysics Data System (ADS)
Yeghiazaryan, Varduhi; Voiculescu, Irina
2017-03-01
All medical image segmentation algorithms need to be validated and compared, and yet no evaluation framework is widely accepted within the imaging community. Collections of segmentation results often need to be compared and ranked by their effectiveness. Evaluation measures which are popular in the literature are based on region overlap or boundary distance. None of these are consistent in the way they rank segmentation results: they tend to be sensitive to one or another type of segmentation error (size, location, shape) but no single measure covers all error types. We introduce a new family of measures, with hybrid characteristics. These measures quantify similarity/difference of segmented regions by considering their overlap around the region boundaries. This family is more sensitive than other measures in the literature to combinations of segmentation error types. We compare measure performance on collections of segmentation results sourced from carefully compiled 2D synthetic data, and also on 3D medical image volumes. We show that our new measure: (1) penalises errors successfully, especially those around region boundaries; (2) gives a low similarity score when existing measures disagree, thus avoiding overly inflated scores; and (3) scores segmentation results over a wider range of values. We consider a representative measure from this family and the effect of its only free parameter on error sensitivity, typical value range, and running time.
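One illustrative hybrid of this kind (not the authors' exact measure) is a Dice coefficient restricted to bands around the region boundaries, sketched below with scipy's morphological operators:

```python
import numpy as np
from scipy import ndimage

def boundary_band(mask, width=3):
    """Band of pixels around the region boundary (boundary dilated by `width`)."""
    boundary = mask ^ ndimage.binary_erosion(mask)
    return ndimage.binary_dilation(boundary, iterations=width)

def boundary_dice(seg, ref, width=3):
    """Dice overlap restricted to bands around the two region boundaries."""
    a, b = boundary_band(seg, width), boundary_band(ref, width)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Toy example: reference square vs. a slightly shifted segmentation.
ref = np.zeros((100, 100), dtype=bool)
ref[30:70, 30:70] = True
seg = np.zeros_like(ref)
seg[33:73, 30:70] = True

print(f"boundary-band Dice: {boundary_dice(seg, ref):.2f}")
```

The band width plays the role of the measure's free parameter: widening it makes the score more forgiving of small boundary displacements.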
NASA Astrophysics Data System (ADS)
Cambaliza, M. O. L.; Shepson, P. B.; Caulton, D. R.; Stirm, B.; Samarov, D.; Gurney, K. R.; Turnbull, J.; Davis, K. J.; Possolo, A.; Karion, A.; Sweeney, C.; Moser, B.; Hendricks, A.; Lauvaux, T.; Mays, K.; Whetstone, J.; Huang, J.; Razlivanov, I.; Miles, N. L.; Richardson, S. J.
2014-09-01
Urban environments are the primary contributors to global anthropogenic carbon emissions. Because much of the growth in CO2 emissions will originate from cities, there is a need to develop, assess, and improve measurement and modeling strategies for quantifying and monitoring greenhouse gas emissions from large urban centers. In this study the uncertainties in an aircraft-based mass balance approach for quantifying carbon dioxide and methane emissions from an urban environment, focusing on Indianapolis, IN, USA, are described. The relatively level terrain of Indianapolis facilitated the application of mean wind fields in the mass balance approach. We investigate the uncertainties in our aircraft-based mass balance approach by (1) assessing the sensitivity of the measured flux to important measurement and analysis parameters including wind speed, background CO2 and CH4, boundary layer depth, and interpolation technique, and (2) determining the flux at two or more downwind distances from a point or area source (with relatively large source strengths such as solid waste facilities and a power generating station) in rapid succession, assuming that the emission flux is constant. When we quantify the precision in the approach by comparing the estimated emissions derived from measurements at two or more downwind distances from an area or point source, we find that the minimum and maximum repeatability were 12 and 52%, with an average of 31%. We suggest that improvements in the experimental design can be achieved by careful determination of the background concentration, monitoring the evolution of the boundary layer through the measurement period, and increasing the number of downwind horizontal transect measurements at multiple altitudes within the boundary layer.
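A simplified sketch of the mass-balance calculation, with illustrative numbers only: the enhancement above background is converted to a molar concentration, multiplied by the wind component perpendicular to the flight transect, integrated across the transect, and scaled by an assumed well-mixed boundary-layer depth.

```python
import numpy as np

# Hypothetical downwind transect: distance along the flight track (m), CH4
# enhancement above background (ppb), and mean perpendicular wind (m/s).
x = np.linspace(0.0, 15_000.0, 151)                    # transect coordinate (m)
enhancement_ppb = 30.0 * np.exp(-((x - 7_500.0) / 3_000.0) ** 2)
u_perp = 4.0                                           # m/s
pbl_depth = 1_200.0                                    # boundary-layer depth (m)
air_density_mol = 40.0                                 # approx. mol of air per m^3

# ppb enhancement -> molar concentration enhancement (mol/m^3), then integrate
# (concentration * wind) across the transect and extend vertically assuming a
# well-mixed boundary layer.
conc_enh = enhancement_ppb * 1e-9 * air_density_mol    # mol/m^3
q = conc_enh * u_perp                                  # mol m^-2 s^-1
line_integral = np.sum(0.5 * (q[1:] + q[:-1]) * np.diff(x))   # trapezoid rule
flux_mol_per_s = line_integral * pbl_depth
print(f"estimated CH4 emission rate: {flux_mol_per_s:.1f} mol/s")
```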
Effect of soil structure on the growth of bacteria in soil quantified using CARD-FISH
NASA Astrophysics Data System (ADS)
Juyal, Archana; Eickhorst, Thilo; Falconer, Ruth; Otten, Wilfred
2014-05-01
It has been reported that compaction of soil due to use of heavy machinery has resulted in the reduction of crop yield. Compaction affects the physical properties of soil such as bulk density, soil strength and porosity. This causes an alteration in the soil structure which limits the mobility of nutrients, water and air infiltration and root penetration in soil. Several studies have been conducted to explore the effect of soil compaction on plant growth and development. However, there is scant information on the effect of soil compaction on the microbial community and its activities in soil. Understanding the effect of soil compaction on microbial community is essential as microbial activities are very sensitive to abrupt environmental changes in soil. Therefore, the aim of this work was to investigate the effect of soil structure on growth of bacteria in soil. The bulk density of soil was used as a soil physical parameter to quantify the effect of soil compaction. To detect and quantify bacteria in soil the method of catalyzed reporter deposition-fluorescence in situ hybridization (CARD-FISH) was used. This technique results in high intensity fluorescent signals which make it easy to quantify bacteria against high levels of autofluorescence emitted by soil particles and organic matter. In this study, bacterial strains Pseudomonas fluorescens SBW25 and Bacillus subtilis DSM10 were used. Soils of aggregate size 2-1mm were packed at five different bulk densities in polyethylene rings (4.25 cm3).The soil rings were sampled at four different days. Results showed that the total number of bacteria counts was reduced significantly (P
NASA Astrophysics Data System (ADS)
Sun, Guodong; Mu, Mu
2017-05-01
An important source of uncertainty, which causes further uncertainty in numerical simulations, is that residing in the parameters describing physical processes in numerical models. Therefore, identifying, among the numerous physical parameters in atmospheric and oceanic models, the subset of relatively more sensitive and important parameters, and reducing the errors in that subset, is a far more efficient way to reduce the uncertainties involved in simulations. In this context, we present a new approach based on the conditional nonlinear optimal perturbation related to parameter (CNOP-P) method. The approach provides a framework to ascertain the subset of relatively more sensitive and important parameters among the physical parameters. The Lund-Potsdam-Jena (LPJ) dynamical global vegetation model was utilized to test the validity of the new approach in China. The results imply that nonlinear interactions among parameters play a key role in the identification of sensitive parameters in the arid and semi-arid regions of China compared to those in northern, northeastern, and southern China. The uncertainties in the numerical simulations were reduced considerably by reducing the errors of the subset of relatively more sensitive and important parameters. The results demonstrate not only that our approach offers a new route to identify relatively more sensitive and important physical parameters, but also that it is viable to then apply "target observations" to reduce the uncertainties in model parameters.
DOT National Transportation Integrated Search
2012-01-01
Overview of presentation: evaluation parameters; EPA's sensitivity analysis; comparison to baseline case; MOVES sensitivity run specification; MOVES sensitivity input parameters; results; uses of study.
Sensitivity-Uncertainty Techniques for Nuclear Criticality Safety
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise
2017-08-07
The sensitivity and uncertainty analysis course will introduce students to keff sensitivity data, cross-section uncertainty data, how keff sensitivity data and keff uncertainty data are generated, and how they can be used. Discussion will include how sensitivity/uncertainty data can be used to select applicable critical experiments, to quantify a defensible margin to cover validation gaps and weaknesses, and in development of upper subcritical limits.
Sensitivity Analysis of Hydraulic Head to Locations of Model Boundaries
Lu, Zhiming
2018-01-30
Sensitivity analysis is an important component of many modeling activities in hydrology. Numerous studies have been conducted in calculating various sensitivities. Most of these sensitivity analyses focus on the sensitivity of state variables (e.g. hydraulic head) to parameters representing medium properties, such as hydraulic conductivity, or to prescribed values, such as constant head or flux at boundaries, while few studies address the sensitivity of the state variables to shape parameters or design parameters that control the model domain. Instead, these shape parameters are typically assumed to be known in the model. In this study, based on the flow equation, we derive the equation (and its associated initial and boundary conditions) for the sensitivity of hydraulic head to shape parameters using a continuous sensitivity equation (CSE) approach. These sensitivity equations can be solved numerically in general, or analytically in some simplified cases. Finally, the approach is demonstrated through two examples, and the results compare favorably to those from analytical solutions or numerical finite difference methods with perturbed model domains, while the numerical shortcomings of the finite difference method are avoided.
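For intuition, the finite-difference counterpart that the CSE results are compared against can be reproduced on a trivial 1-D fixed-head problem, where the sensitivity of head to the location of the downstream boundary is also known analytically. The sketch below is only that comparison case, not the CSE derivation itself:

```python
import numpy as np

def head_1d(x, L, h0=10.0, hL=5.0):
    """Steady 1-D head between fixed-head boundaries at 0 and L (no recharge)."""
    return h0 + (hL - h0) * x / L

L = 100.0
x = np.linspace(0.0, 90.0, 10)          # observation points inside the domain

# Finite-difference sensitivity of head to the location of the right boundary,
# obtained by perturbing the domain length (the "perturbed model domain" idea).
dL = 0.01
sens_fd = (head_1d(x, L + dL) - head_1d(x, L - dL)) / (2 * dL)

# Analytical sensitivity dh/dL for this simple case, for comparison.
sens_exact = -(5.0 - 10.0) * x / L**2
print(np.allclose(sens_fd, sens_exact, rtol=1e-3))
```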
Evolution of Geometric Sensitivity Derivatives from Computer Aided Design Models
NASA Technical Reports Server (NTRS)
Jones, William T.; Lazzara, David; Haimes, Robert
2010-01-01
The generation of design parameter sensitivity derivatives is required for gradient-based optimization. Such sensitivity derivatives are elusive at best when working with geometry defined within the solid modeling context of Computer-Aided Design (CAD) systems. Solid modeling CAD systems are often proprietary and always complex, thereby necessitating ad hoc procedures to infer parameter sensitivity. A new perspective is presented that makes direct use of the hierarchical associativity of CAD features to trace their evolution and thereby track design parameter sensitivity. In contrast to ad hoc methods, this method provides a more concise procedure following the model design intent and determining the sensitivity of CAD geometry directly to its respective defining parameters.
Hall, Gunnsteinn; Liang, Wenxuan; Li, Xingde
2017-10-01
Collagen fiber alignment derived from second harmonic generation (SHG) microscopy images can be important for disease diagnostics. Image processing algorithms are needed to quantify the alignment in images robustly, with high sensitivity and reliability. Fourier transform (FT) magnitude, 2D power spectrum, and image autocorrelation have previously been used to extract fiber information from images by assuming a certain mathematical model (e.g. a Gaussian distribution of the fiber-related parameters) and fitting. The fitting process is slow and fails to converge when the data are not Gaussian. Herein we present an efficient constant-time deterministic algorithm which characterizes the symmetry of the FT magnitude image in terms of a single parameter, the fiber alignment anisotropy R, ranging from 0 (randomized fibers) to 1 (perfect alignment). This represents an important improvement of the technology and may bring us one step closer to utilizing it for various applications in real time. In addition, we present a digital image phantom-based framework for characterizing and validating the algorithm, as well as assessing its robustness against different perturbations.
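An FFT-magnitude-based anisotropy can be sketched as below; this version uses second moments of the power spectrum and an eigenvalue ratio, which is one plausible reading of such a parameter but not necessarily the authors' exact definition of R:

```python
import numpy as np

def fft_anisotropy(image):
    """Anisotropy of the 2-D FFT magnitude from second moments of the power
    spectrum: near 0 for an isotropic spectrum, approaching 1 for strong
    alignment along one orientation."""
    f = np.fft.fftshift(np.abs(np.fft.fft2(image - image.mean())))
    ny, nx = f.shape
    ky, kx = np.mgrid[-(ny // 2):ny - ny // 2, -(nx // 2):nx - nx // 2]
    w = f ** 2
    cov = np.array([[np.sum(w * kx * kx), np.sum(w * kx * ky)],
                    [np.sum(w * ky * kx), np.sum(w * ky * ky)]]) / np.sum(w)
    evals = np.linalg.eigvalsh(cov)            # ascending order
    return (evals[1] - evals[0]) / (evals[1] + evals[0])

# Toy images: horizontal stripes (strongly aligned 'fibers') vs. random noise.
y, x = np.mgrid[0:256, 0:256]
striped = np.sin(2.0 * np.pi * y / 16.0)
noise_img = np.random.default_rng(0).normal(size=(256, 256))
print(f"striped image anisotropy: {fft_anisotropy(striped):.2f}")
print(f"noise image anisotropy:   {fft_anisotropy(noise_img):.2f}")
```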
Hamilton, Lindsay; Franklin, Robin J M; Jeffery, Nick D
2007-09-18
Clinical spinal cord injury in domestic dogs provides a model population in which to test the efficacy of putative therapeutic interventions for human spinal cord injury. To achieve this potential a robust method of functional analysis is required so that statistical comparison of numerical data derived from treated and control animals can be achieved. In this study we describe the use of digital motion capture equipment combined with mathematical analysis to derive a simple quantitative parameter - 'the mean diagonal coupling interval' - to describe coordination between forelimb and hindlimb movement. In normal dogs this parameter is independent of size, conformation, speed of walking or gait pattern. We show here that mean diagonal coupling interval is highly sensitive to alterations in forelimb-hindlimb coordination in dogs that have suffered spinal cord injury, and can be accurately quantified, but is unaffected by orthopaedic perturbations of gait. Mean diagonal coupling interval is an easily derived, highly robust measurement that provides an ideal method to compare the functional effect of therapeutic interventions after spinal cord injury in quadrupeds.
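A minimal sketch of one way to compute such a coupling interval from footfall timestamps is given below; the limb pairing and the use of the nearest-contact offset are assumptions, and the study's definition (derived from digital motion capture) may normalise differently:

```python
import numpy as np

# Hypothetical paw-contact times (seconds) from motion capture, for one
# diagonal limb pair: left forelimb and right hindlimb.
left_fore = np.array([0.00, 0.62, 1.25, 1.88, 2.50])
right_hind = np.array([0.05, 0.70, 1.30, 1.97, 2.56])

# For each forelimb contact, take the nearest hindlimb contact of the
# diagonal pair and average the time offsets.
coupling_intervals = [np.min(np.abs(right_hind - t)) for t in left_fore]
mean_diag_coupling = float(np.mean(coupling_intervals))
print(f"mean diagonal coupling interval: {mean_diag_coupling:.3f} s")
```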
Introduction and application of the multiscale coefficient of variation analysis.
Abney, Drew H; Kello, Christopher T; Balasubramaniam, Ramesh
2017-10-01
Quantifying how patterns of behavior relate across multiple levels of measurement typically requires long time series for reliable parameter estimation. We describe a novel analysis that estimates patterns of variability across multiple scales of analysis suitable for time series of short duration. The multiscale coefficient of variation (MSCV) measures the distance between local coefficient of variation estimates within particular time windows and the overall coefficient of variation across all time samples. We first describe the MSCV analysis and provide an example analytical protocol with corresponding MATLAB implementation and code. Next, we present a simulation study testing the new analysis using time series generated by ARFIMA models that span white noise, short-term and long-term correlations. The MSCV analysis was observed to be sensitive to specific parameters of ARFIMA models varying in the type of temporal structure and time series length. We then apply the MSCV analysis to short time series of speech phrases and musical themes to show commonalities in multiscale structure. The simulation and application studies provide evidence that the MSCV analysis can discriminate between time series varying in multiscale structure and length.
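For illustration, a minimal sketch of the MSCV idea: local coefficients of variation within fixed windows compared against the overall coefficient of variation. The published analysis and its MATLAB implementation may aggregate the windows differently, and the window sizes below are arbitrary.

```python
import numpy as np

def mscv(x, window_sizes):
    """Multiscale coefficient of variation (sketch of the idea only).

    For each window size, compute the mean absolute distance between the
    local CV (within non-overlapping windows) and the global CV.
    """
    x = np.asarray(x, dtype=float)
    global_cv = x.std() / x.mean()
    out = {}
    for w in window_sizes:
        n = len(x) // w
        local = x[:n * w].reshape(n, w)
        local_cv = local.std(axis=1) / local.mean(axis=1)
        out[w] = np.mean(np.abs(local_cv - global_cv))
    return out

# Example on a short synthetic series
rng = np.random.default_rng(1)
series = rng.gamma(shape=2.0, scale=1.0, size=300)
print(mscv(series, window_sizes=[5, 10, 20, 50]))
```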
Seasonal occurrences of ostracodes in lakes and streams of the San Francisco Peninsula, California
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carter, C.
1991-09-01
Fresh-water ostracodes from eight different sites on the San Francisco Peninsula were sampled periodically between May 1989 and May 1990. Seasonal variations in the relative abundances of ostracode species were observed. Those changes are believed to have been due, at least in part, to seasonal changes in water and sediment temperatures. Ostracodes are bivalved crustaceans with calcite carapaces that live in most aquatic environments, from the oceans to ditches and seeps. Each environment has its own set of physical and chemical parameters and hosts its own characteristic species of ostracodes because ostracodes are sensitive to these parameters. Ostracodes can be used as environmental indicators. Fresh-water ostracodes are good indicators of water chemistry and thus of the local climate. Although ostracode biology is poorly known, it is known that ostracode life cycles are temperature dependent and that therefore ostracode populations exhibit seasonal fluctuations. This study is an effort to document and quantify the seasonal fluctuations in a few California ostracode populations in terms of the relative abundances of the individual species comprising the total population. 3 refs., 24 figs.
Bayesian Abel Inversion in Quantitative X-Ray Radiography
Howard, Marylesa; Fowler, Michael; Luttman, Aaron; ...
2016-05-19
A common image formation process in high-energy X-ray radiography is to have a pulsed power source that emits X-rays through a scene, a scintillator that absorbs X-rays and fluoresces in the visible spectrum in response to the absorbed photons, and a CCD camera that images the visible light emitted from the scintillator. The intensity image is related to areal density, and, for an object that is radially symmetric about a central axis, the Abel transform then gives the object's volumetric density. Two of the primary drawbacks to classical variational methods for Abel inversion are their sensitivity to the type and scale of regularization chosen and the lack of natural methods for quantifying the uncertainties associated with the reconstructions. In this work we cast the Abel inversion problem within a statistical framework in order to compute volumetric object densities from X-ray radiographs and to quantify uncertainties in the reconstruction. A hierarchical Bayesian model is developed with a likelihood based on a Gaussian noise model and with priors placed on the unknown density profile, the data precision matrix, and two scale parameters. This allows the data to drive the localization of features in the reconstruction and results in a joint posterior distribution for the unknown density profile, the prior parameters, and the spatial structure of the precision matrix. Results of the density reconstructions and pointwise uncertainty estimates are presented for both synthetic signals and real data from a U.S. Department of Energy X-ray imaging facility.
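The hierarchical Bayesian model itself is not restated in the abstract; the sketch below shows only a discretized Abel forward operator applied to a synthetic profile and a classical Tikhonov-regularized inversion, i.e. the baseline whose fixed regularization the Bayesian approach replaces. The grid, profile and noise level are invented.

```python
import numpy as np

def abel_forward_matrix(radii):
    """Simple midpoint discretization of the Abel transform:
    P(y_i) = 2 * sum_j rho(r_j) * r_j * dr / sqrt(r_j^2 - y_i^2), for r_j > y_i."""
    dr = radii[1] - radii[0]
    A = np.zeros((len(radii), len(radii)))
    for i, y in enumerate(radii):
        for j, r in enumerate(radii):
            if r > y:
                A[i, j] = 2.0 * r * dr / np.sqrt(r * r - y * y)
    return A

r = np.linspace(0.0, 1.0, 100)
A = abel_forward_matrix(r)
rho_true = np.exp(-((r - 0.4) / 0.1) ** 2)                     # synthetic density profile
data = A @ rho_true + 0.05 * np.random.default_rng(2).normal(size=r.size)

# Tikhonov (ridge) inversion with a fixed regularization scale; the paper
# replaces this fixed choice with a hierarchical Bayesian model.
lam = 1e-2
rho_hat = np.linalg.solve(A.T @ A + lam * np.eye(r.size), A.T @ data)
print(np.abs(rho_hat - rho_true).max())
```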
The point-spread function measure of resolution for the 3-D electrical resistivity experiment
NASA Astrophysics Data System (ADS)
Oldenborger, Greg A.; Routh, Partha S.
2009-02-01
The solution appraisal component of the inverse problem involves investigation of the relationship between our estimated model and the actual model. However, full appraisal is difficult for large 3-D problems such as electrical resistivity tomography (ERT). We tackle the appraisal problem for 3-D ERT via the point-spread functions (PSFs) of the linearized resolution matrix. The PSFs represent the impulse response of the inverse solution and quantify our parameter-specific resolving capability. We implement an iterative least-squares solution of the PSF for the ERT experiment, using on-the-fly calculation of the sensitivity via an adjoint integral equation with stored Green's functions and subgrid reduction. For a synthetic example, analysis of individual PSFs demonstrates the truly 3-D character of the resolution. The PSFs for the ERT experiment are Gaussian-like in shape, with directional asymmetry and significant off-diagonal features. Computation of attributes representative of the blurring and localization of the PSF reveals significant spatial dependence of the resolution, with some correlation to the electrode infrastructure. Application to a time-lapse ground-water monitoring experiment demonstrates the utility of the PSF for assessing feature discrimination, predicting artefacts and identifying model dependence of resolution. For a judicious selection of model parameters, we analyse the PSFs and their attributes to quantify the case-specific localized resolving capability and its variability over regions of interest. We observe approximate interborehole resolving capability of less than 1-1.5 m in the vertical direction and less than 1-2.5 m in the horizontal direction. Resolving capability deteriorates significantly outside the electrode infrastructure.
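As a small illustration of the PSF idea (not the adjoint, stored-Green's-function machinery used in the paper), one column of a linearized, damped least-squares resolution matrix can be computed directly for a toy sensitivity matrix; the damping value and matrix sizes below are arbitrary.

```python
import numpy as np

def point_spread_function(J, k, lam=1e-2):
    """One column of the linearized resolution matrix R = (J^T J + lam*I)^-1 J^T J.

    The k-th column is the model recovered from a unit impulse in parameter k,
    i.e. the PSF used to assess parameter-specific resolving capability.
    """
    e_k = np.zeros(J.shape[1])
    e_k[k] = 1.0
    rhs = J.T @ (J @ e_k)
    return np.linalg.solve(J.T @ J + lam * np.eye(J.shape[1]), rhs)

rng = np.random.default_rng(3)
J = rng.normal(size=(200, 50))            # toy sensitivity (Jacobian) matrix
psf = point_spread_function(J, k=25)
# Diagonal element vs. largest off-diagonal spread: a crude localization measure
print(psf[25], np.abs(np.delete(psf, 25)).max())
```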
NASA Astrophysics Data System (ADS)
Salman Shahid, Syed; Gaul, Robert T.; Kerskens, Christian; Flamini, Vittoria; Lally, Caitríona
2017-12-01
Diffusion magnetic resonance imaging (dMRI) can provide insights into the microstructure of intact arterial tissue. The current study employed high magnetic field MRI to obtain ultra-high resolution dMRI at an isotropic voxel resolution of 117 µm3 in less than 2 h of scan time. A parameter selective single shell (128 directions) diffusion-encoding scheme based on the Stejskal-Tanner sequence with echo-planar imaging (EPI) readout was used. EPI segmentation was used to reduce the echo time (TE) and to minimise the susceptibility-induced artefacts. The study utilised the dMRI analysis within a diffusion tensor imaging (DTI) framework to investigate structural heterogeneity in intact arterial tissue and to quantify variations in tissue composition when the tissue is cut open and flattened. For intact arterial samples, the region-of-interest-based comparison showed significant differences in fractional anisotropy and mean diffusivity across the media layer (p < 0.05). For open cut flat samples, DTI based directionally invariant indices did not show significant differences across the media layer. For intact samples, fibre tractography based indices such as calculated helical angle and fibre dispersion showed near circumferential alignment and a high degree of fibre dispersion, respectively. This study demonstrates the feasibility of fast dMRI acquisition with ultra-high spatial and angular resolution at 7 T. Using the optimised sequence parameters, this study shows that DTI based markers are sensitive to local structural changes in intact arterial tissue samples and these markers may have clinical relevance in the diagnosis of atherosclerosis and aneurysm.
NASA Astrophysics Data System (ADS)
Sanchez, A. R.; Laguna, A.; Reimann, T.; Giráldez, J. V.; Peña, A.; Wallinga, J.; Vanwalleghem, T.
2017-12-01
Different geomorphological processes such as bioturbation and erosion-deposition intervene in soil formation and landscape evolution. The latter processes produce the alteration and degradation of the materials that compose the rocks. The degree to which the bedrock is weathered is estimated from the fraction of the bedrock that is mixed into the soil, either vertically or laterally. This study presents an analytical solution of the diffusion-advection equation to quantify bioturbation and erosion-deposition rates in profiles along a catena. The model is calibrated with age-depth data obtained from the profiles using luminescence dating based on single-grain infrared stimulated luminescence (IRSL). Luminescence techniques provide a direct measurement of the bioturbation and erosion-deposition processes. The single-grain IRSL technique was applied to feldspar grains from fifteen samples collected at different depths from four soil profiles along a catena in the Santa Clotilde Critical Zone Observatory, Cordoba province, SE Spain. A sensitivity analysis was performed to determine the importance of the parameters in the analytical model, and an uncertainty analysis was carried out to establish the best fit of the parameters to the measured age-depth data. The results indicate a diffusion constant at 20 cm depth of 47 mm2/year in the hill-base profile and 4.8 mm2/year in the hilltop profile. The model has high uncertainty in the estimation of erosion and deposition rates. This study reveals the potential of luminescence single-grain techniques to quantify pedoturbation processes.
NASA Astrophysics Data System (ADS)
Mohanty, B.; Jena, S.; Panda, R. K.
2016-12-01
The overexploitation of groundwater has resulted in the abandonment of several shallow tube wells in the study basin in Eastern India. For the sustainability of groundwater resources, basin-scale modelling of groundwater flow is indispensable for the effective planning and management of the water resources. The basic intent of this study is to develop a 3-D groundwater flow model of the study basin using the Visual MODFLOW Flex 2014.2 package and to calibrate and validate the model using 17 years of observed data. A sensitivity analysis was carried out to quantify the susceptibility of the aquifer system to river bank seepage, recharge from rainfall and agricultural practices, horizontal and vertical hydraulic conductivities, and specific yield. To quantify the impact of parameter uncertainties, the Sequential Uncertainty Fitting Algorithm (SUFI-2) and Markov chain Monte Carlo (McMC) techniques were implemented; results from the two techniques were compared and their advantages and disadvantages analysed. The Nash-Sutcliffe coefficient (NSE), coefficient of determination (R2), mean absolute error (MAE), mean percent deviation (Dv) and root mean squared error (RMSE) were adopted as model evaluation criteria during calibration and validation of the developed model. NSE, R2, MAE, Dv and RMSE values for the groundwater flow model during calibration and validation were within acceptable ranges, and the McMC technique provided more reasonable results than SUFI-2. The calibrated and validated model will be useful for identifying the aquifer properties, analysing the groundwater flow dynamics and forecasting future changes in groundwater levels.
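The evaluation statistics named above have standard definitions; a minimal sketch, with made-up observed and simulated head values, is shown below.

```python
import numpy as np

def evaluation_metrics(obs, sim):
    """Standard calibration/validation metrics (the paper does not restate the formulas)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    err = sim - obs
    nse = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)   # Nash-Sutcliffe
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2                            # coefficient of determination
    mae = np.mean(np.abs(err))                                       # mean absolute error
    dv = 100.0 * np.sum(obs - sim) / np.sum(obs)                     # mean percent deviation
    rmse = np.sqrt(np.mean(err ** 2))                                # root mean squared error
    return dict(NSE=nse, R2=r2, MAE=mae, Dv=dv, RMSE=rmse)

print(evaluation_metrics([10.2, 9.8, 11.1, 10.5], [10.0, 9.9, 10.8, 10.9]))
```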
Baeßler, Bettina; Schaarschmidt, Frank; Treutlein, Melanie; Stehning, Christian; Schnackenburg, Bernhard; Michels, Guido; Maintz, David; Bunck, Alexander C
2017-12-01
To re-evaluate a recently suggested approach of quantifying myocardial oedema and increased tissue inhomogeneity in myocarditis by T2-mapping. Cardiac magnetic resonance data of 99 patients with myocarditis were retrospectively analysed. Thirty healthy volunteers served as controls. T2-mapping data were acquired at 1.5 T using a gradient-spin-echo T2-mapping sequence. T2-maps were segmented according to the 16-segment AHA model. Segmental T2-values, segmental pixel-standard deviation (SD) and the derived parameters maxT2, maxSD and madSD were analysed and compared to the established Lake Louise criteria (LLC). A re-estimation of logistic regression models revealed that all models containing an SD-parameter were superior to any model containing global myocardial T2. Using a combined cut-off of 1.8 ms for madSD + 68 ms for maxT2 resulted in a diagnostic sensitivity of 75% and specificity of 80% and showed a similar diagnostic performance compared to LLC in receiver-operating-characteristic analyses. Combining madSD, maxT2 and late gadolinium enhancement (LGE) in a model resulted in a superior diagnostic performance compared to LLC (sensitivity 93%, specificity 83%). The results show that the novel T2-mapping-derived parameters exhibit an additional diagnostic value over LGE with the inherent potential to overcome the current limitations of T2-mapping. • A novel quantitative approach to myocardial oedema imaging in myocarditis was re-evaluated. • The T2-mapping-derived parameters maxT2 and madSD were compared to traditional Lake-Louise criteria. • Using maxT2 and madSD with dedicated cut-offs performs similarly to Lake-Louise criteria. • Adding maxT2 and madSD to LGE results in further increased diagnostic performance. • This novel approach has the potential to overcome the limitations of T2-mapping.
NASA Astrophysics Data System (ADS)
Ha, Taesung
A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed including event sequence modeling, system modeling, and quantification. To support the quantification of the accident sequence identified, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors is significantly different from that in power reactors, a time-oriented HRA model (reliability physics model) was applied for the human error probability (HEP) estimation of the core relocation. This model is based on two competing random variables: phenomenological time and performance time. The response surface and direct Monte Carlo simulation with Latin Hypercube sampling were applied for estimating the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The HEP for the core relocation was then estimated from these two competing quantities: phenomenological time and operators' performance time. The sensitivity of each probability distribution in the human reliability estimation was investigated. In order to quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected due to its capability of incorporating uncertainties in the model itself and the parameters of that model. The HEP from the current time-oriented model was compared with that from the ASEP approach. Both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability, and its potential usefulness for quantifying model uncertainty as a sensitivity analysis in the PRA model.
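A minimal Monte Carlo sketch of the reliability-physics idea, i.e. HEP = P(performance time exceeds phenomenological time). The lognormal distributions and their parameters below are hypothetical stand-ins for the fitted phenomenological-time distribution and the interview-based performance times.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Hypothetical distributions (minutes); the study fits the phenomenological time
# from simulation and obtains performance times from operator interviews.
phenomenological_time = rng.lognormal(mean=np.log(30.0), sigma=0.3, size=n)
performance_time = rng.lognormal(mean=np.log(12.0), sigma=0.5, size=n)

# Human error probability: operator does not complete the action in time.
hep = np.mean(performance_time > phenomenological_time)
print(f"estimated HEP = {hep:.4f}")
```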
Quantifying the impact of land use change on hydrological responses in the Upper Ganga Basin, India
NASA Astrophysics Data System (ADS)
Tsarouchi, Georgia-Marina; Mijic, Ana; Moulds, Simon; Chawla, Ila; Mujumdar, Pradeep; Buytaert, Wouter
2013-04-01
Quantifying how changes in land use affect the hydrological response at the river basin scale is a challenge in hydrological science, especially in the tropics where many regions are considered data sparse. Earlier work by the authors developed and used high-resolution, reconstructed land cover maps for northern India, based on satellite imagery and historic land-use maps for the years 1984, 1998 and 2010. Large-scale land use changes and their effects on landscape patterns can impact water supply in a watershed by altering hydrological processes such as evaporation, infiltration, surface runoff, groundwater discharge and stream flow. Three land use scenarios were tested to explore the sensitivity of the catchment's response to land use changes: (a) historic land use of 1984 with integrated evolution to 2010; (b) land use of 2010 remaining stable; and (c) hypothetical future projection of land use for 2030. The future scenario was produced with Markov chain analysis and generation of transition probability matrices, indicating transition potentials from one land use class to another. The study used socio-economic (population density), geographic (distances to roads and rivers, and location of protected areas) and biophysical drivers (suitability of soil for agricultural production, slope, aspect, and elevation). The distributed version of the land surface model JULES was integrated at a resolution of 0.01° for the years 1984 to 2030. Based on a sensitivity analysis, the most sensitive parameters were identified. Then, the model was calibrated against measured daily stream flow data. The impact of land use changes was investigated by calculating annual variations in hydrological components, differences in annual stream flow and surface runoff during the simulation period. The land use changes correspond to significant differences in the long-term hydrologic fluxes for each scenario. Once analysed from a future water resources perspective, the results will be beneficial in constructing decision support tools for regional land-use planning and management.
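For the Markov-chain step of the future scenario, a minimal sketch of projecting land-use shares with a transition probability matrix; the three classes, the matrix entries and the 2010 shares are invented for illustration, not taken from the study.

```python
import numpy as np

# Hypothetical 3-class transition probability matrix (rows: from, cols: to)
# for classes [forest, agriculture, urban]; the study derives its matrix
# from the 1984/1998/2010 reconstructed land cover maps.
P = np.array([[0.92, 0.06, 0.02],
              [0.03, 0.90, 0.07],
              [0.00, 0.01, 0.99]])

state_2010 = np.array([0.45, 0.40, 0.15])    # current land-use shares (illustrative)

# Two ~10-year Markov steps give a 2030-style projection
state = state_2010.copy()
for _ in range(2):
    state = state @ P
print("projected 2030 shares:", state.round(3))
```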
NASA Astrophysics Data System (ADS)
Djomo, S. Njakou; Knudsen, M. T.; Andersen, M. S.; Hermansen, J. E.
2017-11-01
There is an ongoing debate regarding the influence of the source location of pollution on the fate of pollutants and their subsequent impacts. Several methods have been developed to derive site-dependent characterization factors (CFs) for use in life-cycle assessment (LCA). Consistent, precise, and accurate estimates of CFs are crucial for establishing long-term, sustainable air pollution abatement policies. We reviewed currently available studies on the regionalization of non-toxic air pollutants in LCA. We also extracted and converted data into indices for analysis. We showed that CFs can distinguish between emissions occurring in different locations, and that the different methods used to derive CFs map locations consistently from very sensitive to less sensitive. Seasonal variations are less important for the computation of CFs for acidification and eutrophication, but they are relevant for the calculation of CFs for tropospheric ozone formation. Large intra-country differences in estimated CFs suggest that an abatement policy relying on quantitative estimates based upon a single method may have undesirable outcomes. Within-country differences in estimates of CFs for acidification and eutrophication are the result of the models used, category definitions, soil sensitivity factors, background emission concentration, critical loads database, and input data. Striking features in these studies were the lack of CFs for countries outside Europe, the USA, Japan, and Canada, and the lack of quantification of uncertainties. Parameter and input data uncertainties are well quantified, but the uncertainty associated with the choice of category indicator is rarely quantified and this can be significant. Although CFs are scientifically robust, further refinements are needed before they can be integrated in LCA. Future research should include uncertainty analyses, and should develop a consensus model for CFs. CFs for countries outside Europe, Japan, Canada and the USA are urgently needed.
Image-Based Modeling of Blood Flow and Oxygen Transfer in Feto-Placental Capillaries
Brownbill, Paul; Janáček, Jiří; Jirkovská, Marie; Kubínová, Lucie; Chernyavsky, Igor L.; Jensen, Oliver E.
2016-01-01
During pregnancy, oxygen diffuses from maternal to fetal blood through villous trees in the placenta. In this paper, we simulate blood flow and oxygen transfer in feto-placental capillaries by converting three-dimensional representations of villous and capillary surfaces, reconstructed from confocal laser scanning microscopy, to finite-element meshes, and calculating values of vascular flow resistance and total oxygen transfer. The relationship between the total oxygen transfer rate and the pressure drop through the capillary is shown to be captured across a wide range of pressure drops by physical scaling laws and an upper bound on the oxygen transfer rate. A regression equation is introduced that can be used to estimate the oxygen transfer in a capillary using the vascular resistance. Two techniques for quantifying the effects of statistical variability, experimental uncertainty and pathological placental structure on the calculated properties are then introduced. First, scaling arguments are used to quantify the sensitivity of the model to uncertainties in the geometry and the parameters. Second, the effects of localized dilations in fetal capillaries are investigated using an idealized axisymmetric model, to quantify the possible effect of pathological placental structure on oxygen transfer. The model predicts how, for a fixed pressure drop through a capillary, oxygen transfer is maximized by an optimal width of the dilation. The results could explain the prevalence of fetal hypoxia in cases of delayed villous maturation, a pathology characterized by a lack of the vasculo-syncytial membranes often seen in conjunction with localized capillary dilations. PMID:27788214
Time-dependent diffusion MRI in cancer: tissue modeling and applications
NASA Astrophysics Data System (ADS)
Reynaud, Olivier
2017-11-01
In diffusion weighted imaging (DWI), the apparent diffusion coefficient has been recognized as a useful and sensitive surrogate for cell density, paving the way for non-invasive tumor staging, and characterization of treatment efficacy in cancer. However, microstructural parameters, such as cell size, density and/or compartmental diffusivities affect diffusion in various fashions, making of conventional DWI a sensitive but non-specific probe into changes happening at cellular level. Alternatively, tissue complexity can be probed and quantified using the time dependence of diffusion metrics, sometimes also referred to as temporal diffusion spectroscopy when only using oscillating diffusion gradients. Time-dependent diffusion (TDD) is emerging as a strong candidate for specific and non-invasive tumor characterization. Despite the lack of a general analytical solution for all diffusion times / frequencies, TDD can be probed in various regimes where systems simplify in order to extract relevant information about tissue microstructure. The fundamentals of TDD are first reviewed (a) in the short time regime, disentangling structural and diffusive tissue properties, and (b) near the tortuosity limit, assuming weakly heterogeneous media near infinitely long diffusion times. Focusing on cell bodies (as opposed to neuronal tracts), a simple but realistic model for intracellular diffusion can offer precious insight on diffusion inside biological systems, at all times. Based on this approach, the main three geometrical models implemented so far (IMPULSED, POMACE, VERDICT) are reviewed. Their suitability to quantify cell size, intra- and extracellular spaces (ICS and ECS) and diffusivities are assessed. The proper modeling of tissue membrane permeability – hardly a newcomer in the field, but lacking applications - and its impact on microstructural estimates are also considered. After discussing general issues with tissue modeling and microstructural parameter estimation (i.e. fitting), potential solutions are detailed. The in vivo applications of this new, non-invasive, specific approach in cancer are reviewed, ranging from the characterization of gliomas in rodent brains and observation of time-dependence in breast tissue lesions and prostate cancer, to the recent preclinical evaluation of new treatments efficacy. It is expected that clinical applications of TDD will strongly benefit the community in terms of non-invasive cancer screening.
Uncertainty in temperature response of current consumption-based emissions estimates
NASA Astrophysics Data System (ADS)
Karstensen, J.; Peters, G. P.; Andrew, R. M.
2015-05-01
Several studies have connected emissions of greenhouse gases to economic and trade data to quantify the causal chain from consumption to emissions and climate change. These studies usually combine data and models originating from different sources, making it difficult to estimate uncertainties along the entire causal chain. We estimate uncertainties in economic data, multi-pollutant emission statistics, and metric parameters, and use Monte Carlo analysis to quantify contributions to uncertainty and to determine how uncertainty propagates to estimates of global temperature change from regional and sectoral territorial- and consumption-based emissions for the year 2007. We find that the uncertainties are sensitive to the emission allocations, mix of pollutants included, the metric and its time horizon, and the level of aggregation of the results. Uncertainties in the final results are largely dominated by the climate sensitivity and the parameters associated with the warming effects of CO2. Based on our assumptions, which exclude correlations in the economic data, the uncertainty in the economic data appears to have a relatively small impact on uncertainty at the national level in comparison to emissions and metric uncertainty. Much higher uncertainties are found at the sectoral level. Our results suggest that consumption-based national emissions are not significantly more uncertain than the corresponding production-based emissions since the largest uncertainties are due to metric and emissions which affect both perspectives equally. The two perspectives exhibit different sectoral uncertainties, due to changes of pollutant compositions. We find global sectoral consumption uncertainties in the range of ±10 to ±27 % using the Global Temperature Potential with a 50-year time horizon, with metric uncertainties dominating. National-level uncertainties are similar in both perspectives due to the dominance of CO2 over other pollutants. The consumption emissions of the top 10 emitting regions have a broad uncertainty range of ±9 to ±25 %, with metric and emission uncertainties contributing similarly. The absolute global temperature potential (AGTP) with a 50-year time horizon has much higher uncertainties, with considerable uncertainty overlap for regions and sectors, indicating that the ranking of countries is uncertain.
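A minimal sketch of the Monte Carlo propagation idea for one region or sector, with hypothetical emission and metric (AGTP-like) uncertainties; the actual study samples multi-pollutant emissions, economic data and metric parameters jointly, and none of the numbers below are from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50_000

# Hypothetical means and relative 1-sigma uncertainties (not the study's values)
emission = rng.normal(1.0, 0.10, n) * 5.0e9     # kg CO2 from one region/sector
agtp50 = rng.normal(1.0, 0.25, n) * 5.0e-16     # K per kg CO2 at a 50-year horizon

delta_t = emission * agtp50                     # temperature contribution (K)
lo, med, hi = np.percentile(delta_t, [5, 50, 95])
print(f"median {med:.2e} K, 90% range [{lo:.2e}, {hi:.2e}] K")
```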
Efficient numerical simulation of heat storage in subsurface georeservoirs
NASA Astrophysics Data System (ADS)
Boockmeyer, A.; Bauer, S.
2015-12-01
The transition of the German energy market towards renewable energy sources, e.g. wind or solar power, requires energy storage technologies to compensate for their fluctuating production. Large amounts of energy could be stored in georeservoirs such as porous formations in the subsurface. One possibility here is to store heat at high temperatures of up to 90°C through borehole heat exchangers (BHEs), since more than 80 % of the total energy consumption in German households is used for heating and hot water supply. Within the ANGUS+ project, potential environmental impacts of such heat storages are assessed and quantified. Numerical simulations are performed to predict storage capacities, storage cycle times, and induced effects. For simulation of these highly dynamic storage sites, detailed high-resolution models are required. We set up a model that accounts for all components of the BHE and verified it using experimental data. The model ensures accurate simulation results but also leads to large numerical meshes and thus high simulation times. In this work, we therefore present a numerical model for each type of BHE (single U, double U and coaxial) that reduces the number of elements and the simulation time significantly for use in larger scale simulations. The numerical model includes all BHE components and represents the temporal and spatial temperature distribution with an accuracy of less than 2% deviation from the fully discretized model. By changing the BHE geometry and using equivalent parameters, the simulation time is reduced by a factor of ~10 for single U-tube BHEs, ~20 for double U-tube BHEs and ~150 for coaxial BHEs. Results of a sensitivity study that quantifies the effects of different design and storage formation parameters on temperature distribution and storage efficiency for heat storage using multiple BHEs are then shown. It is found that storage efficiency strongly depends on the number of BHEs composing the storage site, their distance and the cycle time. The temperature distribution is most sensitive to the thermal conductivity of both the borehole grouting and the storage formation, while storage efficiency is mainly controlled by the thermal conductivity of the storage formation.
Pal, Rahul; Yang, Jinping; Ortiz, Daniel; Qiu, Suimin; Resto, Vicente; McCammon, Susan; Vargas, Gracie
2015-01-01
The epithelial-connective tissue interface (ECTI) plays an integral role in epithelial neoplasia, including oral squamous cell carcinoma (OSCC). This interface undergoes significant alterations due to hyperproliferating epithelium that supports the transformation of normal epithelium to precancers and cancer. We present a method based on nonlinear optical microscopy to directly assess the ECTI and quantify dysplastic alterations using a hamster model for oral carcinogenesis. Neoplastic and non-neoplastic normal mucosa were imaged in-vivo by both multiphoton autofluorescence microscopy (MPAM) and second harmonic generation microscopy (SHGM) to obtain cross-sectional reconstructions of the oral epithelium and lamina propria. Imaged sites were biopsied and processed for histopathological grading and measurement of ECTI parameters. An ECTI shape parameter (ΔLinearity), defined by the deviation from the linear geometry seen in normal mucosa, was measured using MPAM-SHGM and histology. The ECTI was readily visible in MPAM-SHGM and quantitative shape analysis showed ECTI deformation in dysplasia but not in normal mucosa. ΔLinearity was significantly (p < 0.01) higher in dysplasia (0.41±0.24) than in normal mucosa (0.11±0.04) as measured by MPAM-SHGM, and the results were confirmed in histology, which showed similar trends in ΔLinearity. The increase in ΔLinearity was also statistically significant across different grades of dysplasia. In-vivo ΔLinearity measurement alone from microscopy discriminated dysplasia from normal tissue with 87.9% sensitivity and 97.6% specificity, while calculations from histology provided 96.4% sensitivity and 85.7% specificity. Among other quantifiable architectural changes, a progressive, statistically significant increase in epithelial thickness was seen with increasing grade of dysplasia. MPAM-SHGM provides new noninvasive ways for direct characterization of the ECTI which may be used in preclinical studies to investigate the role of this interface in early transformation. Further development of the method may also lead to new diagnostic approaches to differentiate non-neoplastic tissue from precancers and neoplasia, possibly with other cellular and layer based indicators of abnormality.
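The exact normalization of ΔLinearity is not given in the abstract; the sketch below uses a plausible proxy (the RMS residual of a straight-line fit to the interface profile, normalized by its lateral extent), applied to invented interface profiles.

```python
import numpy as np

def delta_linearity(x, z):
    """Deviation of an interface profile from a straight line.

    x: lateral position of interface points; z: interface depth.
    Returns the RMS residual of a linear fit divided by the lateral extent,
    a stand-in for the paper's ΔLinearity shape parameter.
    """
    coeffs = np.polyfit(x, z, deg=1)
    resid = z - np.polyval(coeffs, x)
    return np.sqrt(np.mean(resid ** 2)) / (x.max() - x.min())

x = np.linspace(0.0, 200.0, 100)    # micrometres (illustrative)
flat_interface = 50.0 + 2.0 * np.random.default_rng(6).normal(size=x.size)
wavy_interface = 50.0 + 25.0 * np.sin(2 * np.pi * x / 60.0)
print(delta_linearity(x, flat_interface), delta_linearity(x, wavy_interface))
```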
Bernstein, Diana N.; Neelin, J. David
2016-04-28
A branch-run perturbed-physics ensemble in the Community Earth System Model estimates impacts of parameters in the deep convection scheme on current hydroclimate and on end-of-century precipitation change projections under global warming. Regional precipitation change patterns prove highly sensitive to these parameters, especially in the tropics with local changes exceeding 3 mm/d, comparable to the magnitude of the predicted change and to differences in global warming predictions among the Coupled Model Intercomparison Project phase 5 models. This sensitivity is distributed nonlinearly across the feasible parameter range, notably in the low-entrainment range of the parameter for turbulent entrainment in the deep convection scheme. This suggests that a useful target for parameter sensitivity studies is to identify such disproportionately sensitive dangerous ranges. Here, the low-entrainment range is used to illustrate the reduction in global warming regional precipitation sensitivity that could occur if this dangerous range can be excluded based on evidence from current climate.
Quantifying Transmission Heterogeneity Using Both Pathogen Phylogenies and Incidence Time Series
Li, Lucy M.; Grassly, Nicholas C.; Fraser, Christophe
2017-01-01
Abstract Heterogeneity in individual-level transmissibility can be quantified by the dispersion parameter k of the offspring distribution. Quantifying heterogeneity is important as it affects other parameter estimates, it modulates the degree of unpredictability of an epidemic, and it needs to be accounted for in models of infection control. Aggregated data such as incidence time series are often not sufficiently informative to estimate k. Incorporating phylogenetic analysis can help to estimate k concurrently with other epidemiological parameters. We have developed an inference framework that uses particle Markov Chain Monte Carlo to estimate k and other epidemiological parameters using both incidence time series and the pathogen phylogeny. Using the framework to fit a modified compartmental transmission model that includes the parameter k to simulated data, we found that more accurate and less biased estimates of the reproductive number were obtained by combining epidemiological and phylogenetic analyses. However, k was most accurately estimated using pathogen phylogeny alone. Accurately estimating k was necessary for unbiased estimates of the reproductive number, but it did not affect the accuracy of reporting probability and epidemic start date estimates. We further demonstrated that inference was possible in the presence of phylogenetic uncertainty by sampling from the posterior distribution of phylogenies. Finally, we used the inference framework to estimate transmission parameters from epidemiological and genetic data collected during a poliovirus outbreak. Despite the large degree of phylogenetic uncertainty, we demonstrated that incorporating phylogenetic data in parameter inference improved the accuracy and precision of estimates. PMID:28981709
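The offspring distribution with dispersion parameter k is conventionally a negative binomial with mean R0; a minimal sketch of how k controls transmission heterogeneity (the values of R0 and k below are illustrative, not estimates from the paper):

```python
import numpy as np

def simulate_offspring(r0, k, n_cases, rng):
    """Secondary cases per infected individual from a negative binomial
    offspring distribution with mean R0 and dispersion k (small k means
    more heterogeneous, superspreading-prone transmission)."""
    # numpy parameterization: shape n = k, success probability p = k / (k + R0)
    return rng.negative_binomial(n=k, p=k / (k + r0), size=n_cases)

rng = np.random.default_rng(7)
homogeneous = simulate_offspring(r0=2.0, k=10.0, n_cases=10_000, rng=rng)
heterogeneous = simulate_offspring(r0=2.0, k=0.2, n_cases=10_000, rng=rng)
# Same mean, very different tails: fraction of cases causing >= 10 secondary cases
print(np.mean(homogeneous >= 10), np.mean(heterogeneous >= 10))
```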
Hartin, Corinne A.; Bond-Lamberty, Benjamin; Patel, Pralit; ...
2016-08-01
Continued oceanic uptake of anthropogenic CO2 is projected to significantly alter the chemistry of the upper oceans over the next three centuries, with potentially serious consequences for marine ecosystems. Relatively few models have the capability to make projections of ocean acidification, limiting our ability to assess the impacts and probabilities of ocean changes. In this study we examine the ability of Hector v1.1, a reduced-form global model, to project changes in the upper ocean carbonate system over the next three centuries, and quantify the model's sensitivity to parametric inputs. Hector is run under prescribed emission pathways from the Representative Concentration Pathways (RCPs) and compared to both observations and a suite of Coupled Model Intercomparison Project (CMIP5) model outputs. Current observations confirm that ocean acidification is already taking place, and CMIP5 models project significant changes occurring to 2300. Hector is consistent with the observational record within both the high- (> 55°) and low-latitude oceans (< 55°). The model projects low-latitude surface ocean pH to decrease from preindustrial levels of 8.17 to 7.77 in 2100, and to 7.50 in 2300; aragonite saturation levels (ΩAr) decrease from 4.1 units to 2.2 in 2100 and 1.4 in 2300 under RCP 8.5. These magnitudes and trends of ocean acidification within Hector are largely consistent with the CMIP5 model outputs, although we identify some small biases within Hector's carbonate system. Of the parameters tested, changes in [H+] are most sensitive to parameters that directly affect atmospheric CO2 concentrations – Q10 (terrestrial respiration temperature response) as well as changes in ocean circulation, while changes in ΩAr saturation levels are sensitive to changes in ocean salinity and Q10. We conclude that Hector is a robust tool well suited for rapid ocean acidification projections and sensitivity analyses, and it is capable of emulating both current observations and large-scale climate models under multiple emission pathways.
Application of modern radiative transfer tools to model laboratory quartz emissivity
NASA Astrophysics Data System (ADS)
Pitman, Karly M.; Wolff, Michael J.; Clayton, Geoffrey C.
2005-08-01
Planetary remote sensing of regolith surfaces requires use of theoretical models for interpretation of constituent grain physical properties. In this work, we review and critically evaluate past efforts to strengthen numerical radiative transfer (RT) models with comparison to a trusted set of nadir incidence laboratory quartz emissivity spectra. By first establishing a baseline statistical metric to rate successful model-laboratory emissivity spectral fits, we assess the efficacy of hybrid computational solutions (Mie theory + numerically exact RT algorithm) to calculate theoretical emissivity values for micron-sized α-quartz particles in the thermal infrared (2000-200 cm-1) wave number range. We show that Mie theory, a widely used but poor approximation to irregular grain shape, fails to produce the single scattering albedo and asymmetry parameter needed to arrive at the desired laboratory emissivity values. Through simple numerical experiments, we show that corrections to single scattering albedo and asymmetry parameter values generated via Mie theory become more necessary with increasing grain size. We directly compare the performance of diffraction subtraction and static structure factor corrections to the single scattering albedo, asymmetry parameter, and emissivity for dense packing of grains. Through these sensitivity studies, we provide evidence that, assuming RT methods work well given sufficiently well-quantified inputs, assumptions about the scatterer itself constitute the most crucial aspect of modeling emissivity values.
Reliability analysis of a robotic system using hybridized technique
NASA Astrophysics Data System (ADS)
Kumar, Naveen; Komal; Lather, J. S.
2017-09-01
In this manuscript, the reliability of a robotic system has been analyzed using the available data (containing vagueness, uncertainty, etc.). Quantification of the involved uncertainties is done through data fuzzification using triangular fuzzy numbers with known spreads as suggested by system experts. With fuzzified data, if the existing fuzzy lambda-tau (FLT) technique is employed, the computed reliability parameters have a wide range of predictions. Therefore, the decision-maker cannot suggest any specific and influential managerial strategy to prevent unexpected failures and consequently to improve complex system performance. To overcome this problem, the present study utilizes a hybridized technique. With this technique, fuzzy set theory is utilized to quantify uncertainties, a fault tree is utilized for the system modeling, the lambda-tau method is utilized to formulate mathematical expressions for the failure/repair rates of the system, and a genetic algorithm is utilized to solve the established nonlinear programming problem. Different reliability parameters of a robotic system are computed and the results are compared with the existing technique. The components of the robotic system follow an exponential distribution, i.e., constant failure rates. Sensitivity analysis is also performed and the impact on the system mean time between failures (MTBF) is addressed by varying other reliability parameters. Based on the analysis, some influential suggestions are given to improve the system performance.
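As a small illustration of the data-fuzzification step, a triangular fuzzy failure rate can be propagated to a fuzzy MTBF through alpha-cut interval arithmetic. The spread and the single-component relation MTBF = 1/λ below are illustrative assumptions, not the paper's full lambda-tau fault-tree expressions.

```python
def alpha_cut(tfn, alpha):
    """Interval of a triangular fuzzy number (a, m, b) at membership level alpha."""
    a, m, b = tfn
    return (a + alpha * (m - a), b - alpha * (b - m))

# Hypothetical fuzzified failure rate (per hour) with +/-15% spread around the mode,
# in the spirit of expert-suggested spreads; not values from the paper.
lam = (0.85e-3, 1.0e-3, 1.15e-3)

for alpha in (0.0, 0.5, 1.0):
    lo, hi = alpha_cut(lam, alpha)
    # For a single exponentially distributed component, MTBF = 1/lambda;
    # interval arithmetic flips the endpoints of the MTBF interval.
    print(f"alpha={alpha:3.1f}: MTBF in [{1/hi:8.1f}, {1/lo:8.1f}] hours")
```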
NASA Astrophysics Data System (ADS)
Syed, N. H.; Rehman, A. A.; Hussain, D.; Ishaq, S.; Khan, A. A.
2017-11-01
Morphometric analysis is vital for any watershed investigation and is indispensable for flood risk assessment in sub-watershed basins. The present study carries out a critical evaluation and assessment of sub-watershed morphometric parameters for flood risk assessment of the Central Karakorum National Park (CKNP), using a geographical information system and remote sensing (GIS & RS) approach to quantify the parameters and map sub-watershed units. An ASTER DEM was used as the geospatial data source for watershed delineation and stream network extraction. The morphometric analysis was carried out using the spatial analyst tools of ArcGIS 10.2. The parameters included bifurcation ratio (Rb), drainage texture (Rt), circularity ratio (Rc), elongation ratio (Re), drainage density (Dd), stream length (Lu), stream order (Su), slope and basin length (Lb), each calculated separately. The analysis revealed that the stream order varies from 1 to 6 and that the total number of stream segments of all orders is 52. A multi-criteria analysis process was used to calculate the risk factor. As a result, a sub-watershed prioritization map was developed using the weighted, standardized risk factors. These results help in understanding the sensitivity of different sub-watersheds of the study area to flash floods and lead to better management of the mountainous regions with respect to flash flood risk.
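The listed morphometric parameters have standard formulas; a minimal sketch with invented basin values (not the CKNP sub-watersheds) is shown below.

```python
import numpy as np

def morphometric_parameters(area_km2, perimeter_km, basin_length_km,
                            total_stream_length_km, n_streams_by_order):
    """Standard formulas for the morphometric parameters listed in the study."""
    dd = total_stream_length_km / area_km2                       # drainage density
    rc = 4.0 * np.pi * area_km2 / perimeter_km ** 2              # circularity ratio
    re = (2.0 / basin_length_km) * np.sqrt(area_km2 / np.pi)     # elongation ratio
    rt = sum(n_streams_by_order) / perimeter_km                  # drainage texture
    # Mean bifurcation ratio: average of N_u / N_(u+1) over successive stream orders
    rb = np.mean([n_streams_by_order[i] / n_streams_by_order[i + 1]
                  for i in range(len(n_streams_by_order) - 1)])
    return dict(Dd=dd, Rc=rc, Re=re, Rt=rt, Rb=rb)

print(morphometric_parameters(area_km2=120.0, perimeter_km=58.0, basin_length_km=18.0,
                              total_stream_length_km=210.0,
                              n_streams_by_order=[28, 14, 6, 3, 1]))
```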
NASA Astrophysics Data System (ADS)
Loiola, Rodrigo Azevedo; Dos Anjos, Fabyana Maria; Shimada, Ana Lúcia; Cruz, Wesley Soares; Drewes, Carine Cristiane; Rodrigues, Stephen Fernandes; Cardozo, Karina Helena Morais; Carvalho, Valdemir Melechco; Pinto, Ernani; Farsky, Sandra Helena
2016-06-01
It has recently been proposed that exposure to polychlorinated biphenyls (PCBs) is a risk factor for type 2 diabetes mellitus (DM2). We investigated this hypothesis using long-term in vivo PCB126 exposure in rats, addressing metabolic, cellular and proteomic parameters. Male Wistar rats were exposed to PCB126 (0.1, 1 or 10 μg/kg of body weight/day; for 15 days) or vehicle by intranasal instillation. Systemic alterations were quantified by body weight, insulin and glucose tolerance, and blood biochemical profile. Pancreatic toxicity was measured by inflammatory parameters, cell viability and cycle, free radical generation, and the proteomic profile of islets of Langerhans. In vivo PCB126 exposure enhanced body weight gain, impaired insulin sensitivity, reduced adipose tissue deposits, and elevated serum triglycerides, cholesterol, and insulin levels. Inflammatory parameters in the pancreas and cell morphology, viability and cycle were not altered in islets of Langerhans. Nevertheless, in vivo PCB126 exposure increased free radical generation and modified the expression of proteins related to oxidative stress in islets of Langerhans, which are indicative of early β-cell failure. The data obtained here show that long-term in vivo PCB126 exposure through the intranasal route induced alterations in islets of Langerhans related to early end points of DM2.
Tuning to optimize SVM approach for assisting ovarian cancer diagnosis with photoacoustic imaging.
Wang, Rui; Li, Rui; Lei, Yanyan; Zhu, Quing
2015-01-01
Support vector machine (SVM) is one of the most effective classification methods for cancer detection. The efficiency and quality of a SVM classifier depends strongly on several important features and a set of proper parameters. Here, a series of classification analyses, with one set of photoacoustic data from ovarian tissues ex vivo and a widely used breast cancer dataset, the Wisconsin Diagnostic Breast Cancer (WDBC) dataset, revealed how the accuracy of a SVM classification varies with the number of features used and the parameters selected. A pattern recognition system is proposed by means of SVM-Recursive Feature Elimination (RFE) with the Radial Basis Function (RBF) kernel. To improve the effectiveness and robustness of the system, an optimized tuning ensemble algorithm called SVM-RFE(C), with a correlation filter, was implemented to quantify feature and parameter information based on cross validation. The proposed algorithm is first shown to outperform SVM-RFE on WDBC. Then the best accuracy of 94.643% and sensitivity of 94.595% were achieved when using SVM-RFE(C) to test 57 new PAT data from 19 patients. The experimental results show that the classifier constructed with the SVM-RFE(C) algorithm is able to learn additional information from new data and has significant potential in ovarian cancer diagnosis.
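A minimal SVM-RFE sketch on the WDBC dataset with scikit-learn, as a baseline for what the paper improves on: plain RFE ranks features with linear-kernel weights, whereas the paper's SVM-RFE(C) adds a correlation filter and tunes an RBF-kernel classifier by cross validation. The hyperparameters below are defaults, not tuned values.

```python
from sklearn.datasets import load_breast_cancer            # the WDBC dataset
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(
    StandardScaler(),
    # RFE needs feature weights, so a linear SVC ranks and eliminates features
    RFE(SVC(kernel="linear", C=1.0), n_features_to_select=10),
    # Final classifier on the selected features (RBF kernel, as in the paper)
    SVC(kernel="rbf", C=1.0, gamma="scale"),
)
scores = cross_val_score(model, X, y, cv=5)
print(f"mean CV accuracy with 10 selected features: {scores.mean():.3f}")
```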
Risk Assessment of Bone Fracture During Space Exploration Missions to the Moon and Mars
NASA Technical Reports Server (NTRS)
Lewandowski, Beth E.; Myers, Jerry G.; Nelson, Emily S.; Licatta, Angelo; Griffin, Devon
2007-01-01
The possibility of a traumatic bone fracture in space is a concern due to the observed decrease in astronaut bone mineral density (BMD) during spaceflight and because of the physical demands of the mission. The Bone Fracture Risk Module (BFxRM) was developed to quantify the probability of fracture at the femoral neck and lumbar spine during space exploration missions. The BFxRM is scenario-based, providing predictions for specific activities or events during a particular space mission. The key elements of the BFxRM are the mission parameters, the biomechanical loading models, the bone loss and fracture models and the incidence rate of the activity or event. Uncertainties in the model parameters arise due to variations within the population and unknowns associated with the effects of the space environment. Consequently, parameter distributions were used in Monte Carlo simulations to obtain an estimate of fracture probability under real mission scenarios. The model predicts an increase in the probability of fracture as the mission length increases and fracture is more likely in the higher gravitational field of Mars than on the moon. The resulting probability predictions and sensitivity analyses of the BFxRM can be used as an engineering tool for mission operation and resource planning in order to mitigate the risk of bone fracture in space.
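The BFxRM itself combines mission-specific loading, bone-loss and fracture models; the sketch below only illustrates the Monte Carlo structure (sample parameter distributions, count load-exceeds-strength outcomes). All distributions and numbers are hypothetical, not values from the module.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200_000

# Hypothetical distributions standing in for the biomechanical loading model,
# the spaceflight bone-loss model and the fracture-strength model.
applied_load = rng.lognormal(mean=np.log(3000.0), sigma=0.25, size=n)    # N, per event
bmd_loss = rng.normal(0.015, 0.005, size=n) * 12.0                        # fraction, 12-month mission
fracture_load = rng.normal(7000.0, 1200.0, size=n) * (1.0 - bmd_loss)     # N, femoral neck

# Fracture probability per loading event: applied load exceeds bone strength
p_fracture = np.mean(applied_load > fracture_load)
print(f"estimated fracture probability per event: {p_fracture:.4f}")
```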
Finke, Kathrin; Schwarzkopf, Wolfgang; Müller, Ulrich; Frodl, Thomas; Müller, Hermann J; Schneider, Werner X; Engel, Rolf R; Riedel, Michael; Möller, Hans-Jürgen; Hennig-Fast, Kristina
2011-11-01
Attention deficit hyperactivity disorder (ADHD) persists frequently into adulthood. The decomposition of endophenotypes by means of experimental neuro-cognitive assessment has the potential to improve diagnostic assessment, evaluation of treatment response, and disentanglement of genetic and environmental influences. We assessed four parameters of attentional capacity and selectivity derived from simple psychophysical tasks (verbal report of briefly presented letter displays) and based on a "theory of visual attention." These parameters are mathematically independent, quantitative measures, and previous studies have shown that they are highly sensitive for subtle attention deficits. Potential reductions of attentional capacity, that is, of perceptual processing speed and working memory storage capacity, were assessed with a whole report paradigm. Furthermore, possible pathologies of attentional selectivity, that is, selection of task-relevant information and bias in the spatial distribution of attention, were measured with a partial report paradigm. A group of 30 unmedicated adult ADHD patients and a group of 30 demographically matched healthy controls were tested. ADHD patients showed significant reductions of working memory storage capacity of a moderate to large effect size. Perceptual processing speed, task-based, and spatial selection were unaffected. The results imply a working memory deficit as an important source of behavioral impairments. The theory of visual attention parameter working memory storage capacity might constitute a quantifiable and testable endophenotype of ADHD.
Risk Assessment of Bone Fracture During Space Exploration Missions to the Moon and Mars
NASA Technical Reports Server (NTRS)
Lewandowski, Beth E.; Myers, Jerry G.; Nelson, Emily S.; Griffin, Devon
2008-01-01
The possibility of a traumatic bone fracture in space is a concern due to the observed decrease in astronaut bone mineral density (BMD) during spaceflight and because of the physical demands of the mission. The Bone Fracture Risk Module (BFxRM) was developed to quantify the probability of fracture at the femoral neck and lumbar spine during space exploration missions. The BFxRM is scenario-based, providing predictions for specific activities or events during a particular space mission. The key elements of the BFxRM are the mission parameters, the biomechanical loading models, the bone loss and fracture models and the incidence rate of the activity or event. Uncertainties in the model parameters arise due to variations within the population and unknowns associated with the effects of the space environment. Consequently, parameter distributions were used in Monte Carlo simulations to obtain an estimate of fracture probability under real mission scenarios. The model predicts an increase in the probability of fracture as the mission length increases and fracture is more likely in the higher gravitational field of Mars than on the moon. The resulting probability predictions and sensitivity analyses of the BFxRM can be used as an engineering tool for mission operation and resource planning in order to mitigate the risk of bone fracture in space.
Local sensitivity analyses and identifiable parameter subsets were used to describe numerical constraints of a hypoxia model for bottom waters of the northern Gulf of Mexico. The sensitivity of state variables differed considerably with parameter changes, although most variables ...
Characterization of uncertainty and sensitivity of model parameters is an essential and often overlooked facet of hydrological modeling. This paper introduces an algorithm called MOESHA that combines input parameter sensitivity analyses with a genetic algorithm calibration routin...
Novel fiber optic-based needle redox imager for cancer diagnosis
NASA Astrophysics Data System (ADS)
Kanniyappan, Udayakumar; Xu, He N.; Tang, Qinggong; Gaitan, Brandon; Liu, Yi; Li, Lin Z.; Chen, Yu
2018-02-01
Despite various technological advancements in cancer diagnosis, mortality rates have not decreased significantly. We aim to develop a novel optical imaging tool to assist cancer diagnosis effectively. Fluorescence spectroscopy/imaging is a fast and minimally invasive technique which has been successfully applied to diagnosing cancerous cells/tissues. Recently, the ratiometric imaging of intrinsic fluorescence of reduced nicotinamide adenine dinucleotide (NADH) and flavin adenine dinucleotide (FAD), as pioneered by Britton Chance and co-workers in the 1950s-70s, has gained much attention for quantifying the physiological parameters of living cells/tissues. The redox ratio, i.e., FAD/(FAD+NADH) or FAD/NADH, has been shown to be sensitive to various metabolic changes in in vivo and in vitro cells/tissues. Optical redox imaging has also been investigated for providing potential imaging biomarkers for cancer transformation, aggressiveness, and treatment response. Towards this goal, we have designed and developed a novel fiber optic-based needle redox imager (NRI) that can fit into an 11G clinical coaxial biopsy needle for real-time imaging during clinical cancer surgery. In the present study, the device is calibrated with tissue-mimicking phantoms of FAD and NADH, and various technical parameters such as the sensitivity, dynamic range, linearity, and spatial resolution of the system are characterized. We also conducted preliminary imaging of tissues ex vivo for validation. We plan to test the NRI on clinical breast cancer patients. Once validated, this device may provide an effective tool for clinical cancer diagnosis.
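The redox ratio defined above is a simple pixelwise quantity; a minimal sketch, assuming co-registered and background-subtracted FAD and NADH intensity channels:

```python
import numpy as np

def redox_ratio(fad_image, nadh_image, eps=1e-9):
    """Pixelwise optical redox ratio FAD / (FAD + NADH) from two intensity channels.

    eps avoids division by zero in empty pixels.
    """
    fad = np.asarray(fad_image, dtype=float)
    nadh = np.asarray(nadh_image, dtype=float)
    return fad / (fad + nadh + eps)

# Illustrative random intensity maps standing in for calibrated phantom images
rng = np.random.default_rng(9)
fad = rng.uniform(0.0, 1.0, size=(64, 64))
nadh = rng.uniform(0.0, 1.0, size=(64, 64))
print(redox_ratio(fad, nadh).mean())
```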
Genetic Complexity and Quantitative Trait Loci Mapping of Yeast Morphological Traits
Nogami, Satoru; Ohya, Yoshikazu; Yvert, Gaël
2007-01-01
Functional genomics relies on two essential parameters: the sensitivity of phenotypic measures and the power to detect genomic perturbations that cause phenotypic variations. In model organisms, two types of perturbations are widely used. Artificial mutations can be introduced in virtually any gene and allow the systematic analysis of gene function via mutant fitness. Alternatively, natural genetic variations can be associated with particular phenotypes via genetic mapping. However, the access to genome manipulation and breeding provided by model organisms is sometimes counterbalanced by phenotyping limitations. Here we investigated the natural genetic diversity of Saccharomyces cerevisiae cellular morphology using a very sensitive high-throughput imaging platform. We quantified 501 morphological parameters in over 50,000 yeast cells from a cross between two wild-type divergent backgrounds. Extensive morphological differences were found between these backgrounds. The genetic architecture of the traits was complex, with evidence of both epistasis and transgressive segregation. We mapped quantitative trait loci (QTL) for 67 traits and discovered 364 correlations between trait segregation and the inheritance of gene expression levels. We validated one QTL by the replacement of a single base in the genome. This study illustrates the natural diversity and complexity of cellular traits among natural yeast strains and provides an ideal framework for a genetical genomics dissection of multiple traits. Our results did not overlap with results previously obtained from systematic deletion strains, showing that both approaches are necessary for the functional exploration of genomes. PMID:17319748
NASA Astrophysics Data System (ADS)
Vargo, L. J.; Galewsky, J.; Rupper, S.; Ward, D. J.
2018-04-01
The subtropical Andes (18.5-27 °S) have been glaciated in the past, but are presently glacier-free. We use idealized model experiments to quantify glacier sensitivity to changes in climate in order to investigate the climatic drivers of past glaciations. We quantify the equilibrium line altitude (ELA) sensitivity (the change in ELA per change in climate) to temperature, precipitation, and shortwave radiation for three distinct climatic regions in the subtropical Andes. We find that in the western cordillera, where conditions are hyper-arid with the highest solar radiation on Earth, ELA sensitivity is as high as 34 m per % increase in precipitation, and 70 m per % decrease in shortwave radiation. This is compared with the eastern cordillera, where precipitation is the highest of the three regions, and ELA sensitivity is only 10 m per % increase in precipitation, and 25 m per % decrease in shortwave radiation. The high ELA sensitivity to shortwave radiation highlights the influence of radiation on the mass balance of high-elevation, low-latitude glaciers. We also consider these quantified ELA sensitivities in the context of previously dated glacial deposits from these regions. Our results suggest that glaciation of the humid eastern cordillera was driven primarily by lower temperatures, while glaciations of the arid Altiplano and western cordillera were also influenced by increases in precipitation and decreases in shortwave radiation. Using paleoclimate records from the timing of glaciation, we find that glaciation of the hyper-arid western cordillera can be explained by precipitation increases of 90-160% (1.9-2.6 times modern precipitation), in conjunction with associated decreases in shortwave radiation of 7-12% and in temperature of 3.5 °C.
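The ELA sensitivity defined in the abstract (metres of ELA change per 1% change in a climate variable) can be illustrated with a finite-difference sketch. The callable `toy_ela` model, its coefficients, and the climate fields below are illustrative assumptions, not the study's mass-balance model.

```python
def ela_sensitivity(ela_model, base_climate, var, pct_step=1.0):
    """Finite-difference ELA sensitivity: metres of ELA change per 1 %
    change in a climate variable. `ela_model` maps a climate dict to an
    ELA in metres; both it and the climate fields are stand-ins."""
    perturbed = dict(base_climate)
    perturbed[var] = base_climate[var] * (1 + pct_step / 100.0)
    return (ela_model(perturbed) - ela_model(base_climate)) / pct_step

# Toy ELA model: more precipitation lowers the ELA, more shortwave raises it.
def toy_ela(climate):
    return 5500.0 - 8.0 * climate["precip"] + 3.0 * climate["swrad"]

base = {"precip": 40.0, "swrad": 300.0}  # arbitrary units
print(ela_sensitivity(toy_ela, base, "precip"))  # m per % precipitation change
print(ela_sensitivity(toy_ela, base, "swrad"))   # m per % shortwave change
```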
Knotts, Thomas A.
2017-01-01
Molecular simulation has the ability to predict various physical properties that are difficult to obtain experimentally. For example, we implement molecular simulation to predict the critical constants (i.e., critical temperature, critical density, critical pressure, and critical compressibility factor) for large n-alkanes (as large as C48) that thermally decompose before their critical points can be measured experimentally. Historically, molecular simulation has been viewed as a tool that is limited to providing qualitative insight. One key reason for this perceived weakness in molecular simulation is the difficulty of quantifying the uncertainty in the results. This is because molecular simulations have many sources of uncertainty that propagate and are difficult to quantify. We investigate one of the most important sources of uncertainty, namely, the intermolecular force field parameters. Specifically, we quantify the uncertainty in the Lennard-Jones (LJ) 12-6 parameters for the CH4, CH3, and CH2 united-atom interaction sites. We then demonstrate how the uncertainties in the parameters lead to uncertainties in the saturated liquid density and critical constant values obtained from Gibbs Ensemble Monte Carlo simulation. Our results suggest that the uncertainties attributed to the LJ 12-6 parameters are small enough that quantitatively useful estimates of the saturated liquid density and the critical constants can be obtained from molecular simulation. PMID:28527455
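A common way to propagate force-field parameter uncertainty is to sample the parameters from their uncertainty distributions and push each sample through the property prediction. The sketch below illustrates only that propagation step; the parameter means, standard deviations, and the cheap surrogate relation standing in for a Gibbs Ensemble Monte Carlo run are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Illustrative uncertainties for LJ 12-6 parameters of a CH2 united-atom site
# (means and standard deviations are placeholders, not fitted values).
eps_over_k = rng.normal(46.0, 1.0, size=n)   # K
sigma = rng.normal(3.95, 0.02, size=n)       # Angstrom

# A real study would run a Gibbs Ensemble Monte Carlo simulation for each
# parameter sample; a linearized surrogate stands in here so the propagation
# idea is runnable. The functional form is purely illustrative.
tc_surrogate = 1.3 * eps_over_k * (sigma / 3.95) ** 0.5 * 10.0

print(f"Surrogate Tc = {tc_surrogate.mean():.1f} +/- {tc_surrogate.std():.1f}")
```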
Evaluation and uncertainty analysis of regional-scale CLM4.5 net carbon flux estimates
NASA Astrophysics Data System (ADS)
Post, Hanna; Hendricks Franssen, Harrie-Jan; Han, Xujun; Baatz, Roland; Montzka, Carsten; Schmidt, Marius; Vereecken, Harry
2018-01-01
Modeling net ecosystem exchange (NEE) at the regional scale with land surface models (LSMs) is relevant for the estimation of regional carbon balances, but studies on it are very limited. Furthermore, it is essential to better understand and quantify the uncertainty of LSMs in order to improve them. An important key variable in this respect is the prognostic leaf area index (LAI), which is very sensitive to forcing data and strongly affects the modeled NEE. We applied the Community Land Model (CLM4.5-BGC) to the Rur catchment in western Germany and compared estimated and default ecological key parameters for modeling carbon fluxes and LAI. The parameter estimates were previously estimated with the Markov chain Monte Carlo (MCMC) approach DREAM(zs) for four of the most widespread plant functional types in the catchment. It was found that the catchment-scale annual NEE was strongly positive with default parameter values but negative (and closer to observations) with the estimated values. Thus, the estimation of CLM parameters with local NEE observations can be highly relevant when determining regional carbon balances. To obtain a more comprehensive picture of model uncertainty, CLM ensembles were set up with perturbed meteorological input and uncertain initial states in addition to uncertain parameters. C3 grass and C3 crops were particularly sensitive to the perturbed meteorological input, which resulted in a strong increase in the standard deviation of the annual NEE sum (σ
Ely, D. Matthew
2006-01-01
Recharge is a vital component of the ground-water budget, and methods for estimating it range from extremely complex to relatively simple. The most commonly used techniques, however, are limited by the scale of application. One method that can be used to estimate ground-water recharge is process-based modeling that computes distributed water budgets on a watershed scale. These models should be evaluated to determine which model parameters are the dominant controls on simulated ground-water recharge. Seven existing watershed models from different humid regions of the United States were chosen to analyze the sensitivity of simulated recharge to model parameters. Parameter sensitivities were determined using a nonlinear regression computer program to generate a suite of diagnostic statistics. The statistics identify the model parameters that have the greatest effect on simulated ground-water recharge and allow the hydrologic system responses to those parameters to be compared and contrasted. Simulated recharge in the Lost River and Big Creek watersheds in Washington State was sensitive to small changes in air temperature. The Hamden watershed model in west-central Minnesota was developed to investigate the relations that wetlands and other landscape features have with runoff processes. Excess soil moisture in the Hamden watershed simulation was preferentially routed to wetlands, instead of to the ground-water system, resulting in little sensitivity of simulated recharge to any parameter. Simulated recharge in the North Fork Pheasant Branch watershed, Wisconsin, demonstrated the greatest sensitivity to parameters related to evapotranspiration. Three watersheds were simulated as part of the Model Parameter Estimation Experiment (MOPEX). Parameter sensitivities for the MOPEX watersheds (Amite River, Louisiana and Mississippi; English River, Iowa; and South Branch Potomac River, West Virginia) were similar, with recharge most sensitive to small changes in air temperature and a user-defined flow-routing parameter. Although the primary objective of this study was to identify, by geographic region, the importance of the parameter values to the simulation of ground-water recharge, the secondary objectives proved valuable for future modeling efforts. A rigorous sensitivity analysis can (1) make the calibration process more efficient, (2) guide additional data collection, (3) identify model limitations, and (4) help explain simulated results.
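The diagnostic statistics mentioned above rank parameters by their influence on simulated recharge. One widely used statistic of this type is the composite scaled sensitivity, computed from finite-difference derivatives of the simulated values; the sketch below assumes a generic callable watershed model, and the toy model, parameter names, and weights are illustrative only.

```python
import numpy as np

def composite_scaled_sensitivity(model, params, obs_weights, rel_step=0.01):
    """Finite-difference composite scaled sensitivities for each parameter.
    `model(params)` returns simulated recharge at the observation points;
    the model, parameters, and weights here are illustrative placeholders."""
    base = np.asarray(model(params))
    css = {}
    for name, value in params.items():
        perturbed = dict(params)
        perturbed[name] = value * (1 + rel_step)
        dsim = (np.asarray(model(perturbed)) - base) / (value * rel_step)
        scaled = dsim * value * np.sqrt(obs_weights)
        css[name] = np.sqrt(np.mean(scaled ** 2))
    return css

# Toy watershed model: recharge responds to air temperature and a routing factor.
def toy_model(p):
    return [10.0 - 1.5 * p["air_temp"] + 0.2 * p["routing"],
            12.0 - 1.2 * p["air_temp"] + 0.1 * p["routing"]]

print(composite_scaled_sensitivity(toy_model,
                                   {"air_temp": 8.0, "routing": 2.0},
                                   obs_weights=np.ones(2)))
```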
Saliba, Christopher M; Brandon, Scott C E; Deluzio, Kevin J
2017-05-24
Musculoskeletal models are increasingly used to estimate medial and lateral knee contact forces, which are difficult to measure in vivo. The sensitivity of contact force predictions to modeling parameters is important to the interpretation and implication of results generated by the model. The purpose of this study was to quantify the sensitivity of knee contact force predictions to simultaneous errors in frontal plane knee alignment and contact locations under different dynamic conditions. We scaled a generic musculoskeletal model to the stature and radiographic knee alignment of N=23 subjects, then perturbed frontal plane alignment and mediolateral contact locations within experimentally possible ranges of -10° to 10° and -10 to 10 mm, respectively. The sensitivity of first peak, second peak, and mean medial and lateral knee contact forces to knee adduction angle and contact locations was modeled using linear regression. Medial loads increased, and lateral loads decreased, by between 3% and 6% bodyweight for each degree of varus perturbation. Shifting the medial contact point medially increased medial loads and decreased lateral loads by between 1% and 4% bodyweight per millimeter. This study demonstrates that realistic measurement errors of 5 mm (contact distance) or 5° (frontal plane alignment) could result in a combined 50% BW error in subject-specific contact force estimates. We also show that model sensitivity varies between subjects as a result of differences in gait dynamics. These results demonstrate that predicted knee joint contact forces should be considered as a range of possible values determined by model uncertainty. Copyright © 2017 Elsevier Ltd. All rights reserved.
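The study models sensitivity as regression slopes of contact force against alignment and contact-location perturbations. A minimal sketch of that step follows, with entirely synthetic perturbations and responses standing in for the musculoskeletal model outputs.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic perturbation design: frontal-plane alignment (deg) and medial
# contact-point shift (mm), with a toy medial contact force response (% BW).
align = rng.uniform(-10, 10, size=200)
shift = rng.uniform(-10, 10, size=200)
medial_force = 200 + 4.5 * align + 2.0 * shift + rng.normal(0, 5, size=200)

# Least-squares regression: the slopes are the sensitivities per degree and per mm.
X = np.column_stack([np.ones_like(align), align, shift])
coef, *_ = np.linalg.lstsq(X, medial_force, rcond=None)
print(f"Sensitivity: {coef[1]:.2f} %BW per degree, {coef[2]:.2f} %BW per mm")
```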
Socio-climatic Exposure of an Afghan Poppy Farmer
NASA Astrophysics Data System (ADS)
Mankin, J. S.; Diffenbaugh, N. S.
2011-12-01
Many posit that climate impacts from anthropogenic greenhouse gas emissions will have consequences for the natural and agricultural systems on which humans rely for food, energy, and livelihoods, and therefore for stability and human security. However, many of the potential mechanisms of action in climate impacts and human systems response, as well as the differential vulnerabilities of such systems, remain underexplored and unquantified. Here I present two initial steps necessary to characterize and quantify the consequences of climate change for farmer livelihood in Afghanistan, given both climate impacts and farmer vulnerabilities. The first is a conceptual model mapping the potential relationships between Afghanistan's climate, the winter agricultural season, and the country's political economy of violence and instability. The second is a utility-based decision model for assessing farmer response sensitivity to various climate impacts based on crop sensitivities. A farmer's winter planting decision can be modeled roughly as a tradeoff between cultivating the two crops that dominate the winter growing season: opium poppy (a climate-tolerant cash crop) and wheat (a climatically vulnerable crop grown for household consumption). Early sensitivity analysis results suggest that wheat yield dominates the variability in farmer decision making; however, such initial results may depend on the relative parameter ranges of wheat and poppy yields. Importantly, the variance in Afghanistan's winter harvest yields of poppy and wheat is tightly linked to household livelihood and thus is indirectly connected to the wider instability and insecurity within the country. This initial analysis motivates my focused research on the sensitivity of these crops to climate variability in order to project farmer well-being and decision sensitivity in a warmer world.
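A utility-based planting decision of the kind described can be sketched as an expected-utility comparison over land allocations under uncertain yields. The distributions, prices, CARA utility form, and parameter values below are illustrative assumptions, not the model in the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000

# Illustrative yield distributions (arbitrary units): wheat yield is made more
# climate-sensitive (higher variance) than poppy in this toy setup.
wheat_yield = rng.normal(1.0, 0.4, size=n).clip(min=0)
poppy_yield = rng.normal(1.0, 0.15, size=n).clip(min=0)
wheat_price, poppy_price = 1.0, 2.5  # relative returns (placeholders)

def expected_utility(frac_poppy, risk_aversion=1.0):
    """Expected CARA utility of income for a given poppy land share."""
    income = (frac_poppy * poppy_yield * poppy_price
              + (1 - frac_poppy) * wheat_yield * wheat_price)
    return np.mean(1 - np.exp(-risk_aversion * income))

best = max(np.linspace(0, 1, 21), key=expected_utility)
print(f"Utility-maximizing poppy share: {best:.2f}")
```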
Profile-likelihood Confidence Intervals in Item Response Theory Models.
Chalmers, R Philip; Pek, Jolynn; Liu, Yang
2017-01-01
Confidence intervals (CIs) are fundamental inferential devices that quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters that are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
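The mechanics of a profile-likelihood interval (find the parameter values where the log-likelihood drops by half the chi-square critical value from its maximum) can be shown on a deliberately simple model. The sketch below uses a normal mean with known variance rather than an IRT model; the data and the one-parameter setup are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2, norm

# Toy data: profile-likelihood CI for the mean of normal data with known sigma
# (a simple stand-in for an IRT item parameter).
rng = np.random.default_rng(4)
data = rng.normal(loc=1.2, scale=1.0, size=50)
sigma = 1.0

def loglik(mu):
    return np.sum(norm.logpdf(data, loc=mu, scale=sigma))

mu_hat = data.mean()                                  # MLE
cutoff = loglik(mu_hat) - chi2.ppf(0.95, df=1) / 2.0  # 95% profile cutoff

# The interval endpoints are the roots of loglik(mu) - cutoff on either side
# of the MLE.
lower = brentq(lambda m: loglik(m) - cutoff, mu_hat - 5.0, mu_hat)
upper = brentq(lambda m: loglik(m) - cutoff, mu_hat, mu_hat + 5.0)
print(f"95% PL CI: ({lower:.3f}, {upper:.3f})")
```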
Sweeten, Sara E.; Ford, W. Mark
2016-01-01
Large-scale coal mining practices, particularly surface coal extraction and associated valley fills, as well as residential wastewater discharge, are of ecological concern for aquatic systems in central Appalachia. Identifying and quantifying alterations to ecosystems along a gradient of spatial scales is a necessary first step to aid in mitigation of negative consequences for aquatic biota. In central Appalachian headwater streams, apart from fish, salamanders are the most abundant vertebrate predators, providing an important intermediate trophic link between aquatic and terrestrial food webs. Stream salamander species are considered sensitive to aquatic stressors and environmental alterations, as past research has shown linkages among microhabitat parameters, large-scale land use such as urbanization and logging, and salamander abundances. However, there is little information examining these relationships between environmental conditions and salamander occupancy in the coalfields of central Appalachia. In the summer of 2013, 70 sites (sampled two to three times each) in the southwest Virginia coalfields were visited to collect salamanders and quantify stream and riparian microhabitat parameters. Using an information-theoretic framework, the effects of microhabitat and large-scale land use on stream salamander occupancy were compared. The findings indicate that Desmognathus spp. occupancy rates are more strongly correlated with microhabitat parameters such as canopy cover than with large-scale land uses. However, Eurycea spp. occupancy rates had a strong association with large-scale land uses, particularly recent mining and forest cover within the watershed. These findings suggest that protection of riparian habitats is an important consideration for maintaining aquatic systems in central Appalachia. If this is not possible, riparian restoration should use quick-growing tree species that are native to Appalachian riparian areas. Such trees would rapidly establish canopy cover, stabilize the soil, and impede invasive plant species, which would, in turn, provide high-quality refuges for stream salamanders.
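The information-theoretic comparison described above typically ranks candidate covariate sets by AIC. The sketch below illustrates only that ranking step with a simple logistic model on synthetic data; a real occupancy analysis would use repeat-visit occupancy models that separate detection probability from occupancy, and all covariates and data here are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n = 70  # sites, mirroring the study size (data here are synthetic)

canopy = rng.uniform(0, 1, n)  # microhabitat covariate
mining = rng.uniform(0, 1, n)  # watershed land-use covariate
occupied = rng.binomial(1, 1 / (1 + np.exp(-(2 * canopy - 1))))  # toy truth

def neg_loglik(beta, x):
    p = 1 / (1 + np.exp(-(beta[0] + beta[1] * x)))
    p = np.clip(p, 1e-9, 1 - 1e-9)  # numerical guard
    return -np.sum(occupied * np.log(p) + (1 - occupied) * np.log(1 - p))

def aic(x):
    fit = minimize(neg_loglik, x0=[0.0, 0.0], args=(x,))
    return 2 * fit.fun + 2 * 2  # AIC = 2 * neg. log-likelihood + 2k, k = 2

# Lower AIC indicates more support for that covariate set.
print({"canopy": round(aic(canopy), 2), "mining": round(aic(mining), 2)})
```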