Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations
NASA Technical Reports Server (NTRS)
Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.
2017-01-01
A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted by integrating the sensitivity components from each discipline of the coupled system. Numerical results verify the accuracy of the FUN3D/DYMORE system through simulations of a benchmark rotorcraft test model and comparison of the solutions with established analyses and experimental data. The complex-variable implementation of the sensitivity analysis of DYMORE and of the coupled FUN3D/DYMORE system is verified by comparison with real-valued analyses and sensitivities. The correctness of the adjoint formulations for the FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples the adjoint-based flow and grid sensitivities of FUN3D and the FUN3D/DYMORE interfaces with the complex-variable sensitivities of the DYMORE structural responses.
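The complex-variable technique mentioned above is commonly realized as the complex-step derivative approximation: perturbing an input along the imaginary axis gives a derivative estimate free of subtractive cancellation, so the step can be made extremely small. The sketch below is a minimal generic illustration of the idea on a scalar toy function, not code from FUN3D or DYMORE:

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-30):
    """Estimate df/dx via the complex-step method.

    No difference of nearby function values is taken, so there is no
    subtractive cancellation and h can be tiny, giving derivatives
    accurate to machine precision (provided f is implemented in
    complex arithmetic throughout).
    """
    return np.imag(f(x + 1j * h)) / h

# Toy smooth function standing in for, e.g., a load coefficient.
f = lambda x: np.exp(x) * np.sin(x)
x0 = 0.7
exact = np.exp(x0) * (np.sin(x0) + np.cos(x0))
approx = complex_step_derivative(f, x0)
print(f"complex-step: {approx:.15f}, exact: {exact:.15f}")
```

This is why a code like DYMORE can be differentiated simply by running it on complex-perturbed inputs, at roughly the cost of one extra analysis per design variable.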
Variance-Based Sensitivity Analysis to Support Simulation-Based Design Under Uncertainty
Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.
2016-09-12
Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.
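For context on the variance-based ranking step, a first-order Sobol index S_i = V_i / V(Y) can be estimated with the Saltelli "pick-and-freeze" Monte Carlo scheme. The following sketch applies it to the standard Ishigami test function under uniform inputs; it is a generic illustration, not the paper's DSA implementation:

```python
import numpy as np

def first_order_sobol(model, d, n=50_000, rng=np.random.default_rng(0)):
    """Monte Carlo estimate of first-order Sobol indices S_i = V_i / V(Y)
    using two independent sample matrices A and B on the unit hypercube."""
    A = rng.random((n, d))
    B = rng.random((n, d))
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]           # "freeze" all factors except x_i
        S[i] = np.mean(yB * (model(ABi) - yA)) / var_y
    return S

def ishigami(X):
    x = np.pi * (2 * X - 1)           # map [0,1] -> [-pi, pi]
    return (np.sin(x[:, 0]) + 7 * np.sin(x[:, 1])**2
            + 0.1 * x[:, 2]**4 * np.sin(x[:, 0]))

# Analytical values are roughly [0.31, 0.44, 0.00] for this function.
print(first_order_sobol(ishigami, d=3))
```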
NASA Astrophysics Data System (ADS)
Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.
2016-12-01
Sensitivity analysis has been an important tool in groundwater modeling to identify the influential parameters. Among various sensitivity analysis methods, variance-based global sensitivity analysis has gained popularity for its model independence and its capability of providing accurate sensitivity measurements. However, the conventional variance-based method only considers the uncertainty contributions of single model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework that allows flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure (scenario, model, and parametric), where each layer can contain multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using a Bayesian network, with the different uncertainty components represented as uncertain nodes. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility in how uncertainty components are grouped. The variance-based sensitivity analysis is thus extended to investigate the importance of a wider range of uncertainty sources: scenario, model, and other combinations of uncertainty components that represent key processes of the modeled system (e.g., the groundwater recharge process or the reactive transport process). For testing and demonstration, the developed methodology was applied to a real-world groundwater reactive transport model with various uncertainty sources. The results demonstrate that the new sensitivity analysis method is able to estimate accurate importance measures for any uncertainty source formed by different combinations of uncertainty components. The new methodology can provide useful information for environmental management and help decision-makers formulate policies and strategies.
Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation
NASA Astrophysics Data System (ADS)
Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten
2015-04-01
Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models of increasing complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: convergence of the values of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the values of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical sample sizes reported in the literature can be well below those that actually ensure convergence of ranking and screening.
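One way to implement a bootstrap convergence check of this kind is to resample the computed sensitivity indices and summarize confidence-interval widths and rank stability. The sketch below is a generic illustration on synthetic index replicates (the array boot is invented), not the criteria defined in the study:

```python
import numpy as np

def bootstrap_ranking_stats(boot_indices):
    """Summarize bootstrap replicates of sensitivity indices.

    boot_indices: (B, p) array of indices from B bootstrap resamples
    for p parameters. Returns 95% confidence-interval widths and, for
    each parameter, how often it keeps its most common rank."""
    B, p = boot_indices.shape
    lo, hi = np.percentile(boot_indices, [2.5, 97.5], axis=0)
    # Rank 0 = most sensitive parameter within a given resample.
    ranks = np.argsort(np.argsort(-boot_indices, axis=1), axis=1)
    modal = np.array([np.bincount(ranks[:, j], minlength=p).argmax()
                      for j in range(p)])
    stability = np.array([(ranks[:, j] == modal[j]).mean() for j in range(p)])
    return hi - lo, stability

# Toy replicates: parameter 0 clearly dominant, parameters 1 and 2 overlap.
rng = np.random.default_rng(1)
boot = np.column_stack([rng.normal(0.60, 0.02, 2000),
                        rng.normal(0.20, 0.05, 2000),
                        rng.normal(0.18, 0.05, 2000)])
ci, stab = bootstrap_ranking_stats(boot)
print("95% CI widths:", ci.round(3))
print("rank stability:", stab.round(2))  # low stability -> larger sample needed
```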
NASA Astrophysics Data System (ADS)
García-Moreno, Angel-Iván; González-Barbosa, José-Joel; Ramírez-Pedraza, Alfonso; Hurtado-Ramos, Juan B.; Ornelas-Rodriguez, Francisco-Javier
2016-04-01
Computer-based reconstruction models can be used to approximate urban environments. These models are usually based on several mathematical approximations and the use of different sensors, which implies dependency on many variables. The sensitivity analysis presented in this paper is used to weigh the relative importance of each uncertainty contributor in the calibration of a panoramic camera-LiDAR system. Both sensors are used for three-dimensional urban reconstruction. Simulated and experimental tests were conducted. For the simulated tests we analyze and compare the calibration parameters using the Monte Carlo and Latin hypercube sampling techniques. The sensitivity of each variable involved in the calibration was computed by the Sobol method, which is based on the decomposition of variance, and by the Fourier amplitude sensitivity test (FAST) method, which is based on Fourier analysis. Sensitivity analysis is an essential tool in simulation modeling and for performing error propagation assessments.
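As a reference point for the sampling comparison above, Latin hypercube sampling stratifies each input into n equal-probability bins with exactly one point per bin per variable, typically covering the factor space more evenly than plain Monte Carlo at equal cost. A minimal sketch (generic, not the authors' calibration code):

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """n-by-d Latin hypercube sample on [0, 1)^d: each axis is split
    into n equal bins and every bin receives exactly one point."""
    cells = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (cells + rng.random((n, d))) / n

rng = np.random.default_rng(2)
n, d = 20, 2
lhs = latin_hypercube(n, d, rng)
mc = rng.random((n, d))

# Stratification check: LHS covers every decile of each axis; plain
# Monte Carlo frequently leaves some deciles empty at this sample size.
for name, s in [("LHS", lhs), ("MC ", mc)]:
    covered = [np.unique((s[:, j] * 10).astype(int)).size for j in range(d)]
    print(name, "deciles covered per axis:", covered)
```

At small sample sizes this stratification guarantee is what makes LHS attractive when each model run is expensive.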
Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria
NASA Astrophysics Data System (ADS)
Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong
2017-08-01
In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated, using the perturbation method, the response surface method, the Edgeworth series and a sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability analysis in finite element based engineering practice.
Bruce G. Marcot; Peter H. Singleton; Nathan H. Schumaker
2015-01-01
Sensitivity analyses (determining how prediction variables affect response variables) of individual-based models (IBMs) are few but important to the interpretation of model output. We present a sensitivity analysis of a spatially explicit IBM (HexSim) of a threatened species, the Northern Spotted Owl (NSO; Strix occidentalis caurina) in Washington...
SensA: web-based sensitivity analysis of SBML models.
Floettmann, Max; Uhlendorf, Jannis; Scharp, Till; Klipp, Edda; Spiesser, Thomas W
2014-10-01
SensA is a web-based application for sensitivity analysis of mathematical models. The sensitivity analysis is based on metabolic control analysis, computing the local, global and time-dependent properties of model components. Interactive visualization facilitates interpretation of usually complex results. SensA can contribute to the analysis, adjustment and understanding of mathematical models for dynamic systems. SensA is available at http://gofid.biologie.hu-berlin.de/ and can be used with any modern browser. The source code can be found at https://bitbucket.org/floettma/sensa/ (MIT license)
We present a multi-faceted sensitivity analysis of a spatially explicit, individual-based model (IBM) (HexSim) of a threatened species, the Northern Spotted Owl (Strix occidentalis caurina) on a national forest in Washington, USA. Few sensitivity analyses have been conducted on ...
A two-step sensitivity analysis for hydrological signatures in Jinhua River Basin, East China
NASA Astrophysics Data System (ADS)
Pan, S.; Fu, G.; Chiang, Y. M.; Xu, Y. P.
2016-12-01
Owing to model complexity and the large number of parameters, calibration and sensitivity analysis are difficult processes for distributed hydrological models. In this study, a two-step sensitivity analysis approach is proposed for analyzing the hydrological signatures in Jinhua River Basin, East China, using the Distributed Hydrology-Soil-Vegetation Model (DHSVM). A rough sensitivity analysis is first conducted via Analysis of Variance to obtain the preliminary influential parameters, reducing the number of parameters from eighty-three to sixteen. Afterwards, the sixteen parameters are further analyzed with a variance-based global sensitivity analysis, i.e., Sobol's method, to achieve robust sensitivity rankings and parameter contributions. Parallel computing is applied to reduce the computational burden of the variance-based sensitivity analysis. The results reveal that only a small number of model parameters are significantly sensitive, including the rain LAI multiplier, lateral conductivity, porosity, field capacity and wilting point of clay loam, understory monthly LAI, understory minimum resistance, and root zone depths of croplands. Finally, several hydrological signatures are used to investigate the performance of DHSVM. Results show that high values of the efficiency criteria did not necessarily indicate excellent performance on the hydrological signatures. For most samples from the Sobol analysis, water yield was simulated very well, but the minimum and maximum annual daily runoffs were underestimated and most seven-day minimum runoffs were overestimated; nevertheless, a number of samples still performed well on these three signatures. Analysis of peak flow shows that small and medium floods are simulated well, while large floods are slightly underestimated. This work supports further multi-objective calibration of the DHSVM model and indicates where to improve the reliability and credibility of the model simulation.
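The first screening step can be approximated cheaply with a variance-ratio (ANOVA-style) measure, the correlation ratio eta^2 = Var(E[Y|X_i]) / Var(Y), estimated by binning each input. The sketch below uses an invented toy response and illustrates only the screening idea, not the authors' exact procedure:

```python
import numpy as np

def correlation_ratio(x, y, bins=20):
    """First-order screen: eta^2 = Var(E[Y|X]) / Var(Y), estimated by
    binning X (quantile bins) and comparing the variance of the bin
    means of Y to the total variance of Y."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    occupied = [b for b in range(bins) if np.any(idx == b)]
    means = np.array([y[idx == b].mean() for b in occupied])
    counts = np.array([np.sum(idx == b) for b in occupied])
    between = np.sum(counts * (means - y.mean())**2) / len(y)
    return between / y.var()

rng = np.random.default_rng(4)
X = rng.random((5000, 3))
Y = 5 * X[:, 0] + np.sin(2 * np.pi * X[:, 1]) + 0.1 * rng.normal(size=5000)
# Expected ordering: x1 dominant, x2 moderate (nonlinear), x3 noise-level.
print([round(correlation_ratio(X[:, j], Y), 3) for j in range(3)])
```

Because it needs only one plain Monte Carlo sample, such a screen can rank dozens of parameters before committing to an expensive Sobol analysis.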
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin
2016-04-01
Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the "global sensitivity" of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to "variogram analysis", that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
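The variogram analogy can be made concrete: for a response y(x), the directional variogram gamma(h) = 0.5 E[(y(x + h u) - y(x))^2] summarizes sensitivity along direction u across perturbation scales h; its small-h behaviour relates to derivative-based (Morris-like) measures and its large-h sill to variance-based (Sobol-like) measures. A generic numpy sketch with an invented toy response, not the VARS implementation:

```python
import numpy as np

def directional_variogram(f, direction, h_values, dim, n_base=500,
                          rng=np.random.default_rng(5)):
    """gamma(h) = 0.5 * E[(f(x + h*u) - f(x))^2] along unit direction u,
    averaged over random base points x in the unit hypercube."""
    u = np.asarray(direction, dtype=float)
    u /= np.linalg.norm(u)
    X = rng.random((n_base, dim))
    return np.array([0.5 * np.mean((f(X + h * u) - f(X))**2)
                     for h in h_values])

f = lambda X: np.sin(2 * np.pi * X[:, 0]) + 0.5 * X[:, 1]
hs = [0.01, 0.05, 0.1, 0.2, 0.5]
print("gamma along x1:", directional_variogram(f, [1, 0], hs, dim=2).round(4))
print("gamma along x2:", directional_variogram(f, [0, 1], hs, dim=2).round(4))
# x1 shows strongly scale-dependent (periodic) sensitivity, while x2's
# variogram grows smoothly as (0.5 * h)^2 / 2 for a linear effect.
```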
USDA-ARS?s Scientific Manuscript database
This paper provides an overview of the Model Optimization, Uncertainty, and SEnsitivity Analysis (MOUSE) software application, an open-source, Java-based toolbox of visual and numerical analysis components for the evaluation of environmental models. MOUSE is based on the OPTAS model calibration syst...
Simulation-based sensitivity analysis for non-ignorably missing data.
Yin, Peng; Shi, Jian Q
2017-01-01
Sensitivity analysis is popular in dealing with missing data problems, particularly for non-ignorable missingness, where the full-likelihood method cannot be adopted. It analyses how sensitively the conclusions (output) may depend on assumptions or parameters (input) about the missing data mechanism; we call models subject to this uncertainty sensitivity models. To make conventional sensitivity analysis more useful in practice, we need to define simple and interpretable statistical quantities to assess the sensitivity models and enable evidence-based analysis. In this paper we propose a novel approach that investigates the plausibility of each missing data mechanism assumption by comparing datasets simulated from various MNAR models with the observed data non-parametrically, using K-nearest-neighbour distances. Some asymptotic theory is also provided. A key step of this method is to apply a plausibility evaluation to each sensitivity parameter, selecting plausible values and rejecting unlikely ones, instead of considering all proposed values of the sensitivity parameters as in conventional sensitivity analysis. The method is generic and has been applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, analysis of incomplete longitudinal data, and mean estimation with non-ignorable missing data.
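The core comparison step can be sketched as follows: for each candidate value of the sensitivity parameter, simulate data under the corresponding model and measure how far the observed points sit from their nearest simulated neighbours. The function name and the Gaussian toy data below are invented for illustration; the paper additionally supplies asymptotic theory for such distances:

```python
import numpy as np

def knn_discrepancy(simulated, observed, k=5):
    """Average distance from each observed point to its k nearest
    simulated points: small values indicate that data simulated under
    the candidate MNAR model resemble the observed data."""
    d = np.linalg.norm(observed[:, None, :] - simulated[None, :, :], axis=2)
    return np.sort(d, axis=1)[:, :k].mean()

rng = np.random.default_rng(6)
observed = rng.normal(0.0, 1.0, size=(200, 2))
for delta in [0.0, 0.5, 1.0, 2.0]:   # candidate sensitivity-parameter values
    sim = rng.normal(delta, 1.0, size=(1000, 2))
    print(f"delta={delta}: discrepancy={knn_discrepancy(sim, observed):.3f}")
# The smallest discrepancy flags the most plausible sensitivity parameter.
```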
NASA Astrophysics Data System (ADS)
Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.
2017-05-01
Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.
A comparison of analysis methods to estimate contingency strength.
Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T
2018-05-09
To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified.
Diagnostic value of highly-sensitive chimerism analysis after allogeneic stem cell transplantation.
Sellmann, Lea; Rabe, Kim; Bünting, Ivonne; Dammann, Elke; Göhring, Gudrun; Ganser, Arnold; Stadler, Michael; Weissinger, Eva M; Hambach, Lothar
2018-05-02
Conventional analysis of host chimerism (HC) frequently fails to detect relapse before its clinical manifestation in patients with hematological malignancies after allogeneic stem cell transplantation (allo-SCT). Quantitative PCR (qPCR)-based highly-sensitive chimerism analysis extends the detection limit of conventional (short tandem repeat-based) chimerism analysis from 1% to 0.01% host cells in whole blood. To date, the diagnostic value of highly-sensitive chimerism analysis is hardly defined. Here, we applied qPCR-based chimerism analysis to 901 blood samples of 71 out-patients with hematological malignancies after allo-SCT. Receiver operating characteristic (ROC) curves were calculated for absolute HC values and for the increments of HC before relapse. Using the best cut-offs, relapse was detected with sensitivities of 74 or 85% and specificities of 69 or 75%, respectively. Positive predictive values (PPVs) were only 12 or 18%, but the respective negative predictive values were 98 or 99%. Relapse was detected a median of 38 or 45 days prior to clinical diagnosis, respectively. Considering also durations of steadily increasing HC of more than 28 days improved the PPVs to more than 28 or 59%, respectively. Overall, highly-sensitive chimerism analysis excludes relapses with high certainty and predicts relapses with high sensitivity and specificity more than a month prior to clinical diagnosis.
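A worked example of the predictive-value arithmetic reported above; the patient counts are hypothetical, chosen to mimic the abstract's sensitivity and specificity at a low relapse prevalence:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sens, spec, ppv, npv

# Hypothetical cohort: 1000 samples, 50 from relapsing patients (5%
# prevalence), detector near the abstract's 85% sens / 75% spec point.
tp, fn = 42, 8        # 42/50 = 84% sensitivity
fp, tn = 238, 712     # 712/950 = 75% specificity
s = diagnostic_metrics(tp, fp, fn, tn)
print("sens={:.2f} spec={:.2f} PPV={:.2f} NPV={:.2f}".format(*s))
# PPV ~ 0.15 (most positives are false alarms) while NPV ~ 0.99,
# matching the conclusion that the assay mainly *excludes* relapse.
```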
Estimating Sobol Sensitivity Indices Using Correlations
Sensitivity analysis is a crucial tool in the development and evaluation of complex mathematical models. Sobol's method is a variance-based global sensitivity analysis technique that has been applied to computational models to assess the relative importance of input parameters on...
Adjoint-Based Sensitivity and Uncertainty Analysis for Density and Composition: A User’s Guide
Favorite, Jeffrey A.; Perko, Zoltan; Kiedrowski, Brian C.; ...
2017-03-01
The ability to perform sensitivity analyses using adjoint-based first-order sensitivity theory has existed for decades. This paper provides guidance on how adjoint sensitivity methods can be used to predict the effect of material density and composition uncertainties in critical experiments, including when these uncertain parameters are correlated or constrained. Two widely used Monte Carlo codes, MCNP6 (Ref. 2) and SCALE 6.2 (Ref. 3), are both capable of computing isotopic density sensitivities in continuous energy and angle. Additionally, Perkó et al. have shown how individual isotope density sensitivities, easily computed using adjoint methods, can be combined to compute constrained first-order sensitivities that may be used in the uncertainty analysis. This paper provides details on how the codes are used to compute first-order sensitivities and how the sensitivities are used in an uncertainty analysis. Constrained first-order sensitivities are computed in a simple example problem.
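Once first-order sensitivities are in hand, the uncertainty analysis is typically a first-order "sandwich" propagation, var(k) ~= s^T C s, with C the covariance of the density/composition inputs. All numbers below are invented for illustration and do not come from the report:

```python
import numpy as np

s = np.array([0.12, -0.05, 0.30])       # adjoint sensitivities dk/dx_i (made up)
sigma = np.array([0.02, 0.01, 0.015])   # 1-sigma input uncertainties (made up)
corr = np.array([[1.0, 0.8, 0.0],       # e.g. two correlated densities
                 [0.8, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])
C = np.outer(sigma, sigma) * corr       # covariance matrix of the inputs

var_k = s @ C @ s                       # first-order variance propagation
print(f"1-sigma uncertainty in k: {np.sqrt(var_k):.5f}")
```

When compositions are constrained (e.g., mass fractions summing to one), the raw sensitivities would first be combined into constrained sensitivities along the lines of Perkó et al. before applying this formula.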
Global sensitivity analysis of DRAINMOD-FOREST, an integrated forest ecosystem model
Shiying Tian; Mohamed A. Youssef; Devendra M. Amatya; Eric D. Vance
2014-01-01
Global sensitivity analysis is a useful tool to understand process-based ecosystem models by identifying key parameters and processes controlling model predictions. This study reported a comprehensive global sensitivity analysis for DRAINMOD-FOREST, an integrated model for simulating water, carbon (C), and nitrogen (N) cycles and plant growth in lowland forests. The...
Development of a sensitivity analysis technique for multiloop flight control systems
NASA Technical Reports Server (NTRS)
Vaillard, A. H.; Paduano, J.; Downing, D. R.
1985-01-01
This report presents the development and application of a sensitivity analysis technique for multiloop flight control systems. This analysis yields very useful information on the sensitivity of the relative-stability criteria of the control system, with variations or uncertainties in the system and controller elements. The sensitivity analysis technique developed is based on the computation of the singular values and singular-value gradients of a feedback-control system. The method is applicable to single-input/single-output as well as multiloop continuous-control systems. Application to sampled-data systems is also explored. The sensitivity analysis technique was applied to a continuous yaw/roll damper stability augmentation system of a typical business jet, and the results show that the analysis is very useful in determining the system elements which have the largest effect on the relative stability of the closed-loop system. As a secondary product of the research reported here, the relative stability criteria based on the concept of singular values were explored.
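The singular-value relative-stability criterion mentioned here can be illustrated directly: the minimum singular value of the return-difference matrix I + L(jw) across frequency measures the multiloop stability margin, and small values flag frequencies where modeling errors can destabilize the loop. A toy sketch with an invented 2x2 loop transfer matrix, not the report's yaw/roll damper model:

```python
import numpy as np

def sigma_min_return_difference(L_of_jw, omegas):
    """Minimum singular value of I + L(jw) at each frequency."""
    out = []
    for w in omegas:
        L = L_of_jw(w)
        out.append(np.linalg.svd(np.eye(L.shape[0]) + L,
                                 compute_uv=False)[-1])
    return np.array(out)

def L_of_jw(w):
    """Invented 2x2 loop transfer matrix for illustration."""
    s = 1j * w
    return np.array([[1.0 / (s + 1.0), 0.2 / (s + 2.0)],
                     [0.1 / (s + 1.0), 2.0 / (s + 0.5)]])

omegas = np.logspace(-1, 2, 5)
for w, sv in zip(omegas, sigma_min_return_difference(L_of_jw, omegas)):
    print(f"omega={w:8.3f}  sigma_min(I+L)={sv:.3f}")
```

Gradients of these singular values with respect to system elements then indicate which elements most affect the relative stability, which is the sensitivity information the report exploits.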
NASA Astrophysics Data System (ADS)
Luo, Jiannan; Lu, Wenxi
2014-06-01
Sobol' sensitivity analyses based on different surrogates were performed on a trichloroethylene (TCE)-contaminated aquifer to assess the sensitivity to remediation efficiency of the design variables: remediation duration, surfactant concentration, and injection rates at four wells. First, surrogate models of a multi-phase flow simulation model were constructed by applying the radial basis function artificial neural network (RBFANN) and Kriging methods, and the two models were compared. Based on the developed surrogate models, the Sobol' method was used to calculate the sensitivity indices of the design variables that affect the remediation efficiency. The coefficient of determination (R2) and the mean square error (MSE) of these two surrogate models demonstrated that both had acceptable approximation accuracy; furthermore, the approximation accuracy of the Kriging model was slightly better than that of the RBFANN model. The Sobol' sensitivity analysis results demonstrated that the remediation duration was the most important variable influencing remediation efficiency, followed by the rates of injection at wells 1 and 3, while the rates of injection at wells 2 and 4 and the surfactant concentration had negligible influence on remediation efficiency. In addition, the high-order sensitivity indices were all smaller than 0.01, which indicates that the interaction effects of these six factors were practically insignificant. The proposed surrogate-based Sobol' sensitivity analysis is an effective tool for calculating sensitivity indices, because it shows the relative contribution of the design variables (individually and in interaction) to the output performance variability with a limited number of runs of a computationally expensive simulation model. The sensitivity analysis results lay a foundation for subsequent optimization of the groundwater remediation process.
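The surrogate-quality check described (R2 and MSE against the expensive simulator) reduces to a few lines; the two mock "surrogates" below are invented stand-ins with different noise levels:

```python
import numpy as np

def surrogate_accuracy(y_true, y_pred):
    """R^2 and MSE of surrogate predictions against simulator outputs."""
    ss_res = np.sum((y_true - y_pred)**2)
    ss_tot = np.sum((y_true - y_true.mean())**2)
    return 1.0 - ss_res / ss_tot, np.mean((y_true - y_pred)**2)

rng = np.random.default_rng(7)
y = rng.random(100)                            # mock simulator outputs
kriging_like = y + rng.normal(0, 0.02, 100)    # tighter surrogate
rbfann_like = y + rng.normal(0, 0.05, 100)     # looser surrogate
for name, yp in [("Kriging-like", kriging_like), ("RBFANN-like", rbfann_like)]:
    r2, mse = surrogate_accuracy(y, yp)
    print(f"{name}: R2={r2:.3f}, MSE={mse:.5f}")
```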
Mokhtari, Amirhossein; Christopher Frey, H; Zheng, Junyu
2006-11-01
Sensitivity analyses of exposure or risk models can help identify the most significant factors to aid in risk management or to prioritize additional research to reduce uncertainty in the estimates. However, sensitivity analysis is challenged by non-linearity, interactions between inputs, and multiple days or time scales. Selected sensitivity analysis methods are evaluated with respect to their applicability to human exposure models with such features using a testbed. The testbed is a simplified version of a US Environmental Protection Agency's Stochastic Human Exposure and Dose Simulation (SHEDS) model. The methods evaluated include the Pearson and Spearman correlation, sample and rank regression, analysis of variance, Fourier amplitude sensitivity test (FAST), and Sobol's method. The first five methods are known as "sampling-based" techniques, whereas the latter two methods are known as "variance-based" techniques. The main objective of the test cases was to identify the main and total contributions of individual inputs to the output variance. Sobol's method and FAST directly quantified these measures of sensitivity. Results show that sensitivity of an input typically changed when evaluated under different time scales (e.g., daily versus monthly). All methods provided similar insights regarding less important inputs; however, Sobol's method and FAST provided more robust insights with respect to sensitivity of important inputs compared to the sampling-based techniques. Thus, the sampling-based methods can be used in a screening step to identify unimportant inputs, followed by application of more computationally intensive refined methods to a smaller set of inputs. The implications of time variation in sensitivity results for risk management are briefly discussed.
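The two sampling-based screens named first can be computed directly from a Monte Carlo sample; Spearman is simply Pearson applied to ranks, which is why it picks up monotone nonlinear effects that depress the Pearson value. The toy response below is invented for illustration:

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient of two 1-D samples."""
    x, y = x - x.mean(), y - y.mean()
    return (x @ y) / np.sqrt((x @ x) * (y @ y))

def spearman(x, y):
    """Spearman rank correlation: Pearson on the rank-transformed data."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(x), rank(y))

rng = np.random.default_rng(8)
n = 2000
X = rng.random((n, 3))
Y = X[:, 0]**5 + 0.3 * X[:, 1] + 0.05 * rng.normal(size=n)  # x3 is inert
for j in range(3):
    print(f"x{j+1}: Pearson={pearson(X[:, j], Y):+.2f}  "
          f"Spearman={spearman(X[:, j], Y):+.2f}")
# The rank measure recovers the monotone x1^5 effect more fully than
# Pearson, while both correctly flag x3 as unimportant.
```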
A new framework for comprehensive, robust, and efficient global sensitivity analysis: 2. Application
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin V.
2016-01-01
Based on the theoretical framework for sensitivity analysis called "Variogram Analysis of Response Surfaces" (VARS), developed in the companion paper, we develop and implement a practical "star-based" sampling strategy (called STAR-VARS) for the application of VARS to real-world problems. We also develop a bootstrap approach to provide confidence level estimates for the VARS sensitivity metrics and to evaluate the reliability of inferred factor rankings. The effectiveness, efficiency, and robustness of STAR-VARS are demonstrated via two real-data hydrological case studies (a 5-parameter conceptual rainfall-runoff model and a 45-parameter land surface scheme hydrology model), and a comparison with the "derivative-based" Morris and "variance-based" Sobol approaches is provided. Our results show that STAR-VARS provides reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being 1-2 orders of magnitude more efficient than the Morris or Sobol approaches.
Pasta, D J; Taylor, J L; Henning, J M
1999-01-01
Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
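A minimal version of the described bootstrap Monte Carlo procedure, applied to an invented two-arm cost-effectiveness dataset rather than the paper's H. pylori model:

```python
import numpy as np

rng = np.random.default_rng(9)
# Invented patient-level data: strategy B costs more but is more effective.
cost_a, cost_b = rng.gamma(2.0, 500.0, 300), rng.gamma(2.0, 620.0, 300)
eff_a, eff_b = rng.normal(0.70, 0.1, 300), rng.normal(0.78, 0.1, 300)

icers = []
for _ in range(5000):
    ia = rng.integers(0, 300, 300)   # resample patients with replacement
    ib = rng.integers(0, 300, 300)
    d_cost = cost_b[ib].mean() - cost_a[ia].mean()
    d_eff = eff_b[ib].mean() - eff_a[ia].mean()
    icers.append(d_cost / d_eff)     # incremental cost-effectiveness ratio

lo, med, hi = np.percentile(icers, [2.5, 50, 97.5])
print(f"ICER median {med:.0f}, 95% interval ({lo:.0f}, {hi:.0f})")
```

Resampling the observed data avoids having to posit theoretical distributions for the inputs, which is the advantage of the bootstrap the authors emphasize.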
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin V.
2015-05-01
Sensitivity analysis is an essential paradigm in Earth and Environmental Systems modeling. However, the term "sensitivity" has a clear definition, based in partial derivatives, only when specified locally around a particular point (e.g., optimal solution) in the problem space. Accordingly, no unique definition exists for "global sensitivity" across the problem space, when considering one or more model responses to different factors such as model parameters or forcings. A variety of approaches have been proposed for global sensitivity analysis, based on different philosophies and theories, and each of these formally characterizes a different "intuitive" understanding of sensitivity. These approaches focus on different properties of the model response at a fundamental level and may therefore lead to different (even conflicting) conclusions about the underlying sensitivities. Here we revisit the theoretical basis for sensitivity analysis, summarize and critically evaluate existing approaches in the literature, and demonstrate their flaws and shortcomings through conceptual examples. We also demonstrate the difficulty involved in interpreting "global" interaction effects, which may undermine the value of existing interpretive approaches. With this background, we identify several important properties of response surfaces that are associated with the understanding and interpretation of sensitivities in the context of Earth and Environmental System models. Finally, we highlight the need for a new, comprehensive framework for sensitivity analysis that effectively characterizes all of the important sensitivity-related properties of model response surfaces.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.; Korivi, Vamshi M.
1991-01-01
A gradient-based design optimization strategy for practical aerodynamic design applications is presented, which uses the 2D thin-layer Navier-Stokes equations. The strategy is based on the classic idea of constructing different modules for performing the major tasks such as function evaluation, function approximation and sensitivity analysis, mesh regeneration, and grid sensitivity analysis, all driven and controlled by a general-purpose design optimization program. The accuracy of aerodynamic shape sensitivity derivatives is validated on two viscous test problems: internal flow through a double-throat nozzle and external flow over a NACA 4-digit airfoil. A significant improvement in aerodynamic performance has been achieved in both cases. Particular attention is given to a consistent treatment of the boundary conditions in the calculation of the aerodynamic sensitivity derivatives for the classic problems of external flow over an isolated lifting airfoil on 'C' or 'O' meshes.
NASA Technical Reports Server (NTRS)
Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)
2004-01-01
A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
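The stated link between the MPP and the safety probability is the core of the first-order reliability method (FORM): with beta the distance from the origin to the MPP in standard normal space, Pf is approximated by Phi(-beta). A minimal sketch with an invented MPP location:

```python
import numpy as np
from math import erf, sqrt

def form_failure_probability(mpp_u):
    """FORM approximation: beta = ||u*|| for the MPP u* in standard
    normal space, and Pf ~= Phi(-beta)."""
    beta = np.linalg.norm(mpp_u)
    phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF
    return beta, phi(-beta)

# Hypothetical MPP, as would be returned by an optimization search.
beta, pf = form_failure_probability(np.array([1.8, 2.1]))
print(f"beta = {beta:.3f}, Pf ~= {pf:.4e}")
```

Derivatives of beta with respect to design variables, obtained from the derivatives of this optimal solution, are exactly the reliability sensitivities used in RBDO.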
Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy
NASA Astrophysics Data System (ADS)
Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng
2018-06-01
To analyse the components of fuzzy output entropy, a decomposition method for fuzzy output entropy is first presented. After the decomposition, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropies contributed by the fuzzy inputs. Based on this decomposition, a new global sensitivity analysis model is established for measuring the effects of the uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only rank the importance of the fuzzy inputs but also, to a certain degree, reflect the structural composition of the response function. Several examples illustrate the validity of the proposed global sensitivity analysis, which provides a useful reference for engineering design and optimization of structural systems.
NASA Astrophysics Data System (ADS)
Razavi, S.; Gupta, H. V.
2015-12-01
Earth and environmental systems models (EESMs) are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. Complexity and dimensionality are manifested by introducing many different factors in EESMs (i.e., model parameters, forcings, boundary conditions, etc.) to be identified. Sensitivity Analysis (SA) provides an essential means for characterizing the role and importance of such factors in producing the model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to 'variogram analysis', that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are limiting cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
Sensitivity Analysis of Multicriteria Choice to Changes in Intervals of Value Tradeoffs
NASA Astrophysics Data System (ADS)
Podinovski, V. V.
2018-03-01
An approach is proposed for the sensitivity (stability) analysis of nondominated alternatives to changes in the bounds of the intervals of value tradeoffs, where the alternatives are selected based on interval data about criteria tradeoffs. Computational methods are developed for analyzing the sensitivity of individual nondominated alternatives and of the set of such alternatives as a whole.
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.; Storaasli, Olaf O.; Qin, Jiangning; Qamar, Ramzi
1994-01-01
An automatic differentiation tool (ADIFOR) is incorporated into a finite element based structural analysis program for shape and non-shape design sensitivity analysis of structural systems. The entire analysis and sensitivity procedures are parallelized and vectorized for high performance computation. Small scale examples to verify the accuracy of the proposed program and a medium scale example to demonstrate the parallel vector performance on multiple CRAY C90 processors are included.
Xi, Qing; Li, Zhao-Fu; Luo, Chuan
2014-05-01
Sensitivity analysis of hydrology and water quality parameters is of great significance for the construction and application of integrated models. Based on the mechanism of the AnnAGNPS model, 31 parameters in four major categories (terrain, hydrology and meteorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian River watershed, a typical small watershed of the hilly region around Taihu Lake, and the perturbation method was used to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that, among the 11 terrain parameters, LS was sensitive to all model outputs, while RMN, RS and RVC were generally sensitive, less sensitive to the sediment output, and insensitive to the remaining outputs. Among the hydrometeorological parameters, CN was highly sensitive to runoff and sediment and relatively sensitive to the remaining outputs. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were less sensitive to sediment and particulate pollutants, while the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. Among the soil parameters, K was quite sensitive to all outputs except runoff, and the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were less sensitive to the corresponding outputs. The simulation and verification results for runoff in the Zhongtian watershed during 2005-2010 showed good accuracy, with deviations of less than 10%. These results provide a direct reference for parameter selection and calibration of the AnnAGNPS model. The runoff simulation results also proved that the sensitivity analysis was practicable for parameter adjustment, demonstrated the model's adaptability to hydrological simulation in the hilly region of the Taihu Lake basin, and provide a reference for wider application of the model in China.
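The perturbation method used for this screen amounts to a normalized central-difference coefficient, (dY/Y) / (dX/X). The sketch below applies it to an invented toy runoff model; the parameter names echo the abstract, but the equation is not AnnAGNPS:

```python
def relative_sensitivity(model, params, name, delta=0.1):
    """Perturbation-method sensitivity: perturb one parameter by
    +/-delta (fraction of its value) and return the normalized
    response change (dY/Y) / (dX/X)."""
    base = model(params)
    up, down = dict(params), dict(params)
    up[name] *= (1 + delta)
    down[name] *= (1 - delta)
    return (model(up) - model(down)) / base / (2 * delta)

# Toy runoff-like model; the CN-type parameter dominates, as in the abstract.
model = lambda p: 0.05 * p["CN"]**1.8 + 2.0 * p["K"] + 0.1 * p["LS"]
params = {"CN": 75.0, "K": 0.3, "LS": 1.2}
for name in params:
    print(name, round(relative_sensitivity(model, params, name), 3))
```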
SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool
Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda
2008-01-01
Background: It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML compatible software tools are limited in their ability to perform global sensitivity analyses of these models. Results: This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis and local and global sensitivity analysis for SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficient, SOBOL's method, and weighted average of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and its intuitive graphical user interface. Conclusion: SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes. PMID:18706080
USE OF SENSITIVITY ANALYSIS ON A PHYSIOLOGICALLY BASED PHARMACOKINETIC (PBPK) MODEL FOR CHLOROFORM IN RATS TO DETERMINE AGE-RELATED TOXICITY.
CR Eklund, MV Evans, and JE Simmons. US EPA, ORD, NHEERL, ETD, PKB, Research Triangle Park, NC.
Chloroform (CHCl3) is a disinfec...
Optimization of Parameter Ranges for Composite Tape Winding Process Based on Sensitivity Analysis
NASA Astrophysics Data System (ADS)
Yu, Tao; Shi, Yaoyao; He, Xiaodong; Kang, Chao; Deng, Bo; Song, Shibo
2017-08-01
This study focuses on the parameter sensitivity of the winding process for composite prepreg tape. Methods for multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis are proposed. A polynomial empirical model of interlaminar shear strength is established by the response surface experimental method. Using this model, the relative sensitivity of the key process parameters, including temperature, tension, pressure and velocity, is calculated, and the single-parameter sensitivity curves are obtained. From the analysis of the sensitivity curves, the stability and instability ranges of each parameter are identified. Finally, an optimization method for the winding process parameters is developed. The analysis results show that the optimized ranges of the process parameters for interlaminar shear strength are: temperature within [100 °C, 150 °C], tension within [275 N, 387 N], pressure within [800 N, 1500 N], and velocity within [0.2 m/s, 0.4 m/s], respectively.
A new framework for comprehensive, robust, and efficient global sensitivity analysis: 1. Theory
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin V.
2016-01-01
Computer simulation models are continually growing in complexity with increasingly more factors to be identified. Sensitivity Analysis (SA) provides an essential means for understanding the role and importance of these factors in producing model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to "variogram analysis," that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. Synthetic functions that resemble actual model response surfaces are used to illustrate the concepts, and show VARS to be as much as two orders of magnitude more computationally efficient than the state-of-the-art Sobol approach. In a companion paper, we propose a practical implementation strategy, and demonstrate the effectiveness, efficiency, and reliability (robustness) of the VARS framework on real-data case studies.
Bialosky, Joel E.; Robinson, Michael E.
2014-01-01
Background: Cluster analysis can be used to identify individuals similar in profile based on response to multiple pain sensitivity measures. There are limited investigations into how empirically derived pain sensitivity subgroups influence clinical outcomes for individuals with spine pain. Objective: The purposes of this study were: (1) to investigate empirically derived subgroups based on pressure and thermal pain sensitivity in individuals with spine pain and (2) to examine subgroup influence on 2-week clinical pain intensity and disability outcomes. Design: A secondary analysis of data from 2 randomized trials was conducted. Methods: Baseline and 2-week outcome data from 157 participants with low back pain (n=110) and neck pain (n=47) were examined. Participants completed demographic, psychological, and clinical information and were assessed using pain sensitivity protocols, including pressure (suprathreshold pressure pain) and thermal pain sensitivity (thermal heat threshold and tolerance, suprathreshold heat pain, temporal summation). A hierarchical agglomerative cluster analysis was used to create subgroups based on pain sensitivity responses. Differences in data for baseline variables, clinical pain intensity, and disability were examined. Results: Three pain sensitivity cluster groups were derived: low pain sensitivity, high thermal static sensitivity, and high pressure and thermal dynamic sensitivity. There were differences in the proportion of individuals meeting a 30% change in pain intensity, where fewer individuals within the high pressure and thermal dynamic sensitivity group (adjusted odds ratio=0.3; 95% confidence interval=0.1, 0.8) achieved successful outcomes. Limitations: Only 2-week outcomes are reported. Conclusions: Distinct pain sensitivity cluster groups for individuals with spine pain were identified, with the high pressure and thermal dynamic sensitivity group showing worse clinical outcome for pain intensity. Future studies should aim to confirm these findings. PMID:24764070
System parameter identification from projection of inverse analysis
NASA Astrophysics Data System (ADS)
Liu, K.; Law, S. S.; Zhu, X. Q.
2017-05-01
The output of a system due to a change of its parameters is often approximated with the sensitivity matrix from the first-order Taylor series. The system output can be measured in practice, but the perturbation in the system parameters is usually not available. Inverse sensitivity analysis can be adopted to estimate the unknown system parameter perturbation from the difference between the observed output data and the corresponding analytical output data calculated from the original system model. The inverse sensitivity analysis is re-visited in this paper with improvements based on Principal Component Analysis of the analytical data calculated from the known system model. The identification equation is projected into a subspace of principal components of the system output, and the sensitivity of the inverse analysis is improved with an iterative model updating procedure. The proposed method is numerically validated with a planar truss structure and with dynamic experiments on a seven-storey planar steel frame. Results show that it is robust to measurement noise, and that the location and extent of stiffness perturbation can be identified with better accuracy compared with the conventional response sensitivity-based method.
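Projecting the identification equation onto leading principal components amounts to solving the sensitivity system with a truncated SVD, which discards noise-dominated directions. The matrices below are random stand-ins for a structural model, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(10)
S = rng.normal(size=(50, 8))              # sensitivity matrix dY/dtheta (mock)
true_dtheta = np.zeros(8)
true_dtheta[2] = -0.05                    # e.g. 5% stiffness loss in element 2
dy = S @ true_dtheta + 1e-3 * rng.normal(size=50)   # noisy response change

# Truncated-SVD (principal-component) solution of S * dtheta = dy:
U, s, Vt = np.linalg.svd(S, full_matrices=False)
k = 5                                     # retain the k leading components
dtheta = Vt[:k].T @ ((U[:, :k].T @ dy) / s[:k])

print(np.round(dtheta, 3))                # element 2 near -0.05, rest near 0
```

In the paper's iterative scheme this projected solve would be repeated, updating the model (and hence S) until the estimated perturbation converges.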
Ageing of Insensitive DNAN Based Melt-Cast Explosives
2014-08-01
Aged samples were subjected to a diurnal temperature cycle representative of the MEAO climate, with the first test condition chosen to provide a worst-case scenario. Analysis of the ingredient composition, theoretical maximum density, sensitiveness, and mechanical and thermal properties of the ARX-4027 and ARX-4028 formulations was conducted.
Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.
2009-01-01
We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall‐runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error‐based weighting of observation and prior information data, local sensitivity analysis, and single‐objective function nonlinear regression provides quantitative evaluation of sensitivity of the 35 model parameters to the data, identification of data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest predictive ability of the calibrated model typical of hydrologic models.
NASA Astrophysics Data System (ADS)
Lin, Y.; Li, W. J.; Yu, J.; Wu, C. Z.
2018-04-01
Remote sensing technology offers significant advantages for monitoring and analysing the ecological environment. Using automatic extraction algorithms, information on the environmental resources of a tourist region can be obtained from remote sensing imagery and, combined with GIS spatial analysis and landscape pattern analysis, quantitatively analysed and interpreted. In this study, taking the Chaohu Lake Basin as an example, a Landsat-8 multispectral satellite image from October 2015 was used. Integrating the automatic ELM (Extreme Learning Machine) classification results with digital elevation model and slope data, an eco-sensitivity evaluation index was constructed based on AHP and overlay analysis, using human disturbance degree, land use degree, primary productivity, landscape evenness, vegetation coverage, DEM, slope and the normalized water body index as evaluation factors. According to the value of the eco-sensitivity evaluation index, the Chaohu Lake area was divided by equal-interval reclassification in GIS into four grades: very sensitive, sensitive, sub-sensitive and insensitive. The results of the eco-sensitivity analysis show that the very sensitive area covered 4577.4378 km2, about 33.12 %; the sensitive area 5130.0522 km2, about 37.12 %; the sub-sensitive area 3729.9312 km2, about 26.99 %; and the insensitive area 382.4399 km2, about 2.77 %. Spatial differences in the ecological sensitivity of the Chaohu Lake basin were also found: the most sensitive areas were mainly located in areas of high elevation and steep terrain, insensitive areas were mainly distributed on gently sloping platform areas, and the sensitive and sub-sensitive areas were mainly agricultural land and woodland. Through the eco-sensitivity analysis of the study area, automatic recognition and analysis techniques for remote sensing imagery are integrated into ecological analysis and ecological regional planning, which can provide a reliable scientific basis for rational planning and sustainable regional development of the Chaohu Lake tourist area.
Adjoint-based sensitivity analysis of low-order thermoacoustic networks using a wave-based approach
NASA Astrophysics Data System (ADS)
Aguilar, José G.; Magri, Luca; Juniper, Matthew P.
2017-07-01
Strict pollutant emission regulations are pushing gas turbine manufacturers to develop devices that operate in lean conditions, with the downside that combustion instabilities are more likely to occur. Methods to predict and control unstable modes inside combustion chambers have been developed in recent decades but, in some cases, they are computationally expensive. Sensitivity analysis aided by adjoint methods provides valuable sensitivity information at a low computational cost. This paper introduces adjoint methods and their application in wave-based low-order network models, which are used as industrial tools, to predict and control thermoacoustic oscillations. Two thermoacoustic models of interest are analyzed. First, in the zero-Mach-number limit, a nonlinear eigenvalue problem is derived, and continuous and discrete adjoint methods are used to obtain the sensitivities of the system to small modifications. Sensitivities to base-state modifications and feedback devices are presented. Second, a more general case with non-zero Mach number, a moving flame front and a choked outlet is presented. The influence of entropy waves on the computed sensitivities is shown.
Dynamic Modeling of Cell-Free Biochemical Networks Using Effective Kinetic Models
2015-03-16
Global sensitivity analysis, using the variance-based method of Sobol', was applied to estimate which parameters controlled the performance of the reduced-order coagulation model; the reported sensitivity value was the maximum uncertainty in that value estimated by the Sobol' method.
NASA Astrophysics Data System (ADS)
Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.
2015-07-01
Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
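A minimal sketch of the Sobol' machinery adapted above to forcing errors, using the standard pick-freeze (Saltelli) estimator for first-order indices; the three-input toy model and its coefficients are invented and stand in for the Utah Energy Balance model:

```python
import numpy as np

rng = np.random.default_rng(0)

def sobol_first_order(model, sampler, n=100_000):
    """First-order Sobol' indices via the pick-freeze estimator.
    sampler(n) must return an (n, d) array of independent input samples."""
    A, B = sampler(n), sampler(n)
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(A.shape[1])
    for i in range(A.shape[1]):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                 # vary only input i between A and ABi
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

# Hypothetical forcing-error model: biases dominate a squared noise term
def model(X):
    bias_T, bias_P, noise = X[:, 0], X[:, 1], X[:, 2]
    return 1.0 * bias_T + 2.0 * bias_P + 0.3 * noise**2

print(sobol_first_order(model, lambda n: rng.standard_normal((n, 3))))
```

Even in this toy, the bias terms carry most of the variance, echoing the paper's finding that outputs are more sensitive to forcing biases than to random errors.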
He, Li-hong; Wang, Hai-yan; Lei, Xiang-dong
2016-02-01
Models based on vegetation ecophysiological processes contain many parameters, and reasonable parameter values will greatly improve simulation ability. Sensitivity analysis, as an important method to screen out the sensitive parameters, can comprehensively analyze how model parameters affect the simulation results. In this paper, we conducted a parameter sensitivity analysis of the BIOME-BGC model with a case study of simulating the net primary productivity (NPP) of Larix olgensis forest in Wangqing, Jilin Province. First, through contrastive analysis of field measurement data and the simulation results, we tested the BIOME-BGC model's capability of simulating the NPP of L. olgensis forest. Then, the Morris and EFAST sensitivity methods were used to screen the sensitive parameters that had a strong influence on NPP. On this basis, we also quantitatively estimated the sensitivity of the screened parameters, and calculated the global, first-order and second-order sensitivity indices. The results showed that the BIOME-BGC model could well simulate the NPP of L. olgensis forest in the sample plot. The Morris sensitivity method provided a reliable parameter sensitivity analysis result under the condition of a relatively small sample size. The EFAST sensitivity method could quantitatively measure the impact of a single parameter on the simulation result, as well as the interaction between parameters in the BIOME-BGC model. The influential sensitive parameters for L. olgensis forest NPP were the new stem carbon to new leaf carbon allocation ratio and the leaf carbon to nitrogen ratio; the effect of their interaction was significantly greater than the other parameters' interaction effects.
How often do sensitivity analyses for economic parameters change cost-utility analysis conclusions?
Schackman, Bruce R; Gold, Heather Taffet; Stone, Patricia W; Neumann, Peter J
2004-01-01
There is limited evidence about the extent to which sensitivity analysis has been used in the cost-effectiveness literature. Sensitivity analyses for health-related QOL (HR-QOL), cost, and discount rate economic parameters are of particular interest because they measure the effects of methodological and estimation uncertainties. The objective was to investigate the use of sensitivity analyses in the pharmaceutical cost-utility literature, in order to test whether a change in economic parameters could result in a different conclusion regarding the cost effectiveness of the intervention analysed. Cost-utility analyses of pharmaceuticals identified in a prior comprehensive audit (70 articles) were reviewed and further audited. For each base case for which sensitivity analyses were reported (n = 122), up to two sensitivity analyses for HR-QOL (n = 133), cost (n = 99), and discount rate (n = 128) were examined. Mentions in each article of thresholds for acceptable cost-utility ratios were recorded (36 in total). Cost-utility ratios were denominated in US dollars for the year reported in each of the original articles, in order to determine whether a different conclusion would have been indicated at the time the article was published. Quality ratings from the original audit were compared between articles whose sensitivity analysis results crossed the cost-utility ratio threshold above the base-case result and those that did not. The most frequently mentioned cost-utility thresholds were $US20,000/QALY, $US50,000/QALY, and $US100,000/QALY. The proportions of sensitivity analyses reporting quantitative results that crossed the threshold above the base-case results (or where the sensitivity analysis result was dominated) were 31% for HR-QOL sensitivity analyses, 20% for cost sensitivity analyses, and 15% for discount rate sensitivity analyses. Almost half of the discount rate sensitivity analyses did not report quantitative results. Articles reporting sensitivity analyses whose results crossed the cost-utility threshold above the base-case results (n = 25) were of somewhat higher quality, and were more likely to justify their sensitivity analysis parameters, than those that did not (n = 45), but the overall quality rating was only moderate. Sensitivity analyses for economic parameters are widely reported and often identify whether choosing different assumptions leads to a different conclusion regarding cost effectiveness. Changes in HR-QOL and cost parameters should be used to test alternative guideline recommendations when there is uncertainty regarding these parameters. Changes in discount rates less frequently produce results that would change the conclusion about cost effectiveness. Improving the overall quality of published studies and describing the justifications for parameter ranges would allow more meaningful conclusions to be drawn from sensitivity analyses.
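The threshold-crossing test audited above reduces to simple arithmetic on incremental cost-utility ratios. A hedged sketch with invented cost and QALY numbers; the $50,000/QALY threshold is one of the values mentioned in the review:

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-utility ratio in $ per QALY gained."""
    return delta_cost / delta_qaly

def crosses_threshold(base, sens_values, threshold=50_000):
    """Flag sensitivity-analysis results that cross the threshold
    upward from a cost-effective base case."""
    return [v for v in sens_values if base <= threshold < v]

base_case = icer(9_000, 0.45)                        # $20,000/QALY (invented)
sens = [icer(9_000, q) for q in (0.45, 0.25, 0.12)]  # HR-QOL gain varied downward
print(base_case, crosses_threshold(base_case, sens)) # the 0.12-QALY case crosses
```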
VARS-TOOL: A Comprehensive, Efficient, and Robust Sensitivity Analysis Toolbox
NASA Astrophysics Data System (ADS)
Razavi, S.; Sheikholeslami, R.; Haghnegahdar, A.; Esfahbod, B.
2016-12-01
VARS-TOOL is an advanced sensitivity and uncertainty analysis toolbox, applicable to the full range of computer simulation models, including Earth and Environmental Systems Models (EESMs). The toolbox was originally developed around VARS (Variogram Analysis of Response Surfaces), a general framework for Global Sensitivity Analysis (GSA) that utilizes the variogram/covariogram concept to characterize the full spectrum of sensitivity-related information, thereby providing a comprehensive set of "global" sensitivity metrics with minimal computational cost. VARS-TOOL is unique in that, with a single sample set (set of simulation model runs), it simultaneously generates three philosophically different families of global sensitivity metrics: (1) variogram-based metrics called IVARS (Integrated Variogram Across a Range of Scales - VARS approach), (2) variance-based total-order effects (Sobol approach), and (3) derivative-based elementary effects (Morris approach). VARS-TOOL also includes two novel features. The first is a sequential sampling algorithm, called Progressive Latin Hypercube Sampling (PLHS), which allows progressively increasing the sample size for GSA while maintaining the required sample distributional properties. The second is a "grouping strategy" that adaptively groups the model parameters based on their sensitivity or functioning to maximize the reliability of GSA results. These features, in conjunction with bootstrapping, enable the user to monitor the stability, robustness, and convergence of GSA as the sample size increases for any given case study. VARS-TOOL has been shown to achieve robust and stable results with sample sizes (numbers of model runs) one to two orders of magnitude smaller than those required by alternative tools. VARS-TOOL, available in MATLAB and Python, is under continuous development, and new capabilities and features are forthcoming.
Overview of Sensitivity Analysis and Shape Optimization for Complex Aerodynamic Configurations
NASA Technical Reports Server (NTRS)
Newman, Perry A.; Newman, James C., III; Barnwell, Richard W.; Taylor, Arthur C., III; Hou, Gene J.-W.
1998-01-01
This paper presents a brief overview of some of the more recent advances in steady aerodynamic shape-design sensitivity analysis and optimization, based on advanced computational fluid dynamics. The focus here is on those methods particularly well-suited to the study of geometrically complex configurations and their potentially complex associated flow physics. When nonlinear state equations are considered in the optimization process, difficulties are found in the application of sensitivity analysis. Some techniques for circumventing such difficulties are currently being explored and are included here. Attention is directed to methods that utilize automatic differentiation to obtain aerodynamic sensitivity derivatives for both complex configurations and complex flow physics. Various examples of shape-design sensitivity analysis for unstructured-grid computational fluid dynamics algorithms are demonstrated for different formulations of the sensitivity equations. Finally, the use of advanced, unstructured-grid computational fluid dynamics in multidisciplinary analyses and multidisciplinary sensitivity analyses within future optimization processes is recommended and encouraged.
NASA Technical Reports Server (NTRS)
Smith, S. D.; Tevepaugh, J. A.; Penny, M. M.
1975-01-01
The exhaust plumes of the space shuttle solid rocket motors can have a significant effect on the base pressure and base drag of the shuttle vehicle. A parametric analysis was conducted to assess the sensitivity of the initial plume expansion angle of analytical solid rocket motor flow fields to various analytical input parameters and operating conditions. The results of the analysis are presented and conclusions reached regarding the sensitivity of the initial plume expansion angle to each parameter investigated. Operating conditions parametrically varied were chamber pressure, nozzle inlet angle, nozzle throat radius of curvature ratio and propellant particle loading. Empirical particle parameters investigated were mean size, local drag coefficient and local heat transfer coefficient. Sensitivity of the initial plume expansion angle to gas thermochemistry model and local drag coefficient model assumptions were determined.
Computational methods for efficient structural reliability and reliability sensitivity analysis
NASA Technical Reports Server (NTRS)
Wu, Y.-T.
1993-01-01
This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
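A simplified, non-adaptive sketch of the reweighting idea behind importance sampling for failure probabilities; the limit state, shift point, and dimensions are hypothetical, and a full AIS scheme would update the sampling density incrementally as the abstract describes:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def is_failure_probability(g, center, n=50_000, d=2):
    """Estimate P[g(X) < 0] for standard-normal X by sampling from a density
    shifted toward the failure region and reweighting by p(x)/q(x)."""
    X = rng.standard_normal((n, d)) + center   # samples from N(center, I)
    log_w = (stats.norm.logpdf(X).sum(axis=1)
             - stats.norm.logpdf(X - center).sum(axis=1))
    return np.mean((g(X) < 0) * np.exp(log_w))

# Hypothetical limit state: failure when x0 + x1 > 4, i.e. g = 4 - x0 - x1
g = lambda X: 4.0 - X[:, 0] - X[:, 1]
print(is_failure_probability(g, center=np.array([2.0, 2.0])))
# exact answer for comparison: 1 - Phi(4/sqrt(2)) ~ 2.3e-3
```

Centering the sampling density near the failure domain is what keeps the sampler from wasting almost all of its draws in the safe region, the same motivation given for the adaptive scheme above.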
NASA Astrophysics Data System (ADS)
Hameed, M.; Demirel, M. C.; Moradkhani, H.
2015-12-01
The Global Sensitivity Analysis (GSA) approach helps identify the effectiveness of model parameters or inputs and thus provides essential information about model performance. In this study, the effects of the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters, forcing data, and initial conditions are analysed by using two GSA methods: Sobol' and the Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three different periods: one year, four years, and seven years. Four factors are considered and evaluated by using the two sensitivity analysis methods: the simulation length, parameter range, model initial conditions, and the reliability of the global sensitivity analysis methods. The reliability of the sensitivity analysis results is compared based on 1) the agreement between the two sensitivity analysis methods (Sobol' and FAST) in highlighting the same parameters or inputs as the most influential and 2) how coherently the methods rank these sensitive parameters under the same conditions (sub-basins and simulation length). The results show coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, it is found that the FAST method is sufficient to evaluate the main effects of the model parameters and inputs. Another conclusion of this study is that the smaller the parameter or initial-condition ranges, the more consistent and coherent the results of the two sensitivity analysis methods.
[Analysis and experimental verification of sensitivity and SNR of laser warning receiver].
Zhang, Ji-Long; Wang, Ming; Tian, Er-Ming; Li, Xiao; Wang, Zhi-Bin; Zhang, Yue
2009-01-01
To counter the increasingly serious threat from hostile lasers in modern warfare, research on laser warning technology and systems is urgent; sensitivity and signal-to-noise ratio (SNR) are two important performance parameters of a laser warning system. In the present paper, based on signal statistical detection theory, a method for calculating the sensitivity and SNR of a coherent-detection laser warning receiver (LWR) is proposed. First, the probability distributions of the laser signal and receiver noise were analyzed. Second, based on threshold detection theory and the Neyman-Pearson criterion, the signal current equation was established by introducing the detection probability and false alarm rate factors, and the mathematical expressions for sensitivity and SNR were deduced. Finally, using this method, the sensitivity and SNR of the sinusoidal grating laser warning receiver developed by our group were analyzed; the theoretical calculations and experimental results indicate that the SNR analysis method is feasible and can be used in performance analysis of LWRs.
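Under Gaussian statistics the Neyman-Pearson construction invoked above has a closed form: the false-alarm rate fixes the threshold, and the detection probability follows from the SNR. A sketch assuming a known-signal, additive-Gaussian-noise model, which is simpler than the paper's full coherent-detection analysis:

```python
from scipy.stats import norm

def detection_probability(snr_amplitude, p_fa):
    """P_d for a known signal in Gaussian noise, Neyman-Pearson threshold
    set by the false-alarm rate: P_d = Q(Q^{-1}(P_fa) - SNR)."""
    threshold = norm.isf(p_fa)           # normalized decision threshold
    return norm.sf(threshold - snr_amplitude)

def required_snr(p_d, p_fa):
    """Amplitude SNR needed to reach (P_d, P_fa); this fixes the minimum
    detectable signal, i.e. the receiver sensitivity."""
    return norm.isf(p_fa) - norm.isf(p_d)

print(detection_probability(5.0, 1e-6))   # ~0.60 for an amplitude SNR of 5
print(required_snr(0.99, 1e-6))           # ~7.1 for P_d = 99%, P_fa = 1e-6
```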
Sensitivity Analysis for Multidisciplinary Systems (SAMS)
2016-12-01
Snyder, Richard D.
This interim report describes developments supporting both mode-based structural representations and time-dependent, nonlinear finite element structural dynamics, including an adaptation and sensitivity toolkit covering elasticity, heat transfer, and compressible flow; an adjoint solver for sensitivity analysis; and high-order finite elements.
Sensitivity analysis and approximation methods for general eigenvalue problems
NASA Technical Reports Server (NTRS)
Murthy, D. V.; Haftka, R. T.
1986-01-01
Optimization of dynamic systems involving complex non-Hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-Hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed, based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on the trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of the appropriate approximation technique as a function of the matrix size, the number of design variables, the number of eigenvalues of interest and the number of design points at which approximation is sought.
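The basic eigenvalue sensitivity underlying such analyses has a compact form for simple eigenvalues of non-Hermitian matrices: dλ/dp = yᴴ(∂A/∂p)x / (yᴴx), with x and y the right and left eigenvectors. A minimal sketch with an invented matrix and parameter dependence, checked against a finite difference:

```python
import numpy as np

def eigenvalue_derivative(A, dA, k):
    """dlambda_k/dp = y^H (dA/dp) x / (y^H x), valid for a simple
    eigenvalue lambda_k of a (possibly non-Hermitian) matrix A."""
    lam, X = np.linalg.eig(A)
    mu, Y = np.linalg.eig(A.conj().T)          # columns: left eigenvectors
    j = np.argmin(np.abs(mu.conj() - lam[k]))  # pair y with lambda_k
    x, y = X[:, k], Y[:, j]
    return (y.conj() @ dA @ x) / (y.conj() @ x)

rng = np.random.default_rng(2)
A = np.diag([1.0, 2.0, 3.0]) + 0.05 * rng.standard_normal((3, 3))
dA = rng.standard_normal((3, 3))               # hypothetical dA/dp
k = int(np.argmax(np.linalg.eig(A)[0].real))   # track the largest eigenvalue

h = 1e-7                                       # finite-difference check
fd = (np.max(np.linalg.eigvals(A + h * dA).real)
      - np.max(np.linalg.eigvals(A).real)) / h
print(eigenvalue_derivative(A, dA, k).real, fd)
```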
A value-based medicine cost-utility analysis of idiopathic epiretinal membrane surgery.
Gupta, Omesh P; Brown, Gary C; Brown, Melissa M
2008-05-01
To perform a reference-case, cost-utility analysis of epiretinal membrane (ERM) surgery using current literature on outcomes and complications. Computer-based, value-based medicine analysis. Decision analyses were performed under two scenarios: ERM surgery in the better-seeing eye and ERM surgery in the worse-seeing eye. The models applied long-term published data primarily from the Blue Mountains Eye Study and the Beaver Dam Eye Study. Visual acuity and major complications were derived from 25-gauge pars plana vitrectomy studies. Patient-based, time trade-off utility values, Markov modeling, sensitivity analysis, and net present value adjustments were used in the design and calculation of results. Main outcome measures included the number of discounted quality-adjusted life-years (QALYs) gained and dollars spent per QALY gained. ERM surgery in the better-seeing eye compared with observation resulted in a mean gain of 0.755 discounted QALYs (3% annual rate) per patient treated. This model resulted in $4,680 per QALY for this procedure. When sensitivity analysis was performed, utility values varied from $6,245 to $3,746/QALY gained, medical costs varied from $3,510 to $5,850/QALY gained, and the ERM recurrence rate increased the figure to $5,524/QALY. ERM surgery in the worse-seeing eye compared with observation resulted in a mean gain of 0.27 discounted QALYs per patient treated. The $/QALY was $16,146, with a range of $20,183 to $12,110 based on sensitivity analyses. Utility values ranged from $21,520 to $12,916/QALY, and the ERM recurrence rate increased the figure to $16,846/QALY based on sensitivity analysis. ERM surgery is a very cost-effective procedure when compared with other interventions across medical subspecialties.
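The discounting and $/QALY arithmetic behind such results can be sketched as follows; the utility stream and cost are illustrative values chosen to land near the reported $4,680/QALY, not the study's actual Markov model:

```python
def discounted_qalys(utility_gains, rate=0.03):
    """Present value of a stream of annual utility gains (QALYs/year)."""
    return sum(u / (1.0 + rate) ** t for t, u in enumerate(utility_gains))

def cost_per_qaly(cost, qaly_gain):
    return cost / qaly_gain

# Hypothetical 10-year horizon: surgery raises utility by 0.09/year
dq = discounted_qalys([0.09] * 10)      # ~0.79 discounted QALYs at 3%/year
print(dq, cost_per_qaly(3_700, dq))     # ~$4,700/QALY with an assumed cost
```

A sensitivity analysis in this setting simply re-runs the same arithmetic with the utility gain, cost, or recurrence assumptions pushed to the ends of their plausible ranges.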
Fast computation of derivative based sensitivities of PSHA models via algorithmic differentiation
NASA Astrophysics Data System (ADS)
Leövey, Hernan; Molkenthin, Christian; Scherbaum, Frank; Griewank, Andreas; Kuehn, Nicolas; Stafford, Peter
2015-04-01
Probabilistic seismic hazard analysis (PSHA) is the preferred tool for estimating the potential ground-shaking hazard due to future earthquakes at a site of interest. A modern PSHA represents a complex framework which combines different models with possibly many inputs. Sensitivity analysis is a valuable tool for quantifying changes of a model output as inputs are perturbed, identifying critical input parameters and obtaining insight into the model behavior. Differential sensitivity analysis relies on calculating first-order partial derivatives of the model output with respect to its inputs. Moreover, derivative-based global sensitivity measures (Sobol' & Kucherenko '09) can be used in practice to detect non-essential inputs of the models, thus restricting the focus of attention to a possibly much smaller set of inputs. Nevertheless, obtaining first-order partial derivatives of complex models with traditional approaches can be very challenging, and usually increases the computational complexity linearly with the number of inputs appearing in the models. In this study we show how Algorithmic Differentiation (AD) tools can be used in a complex framework such as PSHA to successfully estimate derivative-based sensitivities, as is done in various other domains such as meteorology or aerodynamics, without significant increase in the computational complexity required for the original computations. First we demonstrate the feasibility of the AD methodology by comparing AD-derived sensitivities to analytically derived sensitivities for a basic PSHA case using a simple ground-motion prediction equation. In a second step, we derive sensitivities via AD for a more complex PSHA study using a ground-motion attenuation relation based on a stochastic method to simulate strong motion. The presented approach is general enough to accommodate more advanced PSHA studies of higher complexity.
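A toy forward-mode AD implementation with dual numbers shows why AD delivers exact derivatives at roughly the cost of the original evaluation, which is the property exploited above. The quadratic "ground-motion" function is a placeholder of my own, not a real GMPE:

```python
class Dual:
    """Minimal forward-mode AD: each number carries (value, derivative)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__
    def __pow__(self, n):                  # integer powers only, for brevity
        return Dual(self.val**n, n * self.val**(n - 1) * self.dot)

def gm_toy(m, r):
    """Hypothetical stand-in for a ground-motion term: quadratic in
    magnitude m with a linear distance r term; NOT a real GMPE."""
    return 1.2 + 0.8 * m + 0.05 * m**2 + (-0.01) * r

# Sensitivity to magnitude at (m, r) = (6, 20), exact to machine precision
print(gm_toy(Dual(6.0, 1.0), 20.0).dot)   # 0.8 + 2*0.05*6 = 1.4
```

Seeding the input of interest with a unit derivative (the `Dual(6.0, 1.0)` above) is the forward-mode analogue of perturbing one input in a finite-difference study, but without step-size error.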
A wideband FMBEM for 2D acoustic design sensitivity analysis based on direct differentiation method
NASA Astrophysics Data System (ADS)
Chen, Leilei; Zheng, Changjun; Chen, Haibo
2013-09-01
This paper presents a wideband fast multipole boundary element method (FMBEM) for two-dimensional acoustic design sensitivity analysis based on the direct differentiation method. The wideband fast multipole method (FMM), formed by combining the original FMM and the diagonal-form FMM, is used to accelerate the matrix-vector products in the boundary element analysis. The Burton-Miller formulation is used to overcome the fictitious frequency problem that arises when using a single Helmholtz boundary integral equation for exterior boundary-value problems. The strongly singular and hypersingular integrals in the sensitivity equations can be evaluated explicitly and directly by using the piecewise constant discretization. The iterative solver GMRES is applied to accelerate the solution of the linear system of equations. A set of optimal parameters for wideband FMBEM design sensitivity analysis is obtained by observing the performance of the wideband FMM algorithm in terms of computing time and memory usage. Numerical examples are presented to demonstrate the efficiency and validity of the proposed algorithm.
Design sensitivity analysis and optimization tool (DSO) for sizing design applications
NASA Technical Reports Server (NTRS)
Chang, Kuang-Hua; Choi, Kyung K.; Perng, Jyh-Hwa
1992-01-01
The DSO tool, a structural design software system that provides the designer with a graphics-based menu-driven design environment to perform easy design optimization for general applications, is presented. Three design stages, preprocessing, design sensitivity analysis, and postprocessing, are implemented in the DSO to allow the designer to carry out the design process systematically. A framework, including data base, user interface, foundation class, and remote module, has been designed and implemented to facilitate software development for the DSO. A number of dedicated commercial software/packages have been integrated in the DSO to support the design procedures. Instead of parameterizing an FEM, design parameters are defined on a geometric model associated with physical quantities, and the continuum design sensitivity analysis theory is implemented to compute design sensitivity coefficients using postprocessing data from the analysis codes. A tracked vehicle road wheel is given as a sizing design application to demonstrate the DSO's easy and convenient design optimization process.
Kim, Min-Uk; Moon, Kyong Whan; Sohn, Jong-Ryeul; Byeon, Sang-Hoon
2018-05-18
We studied sensitive weather variables for consequence analysis, in the case of chemical leaks on the user side of offsite consequence analysis (OCA) tools. We used OCA tools Korea Offsite Risk Assessment (KORA) and Areal Location of Hazardous Atmospheres (ALOHA) in South Korea and the United States, respectively. The chemicals used for this analysis were 28% ammonia (NH₃), 35% hydrogen chloride (HCl), 50% hydrofluoric acid (HF), and 69% nitric acid (HNO₃). The accident scenarios were based on leakage accidents in storage tanks. The weather variables were air temperature, wind speed, humidity, and atmospheric stability. Sensitivity analysis was performed using the Statistical Package for the Social Sciences (SPSS) program for dummy regression analysis. Sensitivity analysis showed that impact distance was not sensitive to humidity. Impact distance was most sensitive to atmospheric stability, and was also more sensitive to air temperature than wind speed, according to both the KORA and ALOHA tools. Moreover, the weather variables were more sensitive in rural conditions than in urban conditions, with the ALOHA tool being more influenced by weather variables than the KORA tool. Therefore, if using the ALOHA tool instead of the KORA tool in rural conditions, users should be careful not to cause any differences in impact distance due to input errors of weather variables, with the most sensitive one being atmospheric stability.
NASA Astrophysics Data System (ADS)
Park, Jihoon; Yang, Guang; Satija, Addy; Scheidt, Céline; Caers, Jef
2016-12-01
Sensitivity analysis plays an important role in geoscientific computer experiments, whether for forecasting, data assimilation or model calibration. In this paper we focus on an extension of a method of regionalized sensitivity analysis (RSA) to applications typical in the Earth Sciences. Such applications involve the building of large complex spatial models, the application of computationally extensive forward modeling codes and the integration of heterogeneous sources of model uncertainty. The aim of this paper is to be practical: 1) provide a Matlab code, 2) provide novel visualization methods to aid users in getting a better understanding of the sensitivity, 3) provide a method based on kernel principal component analysis (KPCA) and self-organizing maps (SOM) to account for spatial uncertainty typical in Earth Science applications, and 4) provide an illustration on a real field case where the above-mentioned complexities present themselves. We present methods that extend the original RSA method in several ways. First, we present the calculation of conditional effects, defined as the sensitivity of a parameter given a level of another parameter. Second, we show how this conditional effect can be used to choose nominal values or ranges at which to fix insensitive parameters while minimally affecting uncertainty in the response. Third, we develop a method based on KPCA and SOM to assign a rank to spatial models in order to calculate the sensitivity to spatial variability in the models. A large oil/gas reservoir case is used as an illustration of these ideas.
NASA Technical Reports Server (NTRS)
Ustinov, E.
1999-01-01
Sensitivity analysis based on the adjoint equation of radiative transfer is applied to the case of atmospheric remote sensing in the thermal spectral region with non-negligible atmospheric scattering.
Voulgarelis, Dimitrios; Velayudhan, Ajoy; Smith, Frank
2017-01-01
Agent-based models provide a formidable tool for exploring the complex and emergent behaviour of biological systems, and give accurate results, but with the drawback of requiring substantial computational power and time for subsequent analysis. Equation-based models, on the other hand, can more easily be used for complex analysis on a much shorter timescale. This paper formulates an ordinary differential equation (ODE) and stochastic differential equation (SDE) model to capture the behaviour of an existing agent-based model of tumour cell reprogramming, and applies it to the optimization of possible treatment as well as dosage sensitivity analysis. For certain values of the parameter space, a close match between the equation-based and agent-based models is achieved. The need for a division of labour between the two approaches is explored.
Performance evaluation of a lossy transmission lines based diode detector at cryogenic temperature.
Villa, E; Aja, B; de la Fuente, L; Artal, E
2016-01-01
This work is focused on the design, fabrication, and performance analysis of a square-law Schottky diode detector based on lossy transmission lines working under cryogenic temperature (15 K). The design analysis of a microwave detector, based on a planar gallium-arsenide low effective Schottky barrier height diode, is reported, which is aimed for achieving large input return loss as well as flat sensitivity versus frequency. The designed circuit demonstrates good sensitivity, as well as a good return loss in a wide bandwidth at Ka-band, at both room (300 K) and cryogenic (15 K) temperatures. A good sensitivity of 1000 mV/mW and input return loss better than 12 dB have been achieved when it works as a zero-bias Schottky diode detector at room temperature, increasing the sensitivity up to a minimum of 2200 mV/mW, with the need of a DC bias current, at cryogenic temperature.
Single-molecule detection: applications to ultrasensitive biochemical analysis
NASA Astrophysics Data System (ADS)
Castro, Alonso; Shera, E. Brooks
1995-06-01
Recent developments in laser-based detection of fluorescent molecules have made possible the implementation of very sensitive techniques for biochemical analysis. We present and discuss our experiments on the applications of our recently developed technique of single-molecule detection to the analysis of molecules of biological interest. These newly developed methods are capable of detecting and identifying biomolecules at the single-molecule level of sensitivity. In one case, identification is based on measuring fluorescence brightness from single molecules. In another, molecules are classified by determining their electrophoretic velocities.
Design sensitivity analysis using EAL. Part 1: Conventional design parameters
NASA Technical Reports Server (NTRS)
Dopker, B.; Choi, Kyung K.; Lee, J.
1986-01-01
A numerical implementation of design sensitivity analysis of built-up structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implementation presented is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It was shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program and a separate database. Conventional (sizing) design parameters such as the cross-sectional area of beams or the thickness of plates and plane elastic solid components are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.
2010-01-01
Table-of-contents fragments describing Multi-Disciplinary, Multi-Output Sensitivity Analysis (MIMOSA): introduction to Research Thrust 1, the MIMOSA approach, collaborative consistency of MIMOSA, and formulation of MIMOSA.
Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil
NASA Technical Reports Server (NTRS)
Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris
2016-01-01
Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.
Ethical Sensitivity in Nursing Ethical Leadership: A Content Analysis of Iranian Nurses Experiences
Esmaelzadeh, Fatemeh; Abbaszadeh, Abbas; Borhani, Fariba; Peyrovi, Hamid
2017-01-01
Background: Considering that many nursing actions affect other people's health and lives, sensitivity to ethics in nursing practice is highly important for ethical leaders as role models. Objective: The study aims to explore ethical sensitivity in ethical nursing leaders in Iran. Method: This was a qualitative study based on conventional content analysis in 2015. Data were collected using deep, semi-structured interviews with 20 Iranian nurses. The participants were chosen using purposive sampling. Data were analyzed using conventional content analysis. In order to increase the accuracy and integrity of the data, Lincoln and Guba's criteria were considered. Results: Fourteen sub-categories and five main categories emerged. The main categories consisted of sensitivity to care, sensitivity to errors, sensitivity to communication, sensitivity in decision making and sensitivity to ethical practice. Conclusion: Ethical sensitivity appears to be a valuable attribute for ethical nurse leaders, having an important effect on various aspects of professional practice and helping the development of ethics in nursing practice. PMID:28584564
Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics
NASA Technical Reports Server (NTRS)
Baysal, Oktay; Eleshaky, Mohamed E.
1991-01-01
A new and efficient method is presented for aerodynamic design optimization, which is based on a computational fluid dynamics (CFD)-sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for an optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with the optimization procedure that uses a constrained minimization method. The sensitivity coefficients, i.e. gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute-force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analysis results for the demonstrative example are compared with experimental data. It is shown that the method is more efficient than the traditional methods.
NASA Astrophysics Data System (ADS)
Dasgupta, Sambarta
Transient stability and sensitivity analysis of power systems are problems of enormous academic and practical interest. These classical problems have received renewed interest because of the advancement in sensor technology in the form of phasor measurement units (PMUs). The advancement in sensor technology has provided a unique opportunity for the development of real-time stability monitoring and sensitivity analysis tools. The transient stability problem in power systems is inherently a problem of stability analysis of non-equilibrium dynamics, because for a short time period following a fault or disturbance the system trajectory moves away from the equilibrium point. The real-time stability decision has to be made over this short time period. However, the existing stability definitions, and hence analysis tools, for transient stability are asymptotic in nature. In this thesis, we develop theoretical foundations for the short-term transient stability analysis of power systems, based on the theory of normally hyperbolic invariant manifolds and finite-time Lyapunov exponents, adopted from the geometric theory of dynamical systems. The theory of normally hyperbolic surfaces allows us to characterize the rate of expansion and contraction of co-dimension one material surfaces in the phase space. The expansion and contraction rates of these material surfaces can be computed in finite time. We prove that the expansion and contraction rates can be used as finite-time transient stability certificates. Furthermore, material surfaces with maximum expansion and contraction rates are identified with the stability boundaries. These stability boundaries are used for computation of the stability margin. We have used the theoretical framework for the development of model-based and model-free real-time stability monitoring methods. Both the model-based and model-free approaches rely on the availability of high-resolution time series data from the PMUs for stability prediction. The problem of sensitivity analysis of power systems, subjected to changes or uncertainty in load parameters and network topology, is also studied using the theory of normally hyperbolic manifolds. The sensitivity analysis is used for the identification and rank ordering of the critical interactions and parameters in the power network. The sensitivity analysis is carried out both in finite time and asymptotically. One of the distinguishing features of the asymptotic sensitivity analysis is that the asymptotic dynamics of the system is assumed to be a periodic orbit. For the asymptotic sensitivity analysis we employ a combination of tools from ergodic theory and the geometric theory of dynamical systems.
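A minimal sketch of a finite-time Lyapunov exponent computed by integrating the variational equation alongside the state; the damped driven pendulum below is a toy stand-in for swing dynamics, not the thesis's power-system model:

```python
import numpy as np
from scipy.integrate import solve_ivp

def ftle(f, jac, x0, T):
    """Finite-time Lyapunov exponent: integrate dPhi/dt = J(x) Phi with the
    state, then take (1/T) * ln of the largest singular value of Phi(T)."""
    n = len(x0)
    def rhs(t, z):
        x, Phi = z[:n], z[n:].reshape(n, n)
        return np.concatenate([f(t, x), (jac(t, x) @ Phi).ravel()])
    z0 = np.concatenate([x0, np.eye(n).ravel()])
    sol = solve_ivp(rhs, (0.0, T), z0, rtol=1e-9, atol=1e-9)
    Phi_T = sol.y[n:, -1].reshape(n, n)
    return np.log(np.linalg.svd(Phi_T, compute_uv=False)[0]) / T

# Toy swing-equation-like oscillator (hypothetical parameters)
f = lambda t, x: np.array([x[1], -0.3 * x[1] - np.sin(x[0]) + 0.5])
jac = lambda t, x: np.array([[0.0, 1.0], [-np.cos(x[0]), -0.3]])
print(ftle(f, jac, np.array([0.5, 0.0]), T=5.0))
```

A positive FTLE over the decision window flags trajectories whose neighborhoods are stretching apart, which is the finite-time notion of instability the thesis builds its certificates on.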
Nestorov, I A; Aarons, L J; Rowland, M
1997-08-01
Sensitivity analysis studies the effects of the inherent variability and uncertainty in model parameters on the model outputs and may be a useful tool at all stages of the pharmacokinetic modeling process. The present study examined the sensitivity of a whole-body physiologically based pharmacokinetic (PBPK) model for the distribution kinetics of nine 5-n-alkyl-5-ethyl barbituric acids in arterial blood and 14 tissues (lung, liver, kidney, stomach, pancreas, spleen, gut, muscle, adipose, skin, bone, heart, brain, testes) after i.v. bolus administration to rats. The aims were to obtain new insights into the model used, to rank the model parameters involved according to their impact on the model outputs and to study the changes in the sensitivity induced by the increase in the lipophilicity of the homologues on ascending the series. Two approaches for sensitivity analysis have been implemented. The first, based on the Matrix Perturbation Theory, uses a sensitivity index defined as the normalized sensitivity of the 2-norm of the model compartmental matrix to perturbations in its entries. The second approach uses the traditional definition of the normalized sensitivity function as the relative change in a model state (a tissue concentration) corresponding to a relative change in a model parameter. Autosensitivity has been defined as sensitivity of a state to any of its parameters; cross-sensitivity as the sensitivity of a state to any other states' parameters. Using the two approaches, the sensitivity of representative tissue concentrations (lung, liver, kidney, stomach, gut, adipose, heart, and brain) to the following model parameters: tissue-to-unbound plasma partition coefficients, tissue blood flows, unbound renal and intrinsic hepatic clearance, permeability surface area product of the brain, have been analyzed. Both the tissues and the parameters were ranked according to their sensitivity and impact. The following general conclusions were drawn: (i) the overall sensitivity of the system to all parameters involved is small due to the weak connectivity of the system structure; (ii) the time course of both the auto- and cross-sensitivity functions for all tissues depends on the dynamics of the tissues themselves, e.g., the higher the perfusion of a tissue, the higher are both its cross-sensitivity to other tissues' parameters and the cross-sensitivities of other tissues to its parameters; and (iii) with a few exceptions, there is not a marked influence of the lipophilicity of the homologues on either the pattern or the values of the sensitivity functions. The estimates of the sensitivity and the subsequent tissue and parameter rankings may be extended to other drugs, sharing the same common structure of the whole body PBPK model, and having similar model parameters. Results show also that the computationally simple Matrix Perturbation Analysis should be used only when an initial idea about the sensitivity of a system is required. If comprehensive information regarding the sensitivity is needed, the numerically expensive Direct Sensitivity Analysis should be used.
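The traditional normalized sensitivity function used above has a particularly clean form for a one-compartment model, which makes a good check case; the dose, volume, and rate constant are invented values, not the barbiturate PBPK parameters:

```python
import numpy as np

def conc(t, dose, V, k):
    """One-compartment i.v. bolus model: C(t) = (dose/V) * exp(-k*t)."""
    return (dose / V) * np.exp(-k * t)

def normalized_sensitivity(fun, p, i, t, rel=1e-6):
    """S_i(t) = (dC/dp_i) * p_i / C, by central finite differences."""
    p_hi, p_lo = list(p), list(p)
    p_hi[i] *= 1 + rel
    p_lo[i] *= 1 - rel
    dC = (fun(t, *p_hi) - fun(t, *p_lo)) / (2 * rel * p[i])
    return dC * p[i] / fun(t, *p)

t = np.linspace(0.1, 10, 5)
p = [100.0, 5.0, 0.3]                          # dose, V, k (hypothetical)
print(normalized_sensitivity(conc, p, 1, t))   # w.r.t. V: exactly -1 always
print(normalized_sensitivity(conc, p, 2, t))   # w.r.t. k: equals -k*t
```

The constant -1 sensitivity to volume versus the time-growing sensitivity to the elimination rate illustrates why the time course of the sensitivity functions, emphasized in the abstract, matters as much as their magnitude.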
NASA Astrophysics Data System (ADS)
Núñez, M.; Robie, T.; Vlachos, D. G.
2017-10-01
Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
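The likelihood ratio (score function) estimator itself is simple once the simulation is adequately sampled; a sketch for an exponential waiting time whose score is known analytically, as a stand-in for KMC event statistics:

```python
import numpy as np

rng = np.random.default_rng(4)

def lr_sensitivity(f, theta, n=1_000_000):
    """Likelihood ratio estimator of d/dtheta E[f(X)] for X ~ Exp(rate=theta):
    grad = E[f(X) * dlog p(X; theta)/dtheta] = E[f(X) * (1/theta - X)]."""
    X = rng.exponential(scale=1.0 / theta, size=n)
    return np.mean(f(X) * (1.0 / theta - X))

# Toy observable: mean event time (a stand-in for a KMC output)
theta = 2.0
print(lr_sensitivity(lambda x: x, theta))
# exact: d/dtheta (1/theta) = -1/theta**2 = -0.25
```

Because the estimator reuses the samples already drawn, no perturbed reruns are needed, which is the speedup over finite differences claimed above once the stiffness problem is handled.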
Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis
Adnan, Tassha Hilda
2016-01-01
Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining sufficient sample sizes for screening and diagnostic studies. Although formulae for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, so sample size calculation may not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. These tables were derived from the formulation of the sensitivity and specificity test using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power and effect size. Approaches to using the tables are also discussed. PMID:27891446
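A sketch of the kind of calculation behind such tables, using the common Buderer-style formula; the expected sensitivity, confidence-interval half-width, and prevalence below are example inputs, not values from the paper:

```python
import math
from statistics import NormalDist

def n_for_sensitivity(se, precision, prevalence, alpha=0.05):
    """Buderer-style sample size: enough diseased subjects that the estimated
    sensitivity has CI half-width `precision`, inflated by prevalence."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    n_cases = z**2 * se * (1 - se) / precision**2
    return math.ceil(n_cases / prevalence)   # total subjects to screen

# Expected sensitivity 0.90, desired half-width 0.05, prevalence 0.20
print(n_for_sensitivity(0.90, 0.05, 0.20))   # -> 692 subjects
```

The same formula with (1 - prevalence) in the denominator gives the count needed for the specificity side, and the study total is driven by whichever is larger.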
NASA Astrophysics Data System (ADS)
Razavi, S.; Gupta, H. V.
2014-12-01
Sensitivity analysis (SA) is an important paradigm in the context of Earth System model development and application, and provides a powerful tool that serves several essential functions in modelling practice, including 1) Uncertainty Apportionment - attribution of total uncertainty to different uncertainty sources, 2) Assessment of Similarity - diagnostic testing and evaluation of similarities between the functioning of the model and the real system, 3) Factor and Model Reduction - identification of non-influential factors and/or insensitive components of model structure, and 4) Factor Interdependence - investigation of the nature and strength of interactions between the factors, and the degree to which factors intensify, cancel, or compensate for the effects of each other. A variety of sensitivity analysis approaches have been proposed, each of which formally characterizes a different "intuitive" understanding of what is meant by the "sensitivity" of one or more model responses to its dependent factors (such as model parameters or forcings). These approaches are based on different philosophies and theoretical definitions of sensitivity, and range from simple local derivatives and one-factor-at-a-time procedures to rigorous variance-based (Sobol-type) approaches. In general, each approach focuses on, and identifies, different features and properties of the model response and may therefore lead to different (even conflicting) conclusions about the underlying sensitivity. This presentation revisits the theoretical basis for sensitivity analysis, and critically evaluates existing approaches so as to demonstrate their flaws and shortcomings. With this background, we discuss several important properties of response surfaces that are associated with the understanding and interpretation of sensitivity. Finally, a new approach towards global sensitivity assessment is developed that is consistent with important properties of Earth System model response surfaces.
Antón, Alfonso; Pazos, Marta; Martín, Belén; Navero, José Manuel; Ayala, Miriam Eleonora; Castany, Marta; Martínez, Patricia; Bardavío, Javier
2013-01-01
To assess sensitivity, specificity, and agreement among automated event analysis, automated trend analysis, and expert evaluation in detecting glaucoma progression. This was a prospective study that included 37 eyes with a follow-up of 36 months. All eyes had glaucomatous discs and fields, and reliable visual fields were performed every 6 months. Each series of fields was assessed with 3 different methods: subjective assessment by 2 independent teams of glaucoma experts, Glaucoma/Guided Progression Analysis (GPA) event analysis, and GPA (visual field index-based) trend analysis. The kappa agreement coefficient between methods, and the sensitivity and specificity of each method using expert opinion as the gold standard, were calculated. The incidence of glaucoma progression was 16% to 18% in 3 years, but only 3 cases showed progression with all 3 methods. The kappa agreement coefficient was high (k = 0.82) between subjective expert assessment and GPA event analysis, and only moderate between these two and GPA trend analysis (k = 0.57). Sensitivity and specificity were 71% and 96% for GPA event analysis, and 57% and 93% for GPA trend analysis, respectively. The 3 methods detected similar numbers of progressing cases. GPA event analysis and expert subjective assessment showed high agreement between them, and moderate agreement with GPA trend analysis. Over a period of 3 years, both methods of GPA analysis offered high specificity; event analysis showed 83% sensitivity, and trend analysis 66% sensitivity.
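Cohen's kappa, the agreement statistic reported above, takes only a few lines to compute; the 2x2 table below is hypothetical, merely sized to echo a 37-eye cohort:

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix of two raters/methods:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    c = np.asarray(confusion, dtype=float)
    n = c.sum()
    po = np.trace(c) / n                   # observed agreement
    pe = (c.sum(0) @ c.sum(1)) / n**2      # agreement expected by chance
    return (po - pe) / (1 - pe)

# Hypothetical progression yes/no calls: expert vs. GPA event analysis
table = [[5, 1],
         [1, 30]]
print(round(cohens_kappa(table), 2))       # ~0.80, i.e. "high" agreement
```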
NASA Astrophysics Data System (ADS)
Siadaty, Moein; Kazazi, Mohsen
2018-04-01
Convective heat transfer, entropy generation and pressure drop of two water-based nanofluids (Cu-water and Al2O3-water) in horizontal annular tubes are scrutinized by means of computational fluid dynamics (CFD), response surface methodology (RSM) and sensitivity analysis. First, a central composite design is used to set up a series of experiments with diameter ratio, length-to-diameter ratio, Reynolds number and solid volume fraction as factors. Then, CFD is used to calculate the Nusselt number, Euler number and entropy generation. After that, RSM is applied to fit second-order polynomials to the responses. Finally, sensitivity analysis is conducted for the above-mentioned parameters inside the tube. In total, 62 different cases are examined. The CFD results show that Cu-water and Al2O3-water have the highest and lowest heat transfer rates, respectively. In addition, analysis of variance indicates that an increase in solid volume fraction increases the dimensionless pressure drop for Al2O3-water; moreover, it has a significant negative effect on the Cu-water Nusselt number and an insignificant effect on its Euler number. Analysis of the Bejan number indicates that frictional and thermal entropy generation are the dominant irreversibilities in the Al2O3-water and Cu-water flows, respectively. Sensitivity analysis indicates that the sensitivity of the dimensionless pressure drop to tube length for Cu-water is independent of its diameter ratio at different Reynolds numbers.
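Fitting the second-order polynomial metamodel that RSM uses is ordinary least squares on an expanded design matrix, and sensitivities then follow from its derivatives. A sketch on synthetic two-factor data, not the nanofluid results:

```python
import numpy as np

def fit_quadratic_rsm(X, y):
    """Least-squares fit of a full second-order polynomial in d factors:
    intercept, linear terms, pure quadratics, and two-way interactions."""
    n, d = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(d)]
    cols += [X[:, i]**2 for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i + 1, d)]
    A = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

# Synthetic designed experiment: 62 runs (matching the study's count), 2 factors
rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(62, 2))
y = 4 + 2*X[:, 0] - X[:, 1] + 0.5*X[:, 0]**2 + 1.5*X[:, 0]*X[:, 1]
print(np.round(fit_quadratic_rsm(X, y), 3))   # recovers the true coefficients
```

The sensitivity of the response to factor i at a design point is then the partial derivative of this polynomial, which is how an RSM-based sensitivity analysis reads dominance of one factor over another.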
Ethical sensitivity in professional practice: concept analysis.
Weaver, Kathryn; Morse, Janice; Mitcham, Carl
2008-06-01
This paper is a report of a concept analysis of ethical sensitivity. Ethical sensitivity enables nurses and other professionals to respond morally to the suffering and vulnerability of those receiving professional care and services. Because of its significance to nursing and other professional practices, ethical sensitivity deserves more focused analysis. A criteria-based method oriented toward pragmatic utility guided the analysis of 200 papers and books from the fields of nursing, medicine, psychology, dentistry, clinical ethics, theology, education, law, accounting or business, journalism, philosophy, political and social sciences and women's studies. This literature spanned 1970 to 2006 and was sorted by discipline and concept dimensions and examined for concept structure and use across various contexts. The analysis was completed in September 2007. Ethical sensitivity in professional practice develops in contexts of uncertainty, client suffering and vulnerability, and through relationships characterized by receptivity, responsiveness and courage on the part of professionals. Essential attributes of ethical sensitivity are identified as moral perception, affectivity and dividing loyalties. Outcomes include integrity preserving decision-making, comfort and well-being, learning and professional transcendence. Our findings promote ethical sensitivity as a type of practical wisdom that pursues client comfort and professional satisfaction with care delivery. The analysis and resulting model offers an inclusive view of ethical sensitivity that addresses some of the limitations with prior conceptualizations.
Pleshakova, Tatyana O; Malsagova, Kristina A; Kaysheva, Anna L; Kopylov, Arthur T; Tatur, Vadim Yu; Ziborov, Vadim S; Kanashenko, Sergey L; Galiullin, Rafael A; Ivanov, Yuri D
2017-08-01
We report here the highly sensitive detection of protein in solution at concentrations from 10^-15 to 10^-18 M using the combination of atomic force microscopy (AFM) and mass spectrometry. Biospecific detection of biotinylated bovine serum albumin was carried out by fishing out the protein onto the surface of AFM chips with immobilized avidin, which determined the specificity of the analysis. Electrical stimulation was applied to enhance the fishing efficiency. A high sensitivity of detection was achieved by application of nanosecond electric pulses to highly oriented pyrolytic graphite placed under the AFM chip. A peristaltic pump-based flow system, which is widely used in routine bioanalytical assays, was employed throughout the analysis. These results hold promise for the development of highly sensitive protein detection methods using nanosensor devices.
Zhong, Lin-sheng; Tang, Cheng-cai; Guo, Hua
2010-07-01
Based on the statistical data of natural ecology and social economy in Jinyintan Grassland Scenic Area in Qinghai Province in 2008, an evaluation index system for the ecological sensitivity of this area was established from the aspects of protected area rank, vegetation type, slope, and land use type. The ecological sensitivity of the sub-areas with higher tourism value and ecological function in the area was evaluated, and the tourism function zoning of these sub-areas was made by the technology of GIS and according to the analysis of eco-environmental characteristics and ecological sensitivity of each sensitive sub-area. It was suggested that the Jinyintan Grassland Scenic Area could be divided into three ecological sensitivity sub-areas (high, moderate, and low), three tourism functional sub-areas (restricted development ecotourism, moderate development ecotourism, and mass tourism), and six tourism functional sub-areas (wetland protection, primitive ecological sightseeing, agriculture and pasture tourism, grassland tourism, town tourism, and rural tourism).
Breathing dynamics based parameter sensitivity analysis of hetero-polymeric DNA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talukder, Srijeeta; Sen, Shrabani; Chaudhury, Pinaki, E-mail: pinakc@rediffmail.com
We study the parameter sensitivity of hetero-polymeric DNA within the purview of DNA breathing dynamics. The degree of correlation between the mean bubble size and the model parameters is estimated for this purpose for three different DNA sequences. The analysis leads us to a better understanding of the sequence-dependent nature of the breathing dynamics of hetero-polymeric DNA. Out of the 14 model parameters for DNA stability in the statistical Poland-Scheraga approach, the hydrogen bond interaction ε_hb(AT) for an AT base pair and the ring factor ξ turn out to be the most sensitive parameters. In addition, the stacking interaction ε_st(TA-TA) for a TA-TA nearest-neighbor pair of base pairs is found to be the most sensitive one among all stacking interactions. Moreover, we also establish that the nature of the stacking interaction has a deciding effect on the DNA breathing dynamics, not the number of times a particular stacking interaction appears in a sequence. We show that the sensitivity analysis can be used as an effective measure to guide a stochastic optimization technique to find the kinetic rate constants related to the dynamics, as opposed to the case where the rate constants are measured using the conventional unbiased way of optimization.
NASA Astrophysics Data System (ADS)
Hwang, Joonki; Lee, Sangyeop; Choo, Jaebum
2016-06-01
A novel surface-enhanced Raman scattering (SERS)-based lateral flow immunoassay (LFA) biosensor was developed to resolve problems associated with conventional LFA strips (e.g., limits in quantitative analysis and low sensitivity). In our SERS-based biosensor, Raman reporter-labeled hollow gold nanospheres (HGNs) were used as SERS detection probes instead of gold nanoparticles. With the proposed SERS-based LFA strip, the presence of a target antigen can be identified through a colour change in the test zone. Furthermore, highly sensitive quantitative evaluation is possible by measuring SERS signals from the test zone. To verify the feasibility of the SERS-based LFA strip platform, an immunoassay of staphylococcal enterotoxin B (SEB) was performed as a model reaction. The limit of detection (LOD) for SEB, as determined with the SERS-based LFA strip, was estimated to be 0.001 ng mL-1. This value is approximately three orders of magnitude more sensitive than that achieved with the corresponding ELISA-based method. The proposed SERS-based LFA strip sensor shows significant potential for the rapid and sensitive detection of target markers in a simplified manner. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr07243c
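Detection limits of this kind are commonly estimated from a calibration curve via the 3σ/slope criterion. A minimal sketch with made-up calibration numbers (not the paper's data) follows.

    import numpy as np

    # Hypothetical linear calibration: concentration (ng/mL) vs. signal (a.u.).
    conc = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    signal = np.array([2.1, 12.0, 21.8, 31.9, 41.7])
    blank_sd = 0.4  # standard deviation of repeated blank measurements

    slope, intercept = np.polyfit(conc, signal, 1)

    # 3-sigma criterion: smallest concentration whose signal is
    # distinguishable from the blank at the 3-sigma level.
    lod = 3 * blank_sd / slope
    print(f"slope = {slope:.2f} a.u. per ng/mL, LOD ≈ {lod:.3f} ng/mL")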
Sensitivity-Uncertainty Based Nuclear Criticality Safety Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
2016-09-20
These are slides from a seminar given to the University of New Mexico Nuclear Engineering Department. Whisper is a statistical analysis package developed to support nuclear criticality safety validation. It uses the sensitivity profile data for an application as computed by MCNP6, along with covariance files for the nuclear data, to determine a baseline upper subcritical limit for the application. Whisper and its associated benchmark files are developed and maintained as part of MCNP6, and will be distributed with all future releases of MCNP6. Although sensitivity-uncertainty methods for NCS validation have been under development for 20 years, continuous-energy Monte Carlo codes such as MCNP could not determine the required adjoint-weighted tallies for sensitivity profiles. The recent introduction of the iterated fission probability method into MCNP led to the rapid development of sensitivity analysis capabilities for MCNP6 and the development of Whisper. Sensitivity-uncertainty based methods represent the future for NCS validation, making full use of today's computer power to codify past approaches based largely on expert judgment. Validation results are defensible, auditable, and repeatable as needed with different assumptions and process models. The new methods can supplement, support, and extend traditional validation approaches.
Grid sensitivity for aerodynamic optimization and flow analysis
NASA Technical Reports Server (NTRS)
Sadrehaghighi, I.; Tiwari, S. N.
1993-01-01
After reviewing the relevant literature, it is apparent that one aspect of aerodynamic sensitivity analysis, namely grid sensitivity, has not been investigated extensively. The grid sensitivity algorithms in most of these studies are based on structural design models. Such models, although sufficient for preliminary or conceptual design, are not acceptable for detailed design analysis. Careless grid sensitivity evaluations would introduce gradient errors within the sensitivity module, thereby contaminating the overall optimization process. Development of an efficient and reliable grid sensitivity module, with special emphasis on aerodynamic applications, appears essential. The organization of this study is as follows. The physical and geometric representations of a typical model are derived in chapter 2. The grid generation algorithm and boundary grid distribution are developed in chapter 3. Chapter 4 discusses the theoretical formulation and the aerodynamic sensitivity equation. The method of solution is provided in chapter 5. The results are presented and discussed in chapter 6. Finally, some concluding remarks are provided in chapter 7.
Sensitivity Enhancement of FBG-Based Strain Sensor.
Li, Ruiya; Chen, Yiyang; Tan, Yuegang; Zhou, Zude; Li, Tianliang; Mao, Jian
2018-05-17
A novel fiber Bragg grating (FBG)-based strain sensor with high sensitivity is presented in this paper. The proposed FBG-based strain sensor enhances sensitivity by pasting the FBG on a substrate with a lever structure. This mechanical configuration amplifies the strain of the FBG to enhance overall sensitivity. As this mechanical configuration has a high stiffness, the proposed sensor can achieve a high resonant frequency and a wide dynamic working range. The sensing principle is presented, and the corresponding theoretical model is derived and validated. Experimental results demonstrate that the developed FBG-based strain sensor achieves an enhanced strain sensitivity of 6.2 pm/με, which is consistent with the theoretical analysis result. The strain sensitivity of the developed sensor is 5.2 times that of a bare fiber Bragg grating strain sensor. The dynamic characteristics of this sensor are investigated through the finite element method (FEM) and experimental tests. The developed sensor exhibits an excellent strain-sensitivity-enhancing property in a wide frequency range. The proposed high-sensitivity FBG-based strain sensor can be used for small-amplitude micro-strain measurement in harsh industrial environments.
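A short worked example of what the reported sensitivity implies; the wavelength shift and the 1 pm interrogator resolution below are assumed values, not from the paper.

    # Convert an FBG wavelength shift into strain using the reported
    # enhanced sensitivity (6.2 pm/microstrain) vs. a bare grating
    # (6.2 / 5.2 ≈ 1.2 pm/microstrain, per the paper's ratio).
    S_enhanced = 6.2              # pm per microstrain
    S_bare = S_enhanced / 5.2

    d_lambda = 31.0               # measured Bragg shift in pm (made-up)
    print(d_lambda / S_enhanced)  # ≈ 5.0 microstrain with the lever substrate
    print(d_lambda / S_bare)      # ≈ 26.0 microstrain implied for a bare FBG

    # With an interrogator resolving ~1 pm (assumed), the enhanced sensor
    # resolves ~0.16 microstrain versus ~0.8 microstrain for a bare grating.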
Infiltration modeling guidelines for commercial building energy analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gowri, Krishnan; Winiarski, David W.; Jarnagin, Ronald E.
This report presents a methodology for modeling air infiltration in EnergyPlus to account for envelope air barrier characteristics. Based on a review of the various infiltration modeling options available in EnergyPlus and a sensitivity analysis, the linear wind velocity coefficient based on the DOE-2 infiltration model is recommended. The methodology described in this report can be used to calculate the EnergyPlus infiltration input for any given building-level infiltration rate specified at a known pressure difference. The sensitivity analysis shows that EnergyPlus calculates the wind speed based on zone altitude, and the linear wind velocity coefficient represents the variation in infiltration heat loss consistent with building location and weather data.
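For orientation, EnergyPlus's design-flow-rate infiltration object scales a design flow by A + B|ΔT| + C·w + D·w²; a DOE-2-style configuration keeps only the linear wind term. The sketch below uses illustrative placeholder coefficients, not the report's recommended values.

    def infiltration(I_design, dT, wind, A=0.0, B=0.0, C=0.224, D=0.0):
        """EnergyPlus-style infiltration model:
        I = I_design * (A + B*|dT| + C*wind + D*wind**2).
        A DOE-2-style setup uses only the linear wind coefficient C.
        Coefficient values here are illustrative placeholders."""
        return I_design * (A + B * abs(dT) + C * wind + D * wind ** 2)

    # Example: 0.05 m3/s design rate, 10 K inside-outside temperature
    # difference, 4.5 m/s local wind speed.
    print(infiltration(0.05, 10.0, 4.5))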
Weighting-Based Sensitivity Analysis in Causal Mediation Studies
ERIC Educational Resources Information Center
Hong, Guanglei; Qin, Xu; Yang, Fan
2018-01-01
Through a sensitivity analysis, the analyst attempts to determine whether a conclusion of causal inference could be easily reversed by a plausible violation of an identification assumption. Analytic conclusions that are harder to alter by such a violation are expected to add a higher value to scientific knowledge about causality. This article…
NASA Technical Reports Server (NTRS)
Aires, Filipe; Rossow, William B.; Hansen, James E. (Technical Monitor)
2001-01-01
A new approach is presented for the analysis of feedback processes in a nonlinear dynamical system by observing its variations. The new methodology consists of statistical estimates of the sensitivities between all pairs of variables in the system based on a neural network modeling of the dynamical system. The model can then be used to estimate the instantaneous, multivariate and nonlinear sensitivities, which are shown to be essential for the analysis of the feedback processes involved in the dynamical system. The method is described and tested on synthetic data from the low-order Lorenz circulation model, where the correct sensitivities can be evaluated analytically.
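A toy version of the idea, assuming a small fixed network in place of the trained model: estimate the state-dependent input-output sensitivities ∂y/∂x_i numerically at a given state. (The actual approach differentiates a network trained on observed variations; everything below is a stand-in.)

    import numpy as np

    rng = np.random.default_rng(1)

    # A small fixed MLP standing in for the trained model of the
    # dynamical system (weights are random here, not trained).
    W1, b1 = rng.normal(size=(8, 3)), rng.normal(size=8)
    W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

    def net(x):
        return W2 @ np.tanh(W1 @ x + b1) + b2

    # Instantaneous sensitivities dy/dx_i at a given state, by central
    # finite differences; with a neural model these are state-dependent,
    # i.e., multivariate and nonlinear.
    def sensitivities(x, h=1e-5):
        grad = np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x); e[i] = h
            grad[i] = (net(x + e) - net(x - e)).item() / (2 * h)
        return grad

    print(sensitivities(np.array([0.3, -1.2, 0.7])))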
The Volatility of Data Space: Topology Oriented Sensitivity Analysis
Du, Jing; Ligmann-Zielinska, Arika
2015-01-01
Despite the difference among specific methods, existing Sensitivity Analysis (SA) technologies are all value-based, that is, the uncertainties in the model input and output are quantified as changes of values. This paradigm provides only limited insight into the nature of models and the modeled systems. In addition to the value of data, a potentially richer information about the model lies in the topological difference between pre-model data space and post-model data space. This paper introduces an innovative SA method called Topology Oriented Sensitivity Analysis, which defines sensitivity as the volatility of data space. It extends SA into a deeper level that lies in the topology of data. PMID:26368929
Probabilistic methods for sensitivity analysis and calibration in the NASA challenge problem
Safta, Cosmin; Sargsyan, Khachik; Najm, Habib N.; ...
2015-01-01
In this study, a series of algorithms are proposed to address the problems in the NASA Langley Research Center Multidisciplinary Uncertainty Quantification Challenge. A Bayesian approach is employed to characterize and calibrate the epistemic parameters based on the available data, whereas a variance-based global sensitivity analysis is used to rank the epistemic and aleatory model parameters. A nested sampling of the aleatory–epistemic space is proposed to propagate uncertainties from model parameters to output quantities of interest.
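A minimal sketch of the nested aleatory-epistemic sampling loop: an outer loop over epistemic draws, an inner loop propagating aleatory variability for each draw. The model and distributions below are placeholders, not the challenge problem.

    import numpy as np

    rng = np.random.default_rng(2)

    def model(e, a):
        # Placeholder for the quantity of interest g(epistemic, aleatory).
        return np.sin(e) + 0.5 * a ** 2

    # Outer loop: epistemic parameter sampled from its (posterior) range.
    # Inner loop: aleatory variability propagated for each epistemic draw.
    stats = []
    for _ in range(200):                      # epistemic samples
        e = rng.uniform(0.0, np.pi)
        a = rng.normal(0.0, 1.0, size=1000)   # aleatory samples
        q = model(e, a)
        stats.append((q.mean(), q.std()))

    means = np.array([m for m, _ in stats])
    # Epistemic uncertainty shows up as spread of the inner-loop statistics.
    print("QoI mean ranges over epistemic space:", means.min(), means.max())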
Mallorie, Amy; Goldring, James; Patel, Anant; Lim, Eric; Wagner, Thomas
2017-08-01
Lymph node involvement in non-small-cell lung cancer (NSCLC) is a major factor in determining management and prognosis. We aimed to evaluate the accuracy of fluorine-18-fluorodeoxyglucose-PET/computed tomography (CT) for the assessment of nodal involvement in patients with NSCLC. In this retrospective study, we included 61 patients with suspected or confirmed resectable NSCLC over a 2-year period from April 2013 to April 2015. 221 nodes with pathological staging from surgery or endobronchial ultrasound-guided transbronchial needle aspiration were assessed using a nodal station-based analysis with original clinical reports and three different cut-offs: mediastinal blood pool (MBP), liver background and half the maximal standardized uptake value of the tumour (SUVmax/2). Using nodal station-based analysis for activity more than tumour SUVmax/2, the sensitivity was 45%, the specificity was 89% and the negative predictive value (NPV) was 87%. For activity more than MBP, the sensitivity was 93%, the specificity was 72% and NPV was 98%. For activity more than liver background, the sensitivity was 83%, the specificity was 84% and NPV was 96%. Using a nodal staging-based analysis for accuracy at detecting N2/3 disease, for activity more than tumour SUVmax/2, the sensitivity was 59%, the specificity was 85% and NPV was 80%. For activity more than MBP, the sensitivity was 95%, the specificity was 61% and NPV was 96%. For activity more than liver background, the sensitivity was 86%, the specificity was 81% and NPV was 92%. Receiver-operating characteristic analysis showed the optimal nodal SUVmax cut-off to be more than 6.4, with a sensitivity of 45% and a specificity of 95%, and an area under the curve of 0.85. Activity more than MBP was the cut-off with the highest sensitivity and NPV. Activity more than primary tumour SUVmax/2 was the most specific cut-off. Nodal SUVmax more than 6.4 has a high specificity of 95%.
Adjoint Sensitivity Analysis for Scale-Resolving Turbulent Flow Solvers
NASA Astrophysics Data System (ADS)
Blonigan, Patrick; Garai, Anirban; Diosady, Laslo; Murman, Scott
2017-11-01
Adjoint-based sensitivity analysis methods are powerful design tools for engineers who use computational fluid dynamics. In recent years, these engineers have started to use scale-resolving simulations like large-eddy simulations (LES) and direct numerical simulations (DNS), which resolve more scales in complex flows with unsteady separation and jets than the widely-used Reynolds-averaged Navier-Stokes (RANS) methods. However, the conventional adjoint method computes large, unusable sensitivities for scale-resolving simulations, which unlike RANS simulations exhibit the chaotic dynamics inherent in turbulent flows. Sensitivity analysis based on least-squares shadowing (LSS) avoids the issues encountered by conventional adjoint methods, but has a high computational cost even for relatively small simulations. The following talk discusses a more computationally efficient formulation of LSS, "non-intrusive" LSS, and its application to turbulent flows simulated with a discontinuous-Galerkin spectral-element-method LES/DNS solver. Results are presented for the minimal flow unit, a turbulent channel flow with a limited streamwise and spanwise domain.
Sensitivity analysis of a sound absorption model with correlated inputs
NASA Astrophysics Data System (ADS)
Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.
2017-04-01
Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at the microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distribution of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. Finally, the test results show that correlation has a very important impact on the results of the sensitivity analysis. The effect of correlation strength among input variables on the sensitivity analysis is also assessed.
Yu, Hye-Weon; Jang, Am; Kim, Lan Hee; Kim, Sung-Jo; Kim, In S
2011-09-15
Due to the increased occurrence of cyanobacterial blooms and their toxins in drinking water sources, effective management based on a sensitive and rapid analytical method is in high demand to secure safe water sources and protect environmental and human health. Here, a competitive fluorescence immunoassay of microcystin-LR (MCYST-LR) is developed in an attempt to improve the sensitivity, analysis time, and ease of manipulation of the analysis. To this end, a bead-based suspension assay was introduced, based on two major sensing elements: an antibody-conjugated quantum dot (QD) detection probe and an antigen-immobilized magnetic bead (MB) competitor. The assay comprises three steps: the competitive immunological reaction of QD detection probes against analytes and MB competitors, magnetic separation and washing, and the optical signal generation of the QDs. The fluorescence intensity was found to be inversely proportional to the MCYST-LR concentration. Under optimized conditions, the proposed assay performed well for the identification and quantitative analysis of MCYST-LR (within 30 min, in the range of 0.42-25 μg/L, with a limit of detection of 0.03 μg/L). It is thus expected that this enhanced assay can contribute both to the sensitive and rapid diagnosis of cyanotoxin risk in drinking water and to effective management procedures.
Bennett, Katrina Eleanor; Urrego Blanco, Jorge Rolando; Jonko, Alexandra; ...
2017-11-20
The Colorado River basin is a fundamentally important river for society, ecology and energy in the United States. Streamflow estimates are often provided using modeling tools which rely on uncertain parameters; sensitivity analysis can help determine which parameters impact model results. Despite the fact that simulated flows respond to changing climate and vegetation in the basin, parameter sensitivity of the simulations under climate change has rarely been considered. In this study, we conduct a global sensitivity analysis to relate changes in runoff, evapotranspiration, snow water equivalent and soil moisture to model parameters in the Variable Infiltration Capacity (VIC) hydrologic model. Here, we combine global sensitivity analysis with a space-filling Latin Hypercube sampling of the model parameter space and statistical emulation of the VIC model to examine sensitivities to uncertainties in 46 model parameters following a variance-based approach.
Analysis of the sensitivity properties of a model of vector-borne bubonic plague.
Buzby, Megan; Neckels, David; Antolin, Michael F; Estep, Donald
2008-09-06
Model sensitivity is a key to evaluation of mathematical models in ecology and evolution, especially in complex models with numerous parameters. In this paper, we use some recently developed methods for sensitivity analysis to study the parameter sensitivity of a model of vector-borne bubonic plague in a rodent population proposed by Keeling & Gilligan. The new sensitivity tools are based on a variational analysis involving the adjoint equation. The new approach provides a relatively inexpensive way to obtain derivative information about model output with respect to parameters. We use this approach to determine the sensitivity of a quantity of interest (the force of infection from rats and their fleas to humans) to various model parameters, determine a region over which linearization at a specific parameter reference point is valid, develop a global picture of the output surface, and search for maxima and minima in a given region in the parameter space.
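The adjoint approach the authors use can be illustrated on a much simpler system than the plague model. The sketch below (our illustration, under the stated assumptions) computes dJ/da for dx/dt = -a·x with J = ∫₀ᵀ x dt, using the closed-form forward and adjoint solutions, and checks the result against the analytic derivative.

    import numpy as np

    # Adjoint sensitivity on a scalar ODE: dx/dt = -a*x, J = ∫_0^T x dt.
    # The adjoint runs backward in time; dJ/da then costs one extra solve
    # regardless of the number of parameters.
    a, x0, T, n = 0.8, 1.0, 5.0, 20000
    t = np.linspace(0.0, T, n + 1)

    def trapz(y, x):
        return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

    x = x0 * np.exp(-a * t)                  # forward solution
    lam = (1.0 - np.exp(-a * (T - t))) / a   # adjoint: dλ/dt = a·λ - 1, λ(T)=0

    dJ_da_adjoint = trapz(lam * (-x), t)     # ∫ λ · ∂f/∂a dt, with ∂f/∂a = -x
    dJ_da_exact = x0 * (a * T * np.exp(-a * T) - (1 - np.exp(-a * T))) / a**2
    print(dJ_da_adjoint, dJ_da_exact)        # agree to discretization error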
SVDS plume impingement modeling development. Sensitivity analysis supporting level B requirements
NASA Technical Reports Server (NTRS)
Chiu, P. B.; Pearson, D. J.; Muhm, P. M.; Schoonmaker, P. B.; Radar, R. J.
1977-01-01
A series of sensitivity analyses (trade studies) performed to select features and capabilities to be implemented in the plume impingement model is described. Sensitivity analyses were performed in study areas pertaining to geometry, flowfield, impingement, and dynamical effects. Recommendations based on these analyses are summarized.
Childs, Paul; Wong, Allan C L; Fu, H Y; Liao, Yanbiao; Tam, Hwayaw; Lu, Chao; Wai, P K A
2010-12-20
We measured the hydrostatic pressure dependence of the birefringence and birefringent dispersion of a Sagnac interferometric sensor incorporating a length of highly birefringent photonic crystal fiber using Fourier analysis. Sensitivity of both the phase and chirp spectra to hydrostatic pressure is demonstrated. Using this analysis, phase-based measurements showed a good linearity with an effective sensitivity of 9.45 nm/MPa and an accuracy of ±7.8 kPa using wavelength-encoded data, and an effective sensitivity of -55.7 cm^-1/MPa and an accuracy of ±4.4 kPa using wavenumber-encoded data. Chirp-based measurements, though nonlinear in response, showed an improvement in accuracy at certain pressure ranges, with an accuracy of ±5.5 kPa for the full range of measured pressures using wavelength-encoded data and dropping to within ±2.5 kPa in the range of 0.17 to 0.4 MPa using wavenumber-encoded data. Improvements of the accuracy demonstrated the usefulness of implementing chirp-based analysis for sensing purposes.
Surrogate-based Analysis and Optimization
NASA Technical Reports Server (NTRS)
Queipo, Nestor V.; Haftka, Raphael T.; Shyy, Wei; Goel, Tushar; Vaidyanathan, Raj; Tucker, P. Kevin
2005-01-01
A major challenge to the successful full-scale development of modern aerospace systems is to address competing objectives such as improved performance, reduced costs, and enhanced safety. Accurate, high-fidelity models are typically time consuming and computationally expensive. Furthermore, informed decisions should be made with an understanding of the impact (global sensitivity) of the design variables on the different objectives. In this context, the so-called surrogate-based approach for analysis and optimization can play a very valuable role. The surrogates are constructed using data drawn from high-fidelity models, and provide fast approximations of the objectives and constraints at new design points, thereby making sensitivity and optimization studies feasible. This paper provides a comprehensive discussion of the fundamental issues that arise in surrogate-based analysis and optimization (SBAO), highlighting concepts, methods, techniques, as well as practical implications. The issues addressed include the selection of the loss function and regularization criteria for constructing the surrogates, design of experiments, surrogate selection and construction, sensitivity analysis, convergence, and optimization. The multi-objective optimal design of a liquid rocket injector is presented to highlight the state of the art and to help guide future efforts.
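A minimal end-to-end example of the surrogate-based workflow: sample an "expensive" function, fit a cheap surrogate, optimize the surrogate, and verify the candidate on the true model. The function and the polynomial surrogate are placeholders for the paper's more general loss/regularization choices.

    import numpy as np

    # "Expensive" model (stand-in for a high-fidelity simulation).
    def expensive(x):
        return (x - 0.3) ** 2 + 0.1 * np.sin(8 * x)

    # Design of experiments: a handful of samples of the expensive model.
    x_doe = np.linspace(0.0, 1.0, 7)
    y_doe = expensive(x_doe)

    # Surrogate: least-squares cubic polynomial fit to the samples.
    surrogate = np.poly1d(np.polyfit(x_doe, y_doe, 3))

    # Cheap optimization on the surrogate (dense evaluation here for
    # simplicity; a real study would use a proper optimizer and refine
    # the DOE near the optimum).
    x_grid = np.linspace(0.0, 1.0, 2001)
    x_star = x_grid[np.argmin(surrogate(x_grid))]
    print(x_star, expensive(x_star))  # candidate verified on the true model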
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zambon, Ilaria, E-mail: ilaria.zambon@unitus.it; Colantoni, Andrea; Carlucci, Margherita
Land Degradation (LD) in socio-environmental systems negatively impacts sustainable development paths. This study proposes a framework for LD evaluation based on indicators of diversification in the spatial distribution of sensitive land. We hypothesize that conditions of spatial heterogeneity in a composite index of land sensitivity are more frequently associated with areas prone to LD than spatial homogeneity. Spatial heterogeneity is supposed to be associated with degraded areas that act as hotspots for future degradation processes. A diachronic analysis (1960–2010) was performed at the Italian agricultural district scale to identify environmental factors associated with spatial heterogeneity in the degree of land sensitivity to degradation, based on the Environmentally Sensitive Area Index (ESAI). In 1960, diversification in the level of land sensitivity, measured using two common indexes of entropy (Shannon's diversity and Pielou's evenness), increased significantly with the ESAI, indicating a high level of land sensitivity to degradation. In 2010, the surface area classified as "critical" to LD was the highest in districts with diversification in the spatial distribution of ESAI values, confirming the hypothesis formulated above. Entropy indexes, based on the observed alignment with the concept of LD, constitute a valuable base to inform mitigation strategies against desertification. - Highlights: • Spatial heterogeneity is supposed to be associated with degraded areas. • Entropy indexes can inform mitigation strategies against desertification. • Assessing spatial diversification in the degree of land sensitivity to degradation. • Mediterranean rural areas have an evident diversity in agricultural systems. • A diachronic analysis carried out at the Italian agricultural district scale.
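The two entropy indexes are straightforward to compute from class shares. A minimal sketch with hypothetical ESAI class proportions (the class labels are assumptions for illustration):

    import numpy as np

    def shannon_and_pielou(proportions):
        """Shannon diversity H = -Σ p_i ln p_i and Pielou evenness
        J = H / ln(S), with S the number of classes present."""
        p = np.asarray(proportions, dtype=float)
        p = p[p > 0]
        p = p / p.sum()
        H = -np.sum(p * np.log(p))
        return H, H / np.log(p.size)

    # Hypothetical shares of a district's area in four ESAI sensitivity
    # classes (non-affected, potential, fragile, critical).
    print(shannon_and_pielou([0.10, 0.25, 0.40, 0.25]))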
Performance analysis of higher mode spoof surface plasmon polariton for terahertz sensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Haizi; Tu, Wanli; Zhong, Shuncong, E-mail: zhongshuncong@hotmail.com
2015-04-07
We investigated spoof surface plasmon polaritons (SSPPs) on a 1D grooved metal surface for terahertz sensing of the refractive index of the filling analyte through a prism-coupling attenuated total reflection setup. From the dispersion relation analysis and a finite element method-based simulation, we revealed that the dispersion curve of the SSPP is suppressed as the filling refractive index increases, which causes the coupling resonance frequency to redshift in the reflection spectrum. The simulated results for various refractive indexes demonstrated that the incident angle of the terahertz radiation has a great effect on the sensing performance. A smaller incident angle results in more sensitive sensing with a narrower detection range. Likewise, the higher-order-mode SSPP-based sensing has a higher sensitivity with a narrower detection range. The maximum sensitivity is 2.57 THz/RIU for the second-order mode sensing at a 45° internal incident angle. The proposed SSPP-based method has great potential for highly sensitive terahertz sensing.
Sensitivity Analysis of Hydraulic Head to Locations of Model Boundaries
Lu, Zhiming
2018-01-30
Sensitivity analysis is an important component of many modeling activities in hydrology. Numerous studies have been conducted to calculate various sensitivities. Most of these sensitivity analyses focus on the sensitivity of state variables (e.g., hydraulic head) to parameters representing medium properties, such as hydraulic conductivity, or to prescribed values, such as constant head or flux at boundaries, while few studies address the sensitivity of the state variables to shape parameters or design parameters that control the model domain. Instead, these shape parameters are typically assumed to be known in the model. In this study, based on the flow equation, we derive the equation (and its associated initial and boundary conditions) for the sensitivity of hydraulic head to shape parameters using the continuous sensitivity equation (CSE) approach. These sensitivity equations can be solved numerically in general or analytically in some simplified cases. Finally, the approach is demonstrated through two examples, and the results compare favorably to those from analytical solutions or numerical finite difference methods with perturbed model domains, while the numerical shortcomings of the finite difference method are avoided.
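The idea can be checked on a one-dimensional analogue. For steady flow between fixed heads, h(x) = h0 + (hL - h0)·x/L, the sensitivity of head at a fixed point to the location L of the right boundary is available in closed form and matches a perturbed-domain finite difference. This is a minimal sketch, not the paper's CSE solver:

    # 1-D steady confined flow between fixed heads: h(x) = h0 + (hL-h0)*x/L.
    # Sensitivity of head at a fixed observation point x to the location L
    # of the right boundary: analytic vs. perturbed-domain finite difference.
    h0, hL, L, x = 10.0, 4.0, 100.0, 40.0

    def head(x, L):
        return h0 + (hL - h0) * x / L

    dh_dL_analytic = -(hL - h0) * x / L ** 2

    dL = 1e-4
    dh_dL_fd = (head(x, L + dL) - head(x, L - dL)) / (2 * dL)

    print(dh_dL_analytic, dh_dL_fd)   # both ≈ +0.024 here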
Futamure, Sumire; Bonnet, Vincent; Dumas, Raphael; Venture, Gentiane
2017-11-07
This paper presents a method allowing a simple and efficient sensitivity analysis of the dynamic parameters of a complex whole-body human model. The proposed method is based on the ground reaction and joint moment regressor matrices, developed initially in robotics system identification theory, which appear in the equations of motion of the human body. The regressor matrices are linear in the segment inertial parameters, allowing the use of simple sensitivity analysis methods. The sensitivity analysis method was applied to gait dynamics and kinematics data of nine subjects, using a 15-segment 3D model of the locomotor apparatus. According to the proposed sensitivity indices, 76 of the 150 segment inertial parameters of the mechanical model were considered not influential for gait. The main findings were that the segment masses were influential and that, with the exception of the trunk, moments of inertia were not influential for the computation of the ground reaction forces and moments and the joint moments. The same method also shows numerically that at least 90% of the lower-limb joint moments during the stance phase can be estimated from force-plate and kinematics data alone, without knowing any of the segment inertial parameters.
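The key property is linearity in the inertial parameters: with τ = Y(q, q̇, q̈)·φ, one simple sensitivity index (our assumption for illustration, not necessarily the paper's exact index) is each parameter's relative contribution ‖Y[:,j]·φ_j‖ / ‖τ‖. A sketch with a random stand-in regressor follows; a real Y comes from the equations of motion.

    import numpy as np

    rng = np.random.default_rng(3)

    # Stand-in regressor over one gait cycle: n samples x p inertial
    # parameters (a real Y is built from the equations of motion).
    n, p = 500, 10
    Y = rng.normal(size=(n, p))
    phi = rng.normal(size=p)        # segment inertial parameters
    tau = Y @ phi                   # joint moments / ground reactions

    # Sensitivity index: relative contribution of each parameter's column.
    s = np.array([np.linalg.norm(Y[:, j] * phi[j]) for j in range(p)])
    s /= np.linalg.norm(tau)
    print(np.argsort(s)[::-1])      # parameters ranked by influence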
An adjoint method of sensitivity analysis for residual vibrations of structures subject to impacts
NASA Astrophysics Data System (ADS)
Yan, Kun; Cheng, Gengdong
2018-03-01
For structures subject to impact loads, residual vibration reduction is more and more important as machines become faster and lighter. An efficient sensitivity analysis of residual vibration with respect to structural or operational parameters is indispensable for using a gradient-based optimization algorithm, which reduces the residual vibration in either an active or a passive way. In this paper, an integrated quadratic performance index is used as the measure of the residual vibration, since it globally measures the residual vibration response and its calculation can be simplified greatly with the Lyapunov equation. Several sensitivity analysis approaches for the performance index were developed based on the assumption that the initial excitations of the residual vibration were given and independent of the structural design. Since the excitations induced by an impact load often depend on the structural design, this paper proposes a new efficient sensitivity analysis method for the residual vibration of structures subject to impacts that accounts for this dependence. The new method is developed by combining two existing methods and using the adjoint variable approach. Three numerical examples are carried out and demonstrate the accuracy of the proposed method. The numerical results show that the dependence of the initial excitations on the structural design variables may strongly affect the accuracy of the sensitivities.
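The Lyapunov-equation simplification mentioned above applies to stable linear systems: J = ∫₀^∞ xᵀQx dt equals x₀ᵀPx₀, where AᵀP + PA = -Q. A minimal numerical check with an arbitrary stable system (our illustration, not the paper's adjoint method itself):

    import numpy as np
    from scipy.linalg import solve_lyapunov, expm

    A = np.array([[0.0, 1.0], [-4.0, -0.8]])   # stable oscillator
    Q = np.eye(2)
    x0 = np.array([1.0, 0.0])

    # A^T P + P A = -Q  =>  J = ∫ x'Qx dt = x0' P x0
    P = solve_lyapunov(A.T, -Q)
    J_lyap = x0 @ P @ x0

    # Brute-force check by stepping x(t) with the exact propagator expm(A*dt).
    dt, T = 1e-3, 40.0
    Phi = expm(A * dt)
    x, J_num = x0.copy(), 0.0
    for _ in range(int(T / dt)):
        J_num += float(x @ Q @ x) * dt
        x = Phi @ x
    print(J_lyap, J_num)   # should agree to a few digits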
Global sensitivity analysis in stochastic simulators of uncertain reaction networks.
Navarro Jimenez, M; Le Maître, O P; Knio, O M
2016-12-28
Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability of the first statistical moments of model predictions with respect to the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of system.
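For the parametric part, Sobol's variance decomposition can be estimated with a standard pick-freeze sampler. The sketch below applies a Saltelli-style first-order estimator to the Ishigami test function; it is our illustration of the general technique, while the paper's estimator additionally partitions variance over stochastic reaction channels.

    import numpy as np

    rng = np.random.default_rng(4)

    def ishigami(x):
        # Standard test function with known Sobol indices.
        return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2 \
               + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

    n, d = 100_000, 3
    A = rng.uniform(-np.pi, np.pi, size=(n, d))
    B = rng.uniform(-np.pi, np.pi, size=(n, d))
    fA, fB = ishigami(A), ishigami(B)
    var = np.var(np.concatenate([fA, fB]))

    # First-order index S_i (pick-freeze estimator): replace column i of A
    # with column i of B and correlate.
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        S_i = np.mean(fB * (ishigami(ABi) - fA)) / var
        print(f"S_{i+1} ≈ {S_i:.3f}")   # exact: 0.314, 0.442, 0.0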
NASA Astrophysics Data System (ADS)
Wang, Qiqi; Rigas, Georgios; Esclapez, Lucas; Magri, Luca; Blonigan, Patrick
2016-11-01
Bluff body flows are of fundamental importance to many engineering applications involving massive flow separation, and in particular to the transport industry. Coherent flow structures emanating in the wake of three-dimensional bluff bodies, such as cars, trucks and lorries, are directly linked to increased aerodynamic drag, noise and structural fatigue. For low-Reynolds-number laminar and transitional regimes, hydrodynamic stability theory has aided the understanding and prediction of the unstable dynamics. In the same framework, sensitivity analysis provides the means for efficient and optimal control, provided the unstable modes can be accurately predicted. However, these methodologies are limited to laminar regimes where only a few unstable modes manifest. Here we extend the stability analysis to low-dimensional chaotic regimes by computing the Lyapunov covariant vectors and their associated Lyapunov exponents. We compare them to eigenvectors and eigenvalues computed in traditional hydrodynamic stability analysis. Computing Lyapunov covariant vectors and Lyapunov exponents also enables the extension of sensitivity analysis to chaotic flows via the shadowing method. We compare the computed shadowing sensitivities to traditional sensitivity analysis. These Lyapunov-based methodologies do not rely on mean flow assumptions, and are mathematically rigorous for calculating sensitivities of fully unsteady flow simulations.
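The leading Lyapunov exponent underpinning such analyses can be estimated with the classic Benettin two-trajectory method. A minimal sketch on the Lorenz system, standing in for a chaotic flow (our illustration, not the authors' solver):

    import numpy as np

    def lorenz(x, s=10.0, r=28.0, b=8.0 / 3.0):
        return np.array([s * (x[1] - x[0]),
                         x[0] * (r - x[2]) - x[1],
                         x[0] * x[1] - b * x[2]])

    dt, n_steps, renorm, d0 = 1e-3, 100_000, 10, 1e-8

    def rk4(x):
        k1 = lorenz(x); k2 = lorenz(x + 0.5 * dt * k1)
        k3 = lorenz(x + 0.5 * dt * k2); k4 = lorenz(x + dt * k3)
        return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

    # Benettin method: evolve a nearby trajectory, periodically rescale
    # the separation back to d0, and average the accumulated log growth.
    x = np.array([1.0, 1.0, 1.0])
    y = x + d0 * np.array([1.0, 0.0, 0.0])
    log_growth = 0.0
    for i in range(1, n_steps + 1):
        x, y = rk4(x), rk4(y)
        if i % renorm == 0:
            d = np.linalg.norm(y - x)
            log_growth += np.log(d / d0)
            y = x + (y - x) * (d0 / d)

    print(log_growth / (n_steps * dt))   # ≈ 0.9 for standard parameters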
NASA Technical Reports Server (NTRS)
McGhee, David S.; Peck, Jeff A.; McDonald, Emmett J.
2012-01-01
This paper examines Probabilistic Sensitivity Analysis (PSA) methods and tools in an effort to understand their utility in vehicle loads and dynamic analysis. Specifically, this study addresses how these methods may be used to establish limits on payload mass and cg location and requirements on adaptor stiffnesses while maintaining vehicle loads and frequencies within established bounds. To this end, PSA methods and tools are applied to a realistic, but manageable, integrated launch vehicle analysis where payload and payload adaptor parameters are modeled as random variables. This analysis is used to study both Regional Response PSA (RRPSA) and Global Response PSA (GRPSA) methods, with a primary focus on sampling-based techniques. For contrast, some MPP-based approaches are also examined.
A sensitive procedure is described for trace analysis of hydrogen peroxide in water. The process involves the peroxide-catalyzed oxidation of the leuco forms of two dyes, crystal violet and malachite green. The sensitivity of this procedure, as well as of another procedure based ...
Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas
2014-03-01
GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster-Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA) implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility is analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty-sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights.
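As a sketch of the AHP weighting step: criteria weights are the normalized principal eigenvector of a pairwise comparison matrix, checked with Saaty's consistency ratio. The factors and judgments below are hypothetical.

    import numpy as np

    # Pairwise comparison matrix for three hypothetical landslide factors
    # (slope, lithology, land cover) on Saaty's 1-9 scale.
    M = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    # AHP criteria weights: principal right eigenvector, normalized.
    vals, vecs = np.linalg.eig(M)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()

    # Consistency ratio CR = CI / RI, with CI = (λmax - n)/(n - 1) and
    # RI the random index (0.58 for n = 3); CR < 0.1 is acceptable.
    n = M.shape[0]
    CI = (vals[k].real - n) / (n - 1)
    print("weights:", np.round(w, 3), " CR:", round(CI / 0.58, 3))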
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanova, T.; Laville, C.; Dyrda, J.
2012-07-01
The sensitivities of the k_eff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.
Gradient-Based Aerodynamic Shape Optimization Using ADI Method for Large-Scale Problems
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Baysal, Oktay
1997-01-01
A gradient-based shape optimization methodology, intended for practical three-dimensional aerodynamic applications, has been developed. It is based on quasi-analytical sensitivities. The flow analysis is rendered by a fully implicit, finite volume formulation of the Euler equations. The aerodynamic sensitivity equation is solved using the alternating-direction-implicit (ADI) algorithm for memory efficiency. A flexible wing geometry model, based on surface parameterization and planform schedules, is utilized. The present methodology and its components have been tested via several comparisons. Initially, the flow analysis for a wing is compared with those obtained using an unfactored, preconditioned conjugate gradient approach (PCG), and an extensively validated CFD code. Then, the sensitivities computed with the present method have been compared with those obtained using the finite-difference and the PCG approaches. Effects of grid refinement and convergence tolerance on the analysis and shape optimization have been explored. Finally, the new procedure has been demonstrated in the design of a cranked arrow wing at Mach 2.4. Despite the expected increase in computational time, the results indicate that shape optimization problems, which require large numbers of grid points, can be resolved with a gradient-based approach.
Jin, H; Yuan, L; Li, C; Kan, Y; Hao, R; Yang, J
2014-03-01
The purpose of this study was to systematically review and perform a meta-analysis of published data regarding the diagnostic performance of positron emission tomography (PET) or PET/computed tomography (PET/CT) in prosthetic infection after arthroplasty. A comprehensive computer literature search of studies published through May 31, 2012 regarding PET or PET/CT in patients with suspected prosthetic infection was performed in the PubMed/MEDLINE, Embase and Scopus databases. Pooled sensitivity and specificity of PET or PET/CT in patients with suspected prosthetic infection were calculated on a per-prosthesis basis. The area under the receiver-operating characteristic (ROC) curve was calculated to measure the accuracy of PET or PET/CT in patients with suspected prosthetic infection. Fourteen studies comprising 838 prostheses with suspected prosthetic infection after arthroplasty were included in this meta-analysis. The pooled sensitivity of PET or PET/CT in detecting prosthetic infection was 86% (95% confidence interval [CI] 82-90%) on a per-prosthesis-based analysis. The pooled specificity of PET or PET/CT in detecting prosthetic infection was 86% (95% CI 83-89%) on a per-prosthesis-based analysis. The area under the ROC curve was 0.93 on a per-prosthesis-based analysis. In patients with suspected prosthetic infection, FDG PET or PET/CT demonstrated high sensitivity and specificity and are accurate methods in this setting. Nevertheless, possible sources of false-positive results and influencing factors should be kept in mind.
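As a minimal illustration of per-prosthesis pooling (the counts below are hypothetical, not the study data, and a full meta-analysis would use a bivariate random-effects model):

    import numpy as np

    # Hypothetical per-study 2x2 counts: (TP, FN, TN, FP) per prosthesis.
    studies = np.array([
        [30,  5, 60, 10],
        [22,  4, 85, 12],
        [41,  6, 95, 15],
    ])
    TP, FN, TN, FP = studies.sum(axis=0)

    # Simple fixed pooling of sensitivity and specificity across studies.
    sens = TP / (TP + FN)
    spec = TN / (TN + FP)
    print(f"pooled sensitivity ≈ {sens:.2f}, pooled specificity ≈ {spec:.2f}")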
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1992-01-01
Fundamental equations of aerodynamic sensitivity analysis and approximate analysis for the two dimensional thin layer Navier-Stokes equations are reviewed, and special boundary condition considerations necessary to apply these equations to isolated lifting airfoils on 'C' and 'O' meshes are discussed in detail. An efficient strategy which is based on the finite element method and an elastic membrane representation of the computational domain is successfully tested, which circumvents the costly 'brute force' method of obtaining grid sensitivity derivatives, and is also useful in mesh regeneration. The issue of turbulence modeling is addressed in a preliminary study. Aerodynamic shape sensitivity derivatives are efficiently calculated, and their accuracy is validated on two viscous test problems, including: (1) internal flow through a double throat nozzle, and (2) external flow over a NACA 4-digit airfoil. An automated aerodynamic design optimization strategy is outlined which includes the use of a design optimization program, an aerodynamic flow analysis code, an aerodynamic sensitivity and approximate analysis code, and a mesh regeneration and grid sensitivity analysis code. Application of the optimization methodology to the two test problems in each case resulted in a new design having a significantly improved performance in the aerodynamic response of interest.
Sensitivity of wildlife habitat models to uncertainties in GIS data
NASA Technical Reports Server (NTRS)
Stoms, David M.; Davis, Frank W.; Cogan, Christopher B.
1992-01-01
Decision makers need to know the reliability of output products from GIS analysis. For many GIS applications, it is not possible to compare these products to an independent measure of 'truth'. Sensitivity analysis offers an alternative means of estimating reliability. In this paper, we present a GIS-based statistical procedure for estimating the sensitivity of wildlife habitat models to uncertainties in input data and model assumptions. The approach is demonstrated in an analysis of habitat associations derived from a GIS database for the endangered California condor. Alternative data sets were generated to compare results over a reasonable range of assumptions about several sources of uncertainty. Sensitivity analysis indicated that condor habitat associations are relatively robust, and the results have increased our confidence in our initial findings. Uncertainties and methods described in the paper have general relevance for many GIS applications.
Pujol-Vila, F; Vigués, N; Díaz-González, M; Muñoz-Berbel, X; Mas, J
2015-05-15
Global urban and industrial growth, with the associated environmental contamination, is promoting the development of rapid and inexpensive general toxicity methods. Current microbial methodologies for general toxicity determination rely on either bioluminescent bacteria and a specific medium solution (i.e., Microtox(®)) or low-sensitivity, diffusion-limited protocols (i.e., amperometric microbial respirometry). In this work, a fast and sensitive optical toxicity bioassay based on dual-wavelength analysis of bacterial ferricyanide reduction kinetics is presented, using Escherichia coli as a bacterial model. Ferricyanide reduction kinetic analysis (variation of ferricyanide absorption with time), much more sensitive than single absorbance measurements, allowed for direct and fast toxicity determination without pre-incubation steps (assay time = 10 min) and minimized biomass interference. Dual-wavelength analysis at 405 nm (ferricyanide and biomass) and 550 nm (biomass) allowed for ferricyanide monitoring without interference from biomass scattering. On the other hand, refractive index (RI) matching with saccharose reduced bacterial light scattering by around 50%, expanding the analytical linear range in the determination of absorbent molecules. With this method, different toxicants such as metals and organic compounds were analyzed with good sensitivities. Half maximal effective concentrations (EC50) obtained after the 10 min bioassay, 2.9, 1.0, 0.7 and 18.3 mg L^-1 for copper, zinc, acetic acid and 2-phenylethanol respectively, were in agreement with previously reported values for longer bioassays (around 60 min). This method represents a promising alternative for fast and sensitive water toxicity monitoring, opening the possibility of quick in situ analysis.
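EC50 values like those reported are typically obtained by fitting a sigmoidal dose-response curve. A minimal sketch with made-up data and a Hill model (not the paper's fitting procedure):

    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical dose-response data: toxicant concentration (mg/L) vs.
    # normalized inhibition of the ferricyanide reduction rate (0-1).
    conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
    inhib = np.array([0.04, 0.12, 0.35, 0.62, 0.85, 0.96])

    def hill(c, ec50, h):
        return c ** h / (ec50 ** h + c ** h)

    (ec50, h), _ = curve_fit(hill, conc, inhib, p0=[1.0, 1.0])
    print(f"EC50 ≈ {ec50:.2f} mg/L, Hill slope ≈ {h:.2f}")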
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estep, Donald
2015-11-30
This project addressed the challenge of predictive computational analysis of strongly coupled, highly nonlinear multiphysics systems characterized by multiple physical phenomena that span a large range of length- and time-scales. Specifically, the project was focused on computational estimation of numerical error and sensitivity analysis of computational solutions with respect to variations in parameters and data. In addition, the project investigated the use of accurate computational estimates to guide efficient adaptive discretization. The project developed, analyzed and evaluated new variational adjoint-based techniques for integration, model, and data error estimation/control and sensitivity analysis, in evolutionary multiphysics multiscale simulations.
Hoffmann, Max J.; Engelmann, Felix; Matera, Sebastian
2017-01-31
Lattice kinetic Monte Carlo simulations have become a vital tool for predictive quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for atomic level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past the application of sensitivity analysis, such as Degree of Rate Control, has been hampered by its exuberant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. Here in this study we present an efficient and robust three stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models as we demonstrate using CO oxidation on RuO2(110) as a prototypical reaction. In a first step, we utilize the Fisher Information Matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on linear response theory for calculating the sensitivity measure for non-critical conditions which covers the majority of cases. Finally we adopt a method for sampling coupled finite differences for evaluating the sensitivity measure of lattice based models. This allows efficient evaluation even in critical regions near a second order phase transition that are hitherto difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano scale design of heterogeneous catalysts.
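What the Degree of Rate Control measures can be seen on a toy steady-state cycle (our analogue, not the paper's kinetic Monte Carlo machinery): X_i = ∂ln(TOF)/∂ln(k_i), estimated here by central differences in log space; the X_i sum to one for this model.

    import numpy as np

    # Toy three-step surface cycle: adsorption, desorption, reaction.
    # Steady-state coverage: theta = k_ads / (k_ads + k_des + k_rxn),
    # turnover frequency: TOF = k_rxn * theta.
    def tof(k):
        k_ads, k_des, k_rxn = k
        theta = k_ads / (k_ads + k_des + k_rxn)
        return k_rxn * theta

    # Degree of rate control X_i = (k_i / TOF) * dTOF/dk_i, estimated by
    # central finite differences in log space.
    k0 = np.array([1.0, 0.2, 0.05])
    for i, name in enumerate(["ads", "des", "rxn"]):
        kp, km = k0.copy(), k0.copy()
        kp[i] *= 1.01; km[i] /= 1.01
        X = (np.log(tof(kp)) - np.log(tof(km))) / (2 * np.log(1.01))
        print(f"X_{name} ≈ {X:+.3f}")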
Analysis of DNA methylation in Arabidopsis thaliana based on methylation-sensitive AFLP markers.
Cervera, M T; Ruiz-García, L; Martínez-Zapater, J M
2002-12-01
AFLP analysis using restriction enzyme isoschizomers that differ in their sensitivity to methylation of their recognition sites has been used to analyse the methylation state of anonymous CCGG sequences in Arabidopsis thaliana. The technique was modified to improve the quality of fingerprints and to visualise larger numbers of scorable fragments. Sequencing of amplified fragments indicated that detection was generally associated with non-methylation of the cytosine to which the isoschizomer is sensitive. Comparison of EcoRI/HpaII and EcoRI/MspI patterns in different ecotypes revealed that 35-43% of CCGG sites were differentially digested by the isoschizomers. Interestingly, the pattern of digestion among different plants belonging to the same ecotype is highly conserved, with the rate of intra-ecotype methylation-sensitive polymorphisms being less than 1%. However, pairwise comparisons of methylation patterns between samples belonging to different ecotypes revealed differences in up to 34% of the methylation-sensitive polymorphisms. The lack of correlation between inter-ecotype similarity matrices based on methylation-insensitive or methylation-sensitive polymorphisms suggests that whatever the mechanisms regulating methylation may be, they are not related to nucleotide sequence variation.
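For readers unfamiliar with MSAP-style scoring, the usual interpretation of band presence/absence across the two isoschizomer digests can be summarized in a few lines. This is a generic sketch of the standard scoring convention, not code from the study:

```python
def msap_state(hpaii_band: bool, mspi_band: bool) -> str:
    """Classify the methylation state of a CCGG site from the presence or
    absence of the corresponding fragment in the EcoRI/HpaII and EcoRI/MspI
    fingerprints (standard MSAP scoring; the absent/absent pattern is
    ambiguous because fragment absence may also reflect sequence change)."""
    if hpaii_band and mspi_band:
        return "unmethylated CCGG"
    if not hpaii_band and mspi_band:
        return "internal cytosine methylation (C5mCGG)"
    if hpaii_band and not mspi_band:
        return "hemimethylated external cytosine (5mCCGG)"
    return "full methylation or site absent (uninformative)"
```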
Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.
2016-01-01
An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
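The cost argument can be made concrete with a linear model problem. The sketch below is a minimal illustration, not anything from FUN3D: it solves a steady residual R(u, p) = A(p)u − b = 0, obtains dJ/dp from one adjoint solve, and checks the result against finite differences. With many design parameters, the adjoint still needs only the one transposed solve.

```python
import numpy as np

# Toy steady "flow" problem: R(u, p) = A(p) u - b = 0 with output J(u) = c^T u.
n = 5
rng = np.random.default_rng(1)
A0 = rng.normal(size=(n, n)) + n * np.eye(n)  # well-conditioned base operator
B = rng.normal(size=(n, n))                   # dA/dp for one scalar parameter p
b = rng.normal(size=n)
c = rng.normal(size=n)

def solve_state(p):
    return np.linalg.solve(A0 + p * B, b)

p = 0.3
u = solve_state(p)

# Adjoint solve: (dR/du)^T lam = dJ/du, then dJ/dp = -lam^T (dR/dp).
lam = np.linalg.solve((A0 + p * B).T, c)
dJdp_adjoint = -lam @ (B @ u)   # dR/dp = B u for this linear model

# Finite-difference check (would cost one extra solve per parameter).
eps = 1e-6
dJdp_fd = (c @ solve_state(p + eps) - c @ solve_state(p - eps)) / (2 * eps)
print(dJdp_adjoint, dJdp_fd)    # should agree to ~1e-6
```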
A sensitivity analysis for a thermomechanical model of the Antarctic ice sheet and ice shelves
NASA Astrophysics Data System (ADS)
Baratelli, F.; Castellani, G.; Vassena, C.; Giudici, M.
2012-04-01
The outcomes of an ice sheet model depend on a number of parameters and physical quantities which are often estimated with large uncertainty, because of the lack of sufficient experimental measurements in such remote environments. Therefore, the efforts to improve the accuracy of the predictions of ice sheet models by including more physical processes and interactions with atmosphere, hydrosphere and lithosphere can be affected by the inaccuracy of the fundamental input data. A sensitivity analysis can help to identify the input data that most affect the different predictions of the model. In this context, a finite difference thermomechanical ice sheet model based on the Shallow-Ice Approximation (SIA) and on the Shallow-Shelf Approximation (SSA) has been developed and applied for the simulation of the evolution of the Antarctic ice sheet and ice shelves for the last 200 000 years. The sensitivity analysis of the model outcomes (e.g., the volume of the ice sheet and of the ice shelves, the basal melt rate of the ice sheet, the mean velocity of the Ross and Ronne-Filchner ice shelves, the wet area at the base of the ice sheet) with respect to the model parameters (e.g., the basal sliding coefficient, the geothermal heat flux, the present-day surface accumulation and temperature, the mean ice shelves viscosity, the melt rate at the base of the ice shelves) has been performed by computing three synthetic numerical indices: two local sensitivity indices and a global sensitivity index. Local sensitivity indices imply a linearization of the model and neglect both non-linear and joint effects of the parameters. The global variance-based sensitivity index, instead, takes into account the complete variability of the input parameters but is usually computed with a Monte Carlo approach, which is computationally very demanding for non-linear complex models. Therefore, the global sensitivity index has been computed using a development of the model outputs in a neighborhood of the reference parameter values with a second-order approximation. The comparison of the three sensitivity indices proved that the approximation of the non-linear model with a second-order expansion is sufficient to show some differences between the local and the global indices. As a general result, the sensitivity analysis showed that most of the model outcomes are mainly sensitive to the present-day surface temperature and accumulation, which, in principle, can be measured more easily (e.g., with remote sensing techniques) than the other input parameters considered. On the other hand, the parameters to which the model proved least sensitive are the basal sliding coefficient and the mean ice shelves viscosity.
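The second-order-expansion idea can be sketched generically. Assuming independent Gaussian inputs (an assumption of this illustration, not necessarily of the paper), finite-difference estimates of the gradient g and Hessian H give both local linearized indices and a global variance estimate, since for Gaussian inputs Var[f] ≈ Σᵢ (gᵢσᵢ)² + ½ Σᵢⱼ (Hᵢⱼσᵢσⱼ)²:

```python
import numpy as np

def local_and_global_indices(f, mu, sigma, h=1e-4):
    """Local derivative-based indices and a global variance estimate from a
    second-order Taylor expansion of f around mu, assuming independent
    Gaussian inputs with standard deviations sigma."""
    n = len(mu)
    g = np.zeros(n)
    H = np.zeros((n, n))
    f0 = f(mu)
    for i in range(n):
        e = np.zeros(n); e[i] = h
        g[i] = (f(mu + e) - f(mu - e)) / (2 * h)
        H[i, i] = (f(mu + e) - 2 * f0 + f(mu - e)) / h**2
    for i in range(n):
        for j in range(i + 1, n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = H[j, i] = (f(mu + ei + ej) - f(mu + ei - ej)
                                 - f(mu - ei + ej) + f(mu - ei - ej)) / (4 * h**2)
    local = np.abs(g) * sigma  # linearized (local) indices
    # Per-input variance shares; each quadratic cross term is attributed
    # half to each of the two inputs involved, so the rows sum to Var[f].
    var = (g * sigma) ** 2 + 0.5 * np.sum(
        (H * np.outer(sigma, sigma)) ** 2, axis=1)
    return local, var / var.sum()

f = lambda x: x[0] ** 2 + 0.5 * x[0] * x[1] + np.sin(x[2])  # toy response
local, global_share = local_and_global_indices(
    f, np.array([1.0, 2.0, 0.5]), np.array([0.1, 0.3, 0.2]))
print(local, global_share)
```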
Amiryousefi, Mohammad Reza; Mohebbi, Mohebbat; Khodaiyan, Faramarz
2014-01-01
The objectives of this study were to use image analysis and artificial neural networks (ANN) to predict mass transfer kinetics as well as color changes and shrinkage of deep-fat fried ostrich meat cubes. Two generalized feedforward networks were separately developed using the operating conditions as inputs. Results, based on the high correlation coefficients between experimental and predicted values, showed proper fitting. Sensitivity analysis of the selected ANNs showed that, among the input variables, moisture content (MC) and fat content (FC) were most sensitive to frying temperature. Similarly, for the second ANN architecture, microwave power density was the most influential variable, having the maximum influence on both shrinkage percentage and color changes.
NASA Technical Reports Server (NTRS)
Rais-Rohani, Masoud
2001-01-01
This report describes the preliminary results of an investigation on component reliability analysis and reliability-based design optimization of thin-walled circular composite cylinders with average diameter and average length of 15 inches. Structural reliability is based on axial buckling strength of the cylinder. Both Monte Carlo simulation and First Order Reliability Method are considered for reliability analysis with the latter incorporated into the reliability-based structural optimization problem. To improve the efficiency of reliability sensitivity analysis and design optimization solution, the buckling strength of the cylinder is estimated using a second-order response surface model. The sensitivity of the reliability index with respect to the mean and standard deviation of each random variable is calculated and compared. The reliability index is found to be extremely sensitive to the applied load and elastic modulus of the material in the fiber direction. The cylinder diameter was found to have the third highest impact on the reliability index. Also the uncertainty in the applied load, captured by examining different values for its coefficient of variation, is found to have a large influence on cylinder reliability. The optimization problem for minimum weight is solved subject to a design constraint on element reliability index. The methodology, solution procedure and optimization results are included in this report.
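A minimal Monte Carlo version of this workflow might look as follows; the response surface, means, and coefficients of variation below are illustrative placeholders, not the report's fitted model or data. Common random numbers keep the finite-difference sensitivities of the reliability index reasonably smooth:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
z = rng.normal(size=(200_000, 2))  # common random numbers for smooth FD

def strength(x):
    # Hypothetical quadratic response surface for axial buckling strength
    # (an illustrative stand-in for the report's second-order model).
    E1, t = x[:, 0], x[:, 1]
    return 1.2e-3 * E1 * t**2 + 5.0 * t

def beta_index(mu, cov, load):
    x = mu + z * (cov * np.abs(mu))   # independent normal inputs
    pf = np.mean(strength(x) < load)  # failure: strength < applied load
    return -stats.norm.ppf(pf)        # generalized reliability index

mu = np.array([20e6, 0.1])    # illustrative means: E1 [psi], thickness [in]
cov = np.array([0.05, 0.03])  # coefficients of variation
load = 200.0                  # illustrative applied load

print("beta =", beta_index(mu, cov, load))
# Sensitivity of the reliability index to each mean, by finite differences
# (Monte Carlo noise remains; larger samples sharpen the estimates).
for i, name in enumerate(["E1", "t"]):
    d = np.zeros(2); d[i] = 0.01 * mu[i]
    s = (beta_index(mu + d, cov, load) - beta_index(mu - d, cov, load)) / (2 * d[i])
    print(f"d(beta)/d(mu_{name}) ~ {s:.3g}")
```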
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin
2015-04-01
Earth and Environmental Systems (EES) models are essential components of research, development, and decision-making in science and engineering disciplines. With continuous advances in understanding and computing power, such models are becoming more complex with increasingly more factors to be specified (model parameters, forcings, boundary conditions, etc.). To facilitate better understanding of the role and importance of different factors in producing the model responses, the procedure known as 'Sensitivity Analysis' (SA) can be very helpful. Despite the availability of a large body of literature on the development and application of various SA approaches, two issues continue to pose major challenges: (1) Ambiguous Definition of Sensitivity - Different SA methods are based in different philosophies and theoretical definitions of sensitivity, and can result in different, even conflicting, assessments of the underlying sensitivities for a given problem, (2) Computational Cost - The cost of carrying out SA can be large, even excessive, for high-dimensional problems and/or computationally intensive models. In this presentation, we propose a new approach to sensitivity analysis that addresses the dual aspects of 'effectiveness' and 'efficiency'. By effective, we mean achieving an assessment that is both meaningful and clearly reflective of the objective of the analysis (the first challenge above), while by efficiency we mean achieving statistically robust results with minimal computational cost (the second challenge above). Based on this approach, we develop a 'global' sensitivity analysis framework that efficiently generates a newly-defined set of sensitivity indices that characterize a range of important properties of metric 'response surfaces' encountered when performing SA on EES models. Further, we show how this framework embraces, and is consistent with, a spectrum of different concepts regarding 'sensitivity', and that commonly-used SA approaches (e.g., Sobol, Morris, etc.) are actually limiting cases of our approach under specific conditions. Multiple case studies are used to demonstrate the value of the new framework. The results show that the new framework provides a fundamental understanding of the underlying sensitivities for any given problem, while requiring orders of magnitude fewer model runs.
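As one example of the limiting cases mentioned above, the Morris elementary-effects method can be sketched in a few lines (a generic illustration, not the authors' framework): mu* ranks overall importance, while sigma flags nonlinearity or interaction.

```python
import numpy as np

rng = np.random.default_rng(3)

def morris_ee(f, n_dims, n_traj=50, delta=0.2):
    """Elementary-effects (Morris) screening on the unit hypercube."""
    ee = np.zeros((n_traj, n_dims))
    for t in range(n_traj):
        x = rng.uniform(0, 1 - delta, size=n_dims)  # trajectory start point
        y = f(x)
        for i in rng.permutation(n_dims):           # one-at-a-time steps
            x_new = x.copy()
            x_new[i] += delta
            y_new = f(x_new)
            ee[t, i] = (y_new - y) / delta
            x, y = x_new, y_new
    return np.abs(ee).mean(axis=0), ee.std(axis=0)  # mu*, sigma

f = lambda x: x[0] + 2 * x[1] ** 2 + x[2] * x[3]    # toy model response
mu_star, sigma = morris_ee(f, 4)
print(mu_star, sigma)
```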
Sensitivity Analysis of BIOME-BGC Model for Dry Tropical Forests of Vindhyan Highlands, India
NASA Astrophysics Data System (ADS)
Kumar, M.; Raghubanshi, A. S.
2011-08-01
A process-based model, BIOME-BGC, was run for sensitivity analysis to assess the effect of ecophysiological parameters on the net primary production (NPP) of dry tropical forests in India. The sensitivity test reveals that forest NPP was highly sensitive to the following ecophysiological parameters: canopy light extinction coefficient (k), canopy average specific leaf area (SLA), new stem C : new leaf C (SC:LC), maximum stomatal conductance (gs,max), C:N of fine roots (C:Nfr), all-sided to projected leaf area ratio, and canopy water interception coefficient (Wint). Therefore, these parameters need more precision and attention during estimation and observation in field studies.
Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav
2015-01-01
Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close “neighborhood” of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa. PMID:26327290
Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas
2014-01-01
GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster–Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchy Process (AHP) and Ordered Weighted Averaging (OWA) implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility is analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty–sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights. PMID:25843987
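The middle phase, propagating weight uncertainty by Monte Carlo, can be sketched generically. In the illustration below (our construction, not the paper's code), weights are resampled from a Dirichlet distribution centered on the AHP weights so they remain non-negative and sum to one, and the per-cell spread of the weighted overlay measures susceptibility uncertainty:

```python
import numpy as np

rng = np.random.default_rng(4)

def weight_sensitivity(criteria, w_ahp, n_sim=1000, conc=200.0):
    """Monte Carlo uncertainty analysis of an MCDA weighted overlay.
    criteria: (n_cells, n_criteria) array of standardized criterion layers;
    w_ahp: AHP weight vector; conc: Dirichlet concentration (spread) --
    both conc and the Dirichlet choice are illustrative assumptions.
    Returns the mean susceptibility and its per-cell standard deviation."""
    scores = np.empty((n_sim, criteria.shape[0]))
    for k in range(n_sim):
        w = rng.dirichlet(conc * w_ahp)  # perturbed weights on the simplex
        scores[k] = criteria @ w         # weighted linear combination
    return scores.mean(axis=0), scores.std(axis=0)

# Toy example: 6 cells, 3 standardized criteria (e.g., slope, lithology, land use)
criteria = rng.uniform(0, 1, size=(6, 3))
mean_susc, unc = weight_sensitivity(criteria, np.array([0.5, 0.3, 0.2]))
print(mean_susc, unc)
```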
Global Sensitivity Analysis for Process Identification under Model Uncertainty
NASA Astrophysics Data System (ADS)
Ye, M.; Dai, H.; Walker, A. P.; Shi, L.; Yang, J.
2015-12-01
The environmental system consists of various physical, chemical, and biological processes, and environmental models are always built to simulate these processes and their interactions. For model building, improvement, and validation, it is necessary to identify important processes so that limited resources can be used to better characterize them. While global sensitivity analysis has been widely used to identify important processes, the process identification is always based on deterministic process conceptualization that uses a single model for representing a process. However, environmental systems are complex, and it often happens that a single process may be simulated by multiple alternative models. Ignoring the model uncertainty in process identification may bias the results, in that processes identified as important may not be so in the real world. This study addresses this problem by developing a new method of global sensitivity analysis for process identification. The new method is based on the concept of Sobol sensitivity analysis and model averaging. Similar to the Sobol sensitivity analysis used to identify important parameters, our new method evaluates the variance change when a process is fixed at its different conceptualizations. The variance considers both parametric and model uncertainty using the method of model averaging. The method is demonstrated using a synthetic study of groundwater modeling that considers a recharge process and a parameterization process, each with two alternative models. Important processes of groundwater flow and transport are evaluated using our new method. The method is mathematically general, and can be applied to a wide range of environmental problems.
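The core computation, the variance across alternative conceptualizations of the conditional mean output, can be sketched on a toy two-process problem. Everything below (model forms, parameter distributions, uniform model probabilities) is an illustrative assumption, not the paper's synthetic study:

```python
import numpy as np

rng = np.random.default_rng(5)

def output(recharge_model, param_model, theta):
    """Toy head response; theta is an (n, 2) array of uncertain parameters.
    Each process has two alternative (made-up) conceptual models."""
    r = theta[:, 0] if recharge_model == 0 else 0.5 * theta[:, 0] ** 2
    k = np.exp(theta[:, 1]) if param_model == 0 else 1.0 + theta[:, 1] ** 2
    return r / k

def process_sensitivity(n_mc=20_000):
    # Parametric uncertainty integrated by Monte Carlo within each
    # model combination, giving E[output | model combination].
    theta = rng.normal([1.0, 0.2], [0.2, 0.1], size=(n_mc, 2))
    cond_mean = np.array([[output(i, j, theta).mean() for j in (0, 1)]
                          for i in (0, 1)])
    # Process sensitivity = variance, across its alternative models, of the
    # conditional mean (model probabilities taken uniform here).
    s_recharge = cond_mean.mean(axis=1).var()
    s_param = cond_mean.mean(axis=0).var()
    return s_recharge, s_param

print(process_sensitivity())
```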
Chwialkowska, Karolina; Korotko, Urszula; Kosinska, Joanna; Szarejko, Iwona; Kwasniewski, Miroslaw
2017-01-01
Epigenetic mechanisms, including histone modifications and DNA methylation, mutually regulate chromatin structure, maintain genome integrity, and affect gene expression and transposon mobility. Variations in DNA methylation within plant populations, as well as methylation in response to internal and external factors, are of increasing interest, especially in the crop research field. Methylation Sensitive Amplification Polymorphism (MSAP) is one of the most commonly used methods for assessing DNA methylation changes in plants. This method involves gel-based visualization of PCR fragments from selectively amplified DNA that are cleaved using methylation-sensitive restriction enzymes. In this study, we developed and validated a new method based on the conventional MSAP approach called Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq). We improved the MSAP-based approach by replacing the conventional separation of amplicons on polyacrylamide gels with direct, high-throughput sequencing using Next Generation Sequencing (NGS) and automated data analysis. MSAP-Seq allows for global sequence-based identification of changes in DNA methylation. This technique was validated in Hordeum vulgare. However, MSAP-Seq can be straightforwardly implemented in different plant species, including crops with large, complex and highly repetitive genomes. The incorporation of high-throughput sequencing into MSAP-Seq enables parallel and direct analysis of DNA methylation in hundreds of thousands of sites across the genome. MSAP-Seq provides direct genomic localization of changes and enables quantitative evaluation. We have shown that the MSAP-Seq method specifically targets gene-containing regions and that a single analysis can cover three-quarters of all genes in large genomes. Moreover, MSAP-Seq's simplicity, cost effectiveness, and high-multiplexing capability make this method highly affordable. Therefore, MSAP-Seq can be used for DNA methylation analysis in crop plants with large and complex genomes. PMID:29250096
A fiber-optic water flow sensor based on laser-heated silicon Fabry-Pérot cavity
NASA Astrophysics Data System (ADS)
Liu, Guigen; Sheng, Qiwen; Resende Lisboa Piassetta, Geraldo; Hou, Weilin; Han, Ming
2016-05-01
A hot-wire fiber-optic water flow sensor based on a laser-heated silicon Fabry-Pérot interferometer (FPI) is proposed and demonstrated in this paper. The operation of the sensor is based on the convective heat loss to water from a heated silicon FPI attached to the cleaved end face of a single-mode fiber. The flow-induced change in temperature is demodulated from the spectral shifts of the reflection fringes. An analytical model based on FPI theory and heat transfer analysis has been developed for performance analysis. Numerical simulations based on finite element analysis have been conducted. The analytical and numerical results agree with each other in predicting the behavior of the sensor. Experiments have also been carried out to demonstrate the sensing principle and verify the theoretical analysis. Investigations suggest that the sensitivity at low flow rates is much larger than at high flow rates, and that the sensitivity can be easily improved by increasing the heating laser power. Experimental results show that an average sensitivity of 52.4 nm/(m/s) for the flow speed range of 1.5 mm/s to 12 mm/s was obtained with a heating power of ~12 mW, suggesting a resolution of ~1 μm/s assuming a wavelength resolution of 0.05 pm.
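The quoted resolution follows directly from the sensitivity and the wavelength resolution, as this short check shows:

```python
sensitivity = 52.4e-9 / 1.0        # 52.4 nm per (m/s), from the experiments
wavelength_resolution = 0.05e-12   # 0.05 pm interrogator resolution
flow_resolution = wavelength_resolution / sensitivity
print(f"{flow_resolution * 1e6:.2f} um/s")  # ~0.95 um/s, i.e. ~1 um/s
```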
NASA Astrophysics Data System (ADS)
Kalinkina, M. E.; Kozlov, A. S.; Labkovskaia, R. I.; Pirozhnikova, O. I.; Tkalich, V. L.; Shmakov, N. A.
2018-05-01
The object of this research is the element base of control and automation system devices, including annular elastic sensitive elements, methods for their modeling, calculation algorithms, and software complexes for automating their design processes. The article is devoted to the development of a computer-aided design system for elastic sensitive elements used in weight- and force-measuring automation devices. Based on mathematical modeling of deformation processes in a solid, as well as the results of static and dynamic analysis, the calculation of elastic elements is presented using the capabilities of modern numerical-simulation software systems. In the simulations, the model was meshed with hexahedral finite elements with a maximum size not exceeding 2.5 mm. The results of the modal and dynamic analyses are presented in this article.
NASA Astrophysics Data System (ADS)
Kang, Dong-Keun; Kim, Chang-Wan; Yang, Hyun-Ik
2017-01-01
In the present study we carried out a dynamic analysis of a CNT-based mass sensor by using a finite element method (FEM)-based nonlinear analysis model of the CNT resonator to elucidate the combined influence of thermal effects and nonlinear oscillation behavior on the overall mass detection sensitivity. Mass sensors using carbon nanotube (CNT) resonators provide very high sensing performance. Because CNT-based resonators can have high aspect ratios, they can easily exhibit nonlinear oscillation behavior due to large displacements. Also, CNT-based devices may experience high temperatures during their manufacture and operation. These geometrical nonlinearities and temperature changes affect the sensing performance of CNT-based mass sensors. However, previous literature addressing the detection sensitivity of CNT-based mass sensors while considering both these nonlinear behaviors and thermal effects is very scarce. We modeled the nonlinear equation of motion by using the von Karman nonlinear strain-displacement relation, taking into account the additional axial force associated with the thermal effect. The FEM was employed to solve the nonlinear equation of motion because it can effortlessly handle the more complex geometries and boundary conditions. A doubly clamped CNT resonator actuated by distributed electrostatic force was the configuration subjected to the numerical experiments. Thermal effects upon the fundamental resonance behavior and the shift of resonance frequency due to attached mass, i.e., the mass detection sensitivity, were examined in environments of both high and low (or room) temperature. The fundamental resonance frequency increased with decreasing temperature in the high temperature environment, and increased with increasing temperature in the low temperature environment. The magnitude of the shift in resonance frequency caused by an attached mass represents the sensing performance of a mass sensor, i.e., its mass detection sensitivity, and it can be seen that this shift is affected by the temperature change and the amount of electrostatic force. The thermal effects on the mass detection sensitivity are intensified in the linear oscillation regime and increase with increasing CNT length; this intensification can either improve or worsen the detection sensitivity.
Wu, Yiping; Liu, Shuguang; Huang, Zhihong; Yan, Wende
2014-01-01
Ecosystem models are useful tools for understanding ecological processes and for sustainable management of resources. In the biogeochemical field, numerical models have been widely used for investigating carbon dynamics under global changes from site to regional and global scales. However, it is still challenging to optimize parameters and estimate parameterization uncertainty for complex process-based models such as the Erosion Deposition Carbon Model (EDCM), a modified version of CENTURY, that consider carbon, water, and nutrient cycles of ecosystems. This study was designed to conduct parameter identifiability, optimization, sensitivity, and uncertainty analyses of EDCM using our EDCM-Auto tool, which incorporates a comprehensive R package, the Flexible Modeling Framework (FME), and the Shuffled Complex Evolution (SCE) algorithm. Using a forest flux tower site as a case study, we implemented a comprehensive modeling analysis involving nine parameters and four target variables (carbon and water fluxes) with their corresponding measurements based on the eddy covariance technique. The local sensitivity analysis shows that the plant production-related parameters (e.g., PPDF1 and PRDX) are most sensitive to the model cost function. Both SCE and FME are comparable and performed well in deriving the optimal parameter set with satisfactory simulations of target variables. Global sensitivity and uncertainty analysis indicate that the parameter uncertainty and the resulting output uncertainty can be quantified, and that the magnitude of parameter-uncertainty effects depends on variables and seasons. This study also demonstrates that using cutting-edge R functions such as those in FME can be feasible and attractive for conducting comprehensive parameter analysis for ecosystem modeling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Heng; Chen, Xingyuan; Ye, Ming
Sensitivity analysis is an important tool for quantifying uncertainty in the outputs of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a hierarchical sensitivity analysis method that (1) constructs an uncertainty hierarchy by analyzing the input uncertainty sources, and (2) accounts for the spatial correlation among parameters at each level of the hierarchy using geostatistical tools. The contribution of the uncertainty source at each hierarchy level is measured by sensitivity indices calculated using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and the permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally, as driven by the dynamic interaction between groundwater and river water at the site. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed parameters.
NASA Astrophysics Data System (ADS)
Ye, M.; Chen, Z.; Shi, L.; Zhu, Y.; Yang, J.
2017-12-01
Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. While global sensitivity analysis is a vital tool for identifying the parameters important to nitrogen reactive transport, conventional global sensitivity analysis only considers parametric uncertainty. This may result in inaccurate selection of important parameters, because parameter importance may vary under different models and modeling scenarios. By using a recently developed variance-based global sensitivity analysis method, this paper identifies important parameters with simultaneous consideration of parametric uncertainty, model uncertainty, and scenario uncertainty. In a numerical example of nitrogen reactive transport modeling, a combination of three scenarios of soil temperature and two scenarios of soil moisture leads to a total of six scenarios. Four alternative models are used to evaluate reduction functions used for calculating actual rates of nitrification and denitrification. The model uncertainty is tangled with scenario uncertainty, as the reduction functions depend on soil temperature and moisture content. The results of sensitivity analysis show that parameter importance varies substantially between different models and modeling scenarios, which may lead to inaccurate selection of important parameters if model and scenario uncertainties are not considered. This problem is avoided by using the new method of sensitivity analysis in the context of model averaging and scenario averaging. The new method of sensitivity analysis can be applied to other problems of contaminant transport modeling when model uncertainty and/or scenario uncertainty are present.
Zhang, Xiao-Chao; Wei, Zhen-Wei; Gong, Xiao-Yun; Si, Xing-Yu; Zhao, Yao-Yao; Yang, Cheng-Dui; Zhang, Si-Chun; Zhang, Xin-Rong
2016-04-29
Integrating droplet-based microfluidics with mass spectrometry is essential for high-throughput, multiplexed analysis of single cells. Nevertheless, matrix effects such as the interference of culture medium and intracellular components influence the sensitivity and the accuracy of results in single-cell analysis. To resolve this problem, we developed a method that integrates droplet-based microextraction with single-cell mass spectrometry. A specific extraction solvent was used to selectively obtain the intracellular components of interest and remove the interference of other components. Using this method, UDP-GlcNAc, GSH, GSSG, AMP, ADP and ATP were successfully detected in single MCF-7 cells. We also applied the method to study the change of unicellular metabolites in the biological process of dysfunctional oxidative phosphorylation. The method not only realizes matrix-free, selective and sensitive detection of metabolites in single cells, but also supports reliable and high-throughput single-cell analysis.
Automatic network coupling analysis for dynamical systems based on detailed kinetic models.
Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich
2005-10-01
We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.
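The mode-counting step can be illustrated generically: build a local sensitivity matrix by finite differences, take its SVD, and count singular values above a tolerance. The sketch below applies this to a Michaelis-Menten system; it is a schematic stand-in for the paper's error-controlled algorithm, with parameter values chosen arbitrarily.

```python
import numpy as np
from scipy.integrate import solve_ivp

def michaelis_menten(t, x, k1, km1, k2):
    # E + S <-> ES -> E + P, with state x = [S, E, ES, P]
    s, e, es, p = x
    v1 = k1 * e * s - km1 * es
    v2 = k2 * es
    return [-v1, -v1 + v2, v1 - v2, v2]

def state_at(params, t_eval, x0=(1.0, 0.1, 0.0, 0.0)):
    sol = solve_ivp(michaelis_menten, (0, t_eval), x0, args=tuple(params),
                    rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]

def active_modes(params, t_eval, rel_tol=1e-3):
    """Finite-difference sensitivity matrix S ~ dx(t)/d(log k), then SVD;
    the number of singular values above rel_tol * s_max estimates the
    locally active dynamical dimension."""
    params = np.asarray(params, float)
    base = state_at(params, t_eval)
    S = np.empty((len(base), len(params)))
    for j in range(len(params)):
        dp = params.copy()
        dp[j] *= 1.01  # 1% relative perturbation of each rate constant
        S[:, j] = (state_at(dp, t_eval) - base) / 0.01
    sv = np.linalg.svd(S, compute_uv=False)
    return sv, int(np.sum(sv > rel_tol * sv[0]))

sv, dim = active_modes([10.0, 1.0, 0.5], t_eval=5.0)
print("singular values:", sv, "active modes:", dim)
```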
Benchmark On Sensitivity Calculation (Phase III)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanova, Tatiana; Laville, Cedric; Dyrda, James
2012-01-01
The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.
Sensitivity-Based Guided Model Calibration
NASA Astrophysics Data System (ADS)
Semnani, M.; Asadzadeh, M.
2017-12-01
A common practice in the automatic calibration of hydrologic models is to apply sensitivity analysis prior to global optimization, reducing the number of decision variables (DVs) by identifying the most sensitive ones. This two-stage process aims to improve optimization efficiency. However, parameter sensitivity information can also be used to enhance the ability of optimization algorithms to find good-quality solutions in fewer solution evaluations. This improvement can be achieved by focusing the optimization on sampling the most sensitive parameters in each iteration. In this study, the selection process of the dynamically dimensioned search (DDS) optimization algorithm is enhanced by utilizing a sensitivity analysis method to put more emphasis on the most sensitive decision variables for perturbation. The performance of DDS with sensitivity information is compared to the original version of DDS for different mathematical test functions and a model calibration case study. Overall, the results show that DDS with sensitivity information finds nearly the same solutions as the original DDS, but in significantly fewer solution evaluations.
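A minimal sketch of the modified selection step is given below; the sensitivity-weighting rule shown is one plausible reading of the approach, not the authors' exact scheme, and the test function and sensitivity indices are made up.

```python
import numpy as np

rng = np.random.default_rng(7)

def dds_sensitivity(obj, lo, hi, sens, max_iter=500, r=0.2):
    """Greedy DDS search in which the probability of perturbing each decision
    variable is scaled by a pre-computed sensitivity index `sens`."""
    x_best = rng.uniform(lo, hi)
    f_best = obj(x_best)
    w = sens / sens.sum()  # normalized sensitivity weights
    for i in range(1, max_iter + 1):
        p = 1.0 - np.log(i) / np.log(max_iter)          # standard DDS schedule
        mask = rng.random(len(lo)) < p * w * len(lo)    # weighted selection
        if not mask.any():
            mask[rng.choice(len(lo), p=w)] = True       # perturb at least one DV
        x = x_best.copy()
        x[mask] += rng.normal(0, r * (hi - lo)[mask])   # neighborhood step
        x = np.clip(x, lo, hi)                          # simple bound handling
        fx = obj(x)
        if fx < f_best:
            x_best, f_best = x, fx
    return x_best, f_best

# Toy calibration surrogate: strongly sensitive to the first two variables
obj = lambda x: 10 * x[0] ** 2 + 5 * x[1] ** 2 + 0.1 * np.sum(x[2:] ** 2)
lo, hi = -np.ones(10), np.ones(10)
sens = np.array([10, 5] + [0.1] * 8, float)
print(dds_sensitivity(obj, lo, hi, sens))
```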
Novel Image Encryption Scheme Based on Chebyshev Polynomial and Duffing Map
2014-01-01
We present a novel image encryption algorithm using a Chebyshev polynomial based permutation and substitution and a Duffing map based substitution. Comprehensive security analysis has been performed on the designed scheme using key space analysis, visual testing, histogram analysis, information entropy calculation, correlation coefficient analysis, differential analysis, key sensitivity testing, and speed testing. The study demonstrates that the proposed image encryption algorithm offers a key space of more than 10^113 and a desirable level of security based on the good statistical results and theoretical arguments. PMID:25143970
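The key-sensitivity property rests on the chaotic Chebyshev map x_{n+1} = cos(k * arccos(x_n)) on [-1, 1]. The sketch below is a generic illustration, not the paper's scheme: it shows how a ~1e-14 change in the initial condition quickly decorrelates the keystream that would drive the permutation stage.

```python
import numpy as np

def chebyshev_sequence(x0, degree, n):
    """Iterate the chaotic Chebyshev map x_{n+1} = cos(degree * arccos(x_n));
    such sequences are typically used to derive pixel permutations."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = np.cos(degree * np.arccos(x[i - 1]))
    return x

# Key sensitivity: a tiny change in the initial condition (part of the key)
# makes the two sequences diverge after a few iterations.
a = chebyshev_sequence(0.30000000000000, 4, 30)
b = chebyshev_sequence(0.30000000000001, 4, 30)
print(np.abs(a - b))  # divergence grows rapidly

# A permutation of 256 pixel positions derived by ranking the keystream
perm = np.argsort(chebyshev_sequence(0.3, 4, 256))
print(perm[:10])
```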
Fujarewicz, Krzysztof; Lakomiec, Krzysztof
2016-12-01
We investigate a spatial model of tumor growth and its sensitivity to radiotherapy. It is assumed that the radiation dose may vary in time and space, as in intensity-modulated radiotherapy (IMRT). The change of the final state of the tumor depends on local differences in the radiation dose and varies with the time and the place of these local changes. This leads to the concept of a tumor's spatiotemporal sensitivity to radiation, which is a function of time and space. We show how adjoint sensitivity analysis may be applied to calculate the spatiotemporal sensitivity of the finite difference scheme resulting from the partial differential equation describing the tumor growth. We demonstrate results of this approach on the tumor proliferation, invasion and response to radiotherapy (PIRT) model, and we compare the accuracy and the computational effort of the method to simple forward finite difference sensitivity analysis. Furthermore, we use the spatiotemporal sensitivity during the gradient-based optimization of the spatiotemporal radiation protocol and present results for different parameters of the model.
Efficient sensitivity analysis method for chaotic dynamical systems
NASA Astrophysics Data System (ADS)
Liao, Haitao
2016-05-01
The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time-averaged quantities for chaotic dynamical systems. The key idea is to recast the time-averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, which forms the augmented equations of motion, is proposed to calculate the time-averaged integration variable, and the sensitivity coefficients are obtained by solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which depends on the final state of the Lagrange multipliers. The LU factorization technique used to calculate the Lagrange multipliers improves both the convergence behavior and the computational cost. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed when using the direct differentiation sensitivity analysis method.
Rico, Andreu; Van den Brink, Paul J
2015-08-01
In the present study, the authors evaluated the vulnerability of aquatic invertebrates to insecticides based on their intrinsic sensitivity and their population-level recovery potential. The relative sensitivity of invertebrates to 5 different classes of insecticides was calculated at the genus, family, and order levels using the acute toxicity data available in the US Environmental Protection Agency ECOTOX database. Biological trait information was linked to the calculated relative sensitivity to evaluate correlations between traits and sensitivity and to calculate a vulnerability index, which combines intrinsic sensitivity and traits describing the recovery potential of populations partially exposed to insecticides (e.g., voltinism, flying strength, occurrence in drift). The analysis shows that the relative sensitivity of arthropods depends on the insecticide mode of action. Traits such as degree of sclerotization, size, and respiration type showed good correlation to sensitivity and can be used to make predictions for invertebrate taxa without a priori sensitivity knowledge. The vulnerability analysis revealed that some of the Ephemeroptera, Plecoptera, and Trichoptera taxa were vulnerable to all insecticide classes and indicated that particular gastropod and bivalve species were potentially vulnerable. Microcrustaceans (e.g., daphnids, copepods) showed low potential vulnerability, particularly in lentic ecosystems. The methods described in the present study can be used for the selection of focal species to be included as part of ecological scenarios and higher tier risk assessments. © 2015 SETAC.
Sensitivity analysis of periodic errors in heterodyne interferometry
NASA Astrophysics Data System (ADS)
Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony
2011-03-01
Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.
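The variance-based stage can be sketched with the standard Saltelli estimator for first-order Sobol' indices; the toy input-output function below stands in for the periodic-error model, and the uniform inputs on [0, 1] are an assumption of this illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

def sobol_first_order(f, n_dims, n=100_000):
    """First-order Sobol' indices with the Saltelli-type estimator: two
    independent sample matrices A and B, plus 'AB_i' matrices in which
    column i of A is replaced by that of B."""
    A = rng.uniform(size=(n, n_dims))
    B = rng.uniform(size=(n, n_dims))
    yA, yB = f(A), f(B)
    var = np.var(np.concatenate([yA, yB]))  # total output variance
    s1 = np.empty(n_dims)
    for i in range(n_dims):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        s1[i] = np.mean(yB * (f(ABi) - yA)) / var  # Saltelli (2010) estimator
    return s1

# Toy stand-in for the periodic-error model
f = lambda x: np.sin(2 * np.pi * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]
print(sobol_first_order(f, 3))
```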
Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arampatzis, Georgios; Katsoulakis, Markos A.
2014-03-28
In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz-Kalos-Lebowitz algorithm's philosophy, where events are divided in classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption, and diffusion Kinetic Monte Carlo, that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the common random number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB source code.
A Small Range Six-Axis Accelerometer Designed with High Sensitivity DCB Elastic Element
Sun, Zhibo; Liu, Jinhao; Yu, Chunzhan; Zheng, Yili
2016-01-01
This paper describes a small-range six-axis accelerometer (measurement range ±g) with a high-sensitivity DCB (Double Cantilever Beam) elastic element. The sensor is developed based on a parallel mechanism because of its reliability. The accuracy of such sensors is affected by their sensitivity characteristics. To improve the sensitivity, a DCB structure is applied as the elastic element. Through dynamic analysis, the dynamic model of the accelerometer is established using the Lagrange equation, and the mass matrix and stiffness matrix are obtained by a partial derivative calculation and a conservative congruence transformation, respectively. By simplifying the structure of the accelerometer, a model of the free vibration is achieved, and the parameters of the sensor are designed based on the model. Through stiffness analysis of the DCB structure, the deflection curve of the beam is calculated. Compared with the result obtained using a finite element analysis simulation in ANSYS Workbench, the coincidence rate of the maximum deflection is 89.0% along the x-axis, 88.3% along the y-axis and 87.5% along the z-axis. Through strain analysis of the DCB elastic element, the sensitivity of the beam is obtained. According to the experimental results, the accuracy of the theoretical analysis is found to be 90.4% along the x-axis, 74.9% along the y-axis and 78.9% along the z-axis. The measurement errors of the linear accelerations ax, ay and az in the experiments are 2.6%, 0.6% and 1.31%, respectively. The experiments show that an accelerometer with a DCB elastic element exhibits excellent sensitivity and precision. PMID:27657089
Sensitivity analysis of Repast computational ecology models with R/Repast.
Prestes García, Antonio; Rodríguez-Patón, Alfonso
2016-12-01
Computational ecology is an emerging interdisciplinary discipline founded mainly on modeling and simulation methods for studying ecological systems. Among the existing modeling formalisms, individual-based modeling is particularly well suited for capturing the complex temporal and spatial dynamics, as well as the nonlinearities, arising in ecosystems, communities, or populations due to individual variability. In addition, being a bottom-up approach, it is useful for providing new insights into the local mechanisms generating some observed global dynamics. Of course, no conclusions about model results can be taken seriously if they are based on a single model execution and are not analyzed carefully. Therefore, a sound methodology should always be used to underpin the interpretation of model results. Sensitivity analysis is a methodology for quantitatively assessing the effect of input uncertainty on the simulation output, and it should be a compulsory part of every work based on an in-silico experimental setup. In this article, we present R/Repast, a GNU R package for running and analyzing Repast Simphony models, accompanied by two worked examples on how to perform global sensitivity analysis and how to interpret the results.
The dream of a one-stop-shop: Meta-analysis on myocardial perfusion CT.
Pelgrim, Gert Jan; Dorrius, Monique; Xie, Xueqian; den Dekker, Martijn A M; Schoepf, U Joseph; Henzler, Thomas; Oudkerk, Matthijs; Vliegenthart, Rozemarijn
2015-12-01
To determine the diagnostic performance of computed tomography (CT) perfusion techniques for the detection of functionally relevant coronary artery disease (CAD) in comparison to reference standards, including invasive coronary angiography (ICA), single photon emission computed tomography (SPECT), and magnetic resonance imaging (MRI). PubMed, Web of Knowledge and Embase were searched from January 1, 1998 until July 1, 2014. The search yielded 9475 articles. After duplicate removal, 6041 were screened on title and abstract. The resulting 276 articles were independently analyzed in full-text by two reviewers, and included if the inclusion criteria were met. The articles reporting diagnostic parameters, including true positives, true negatives, false positives and false negatives, were subsequently evaluated for the meta-analysis. Results were pooled according to CT perfusion technique, namely the snapshot techniques (single-phase rest, single-phase stress, single-phase dual-energy stress, and combined coronary CT angiography [rest] and single-phase stress), as well as the dynamic technique (dynamic stress CT perfusion). Twenty-two articles were included in the meta-analysis (1507 subjects). Pooled per-patient sensitivity and specificity of single-phase rest CT compared to rest SPECT were 89% (95% confidence interval [CI], 82-94%) and 88% (95% CI, 78-94%), respectively. Vessel-based sensitivity and specificity of single-phase stress CT compared to ICA-based >70% stenosis were 82% (95% CI, 64-92%) and 78% (95% CI, 61-89%). Segment-based sensitivity and specificity of single-phase dual-energy stress CT in comparison to stress MRI were 75% (95% CI, 60-85%) and 95% (95% CI, 80-99%). Segment-based sensitivity and specificity of dynamic stress CT perfusion compared to stress SPECT were 77% (95% CI, 67-85%) and 89% (95% CI, 78-95%). For combined coronary CT angiography and single-phase stress CT, vessel-based sensitivity and specificity in comparison to ICA-based >50% stenosis were 84% (95% CI, 67-93%) and 93% (95% CI, 89-96%). This meta-analysis shows considerable variation in techniques and reference standards for CT of myocardial blood supply. While CT seems sensitive and specific for evaluation of hemodynamically relevant CAD, studies so far are limited in size. Standardization of myocardial perfusion CT technique is essential. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
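To make the pooling step concrete, the sketch below shows one simple way to pool per-study sensitivities: fixed-effect inverse-variance weighting on the logit scale with a 95% CI. The study counts are made up, and the meta-analysis above likely used a more elaborate (e.g., bivariate or random-effects) model; this is only a minimal sketch of the idea.

```python
# Fixed-effect pooling of per-study sensitivity on the logit scale.
import numpy as np

studies = [  # (true positives, false negatives) per study -- hypothetical counts
    (45, 5), (30, 4), (60, 9), (25, 2),
]

logits, weights = [], []
for tp, fn in studies:
    tp_c, fn_c = tp + 0.5, fn + 0.5                  # continuity correction
    p = tp_c / (tp_c + fn_c)
    logits.append(np.log(p / (1 - p)))
    weights.append(1.0 / (1.0 / tp_c + 1.0 / fn_c))  # inverse variance of the logit

logits, weights = np.asarray(logits), np.asarray(weights)
pooled = (weights * logits).sum() / weights.sum()
se = np.sqrt(1.0 / weights.sum())
inv = lambda z: 1.0 / (1.0 + np.exp(-z))             # back-transform to a proportion
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled sensitivity {inv(pooled):.2%} (95% CI {inv(lo):.2%}-{inv(hi):.2%})")
```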
Motif-based analysis of large nucleotide data sets using MEME-ChIP
Ma, Wenxiu; Noble, William S; Bailey, Timothy L
2014-01-01
MEME-ChIP is a web-based tool for analyzing motifs in large DNA or RNA data sets. It can analyze peak regions identified by ChIP-seq, cross-linking sites identified by CLIP-seq and related assays, as well as sets of genomic regions selected using other criteria. MEME-ChIP performs de novo motif discovery, motif enrichment analysis, motif location analysis and motif clustering, providing a comprehensive picture of the DNA or RNA motifs that are enriched in the input sequences. MEME-ChIP performs two complementary types of de novo motif discovery: weight matrix-based discovery for high accuracy, and word-based discovery for high sensitivity. Motif enrichment analysis using DNA or RNA motifs from human, mouse, worm, fly and other model organisms provides even greater sensitivity. MEME-ChIP's interactive HTML output groups and aligns significant motifs to ease interpretation. This protocol takes less than 3 h, and it provides motif discovery approaches that are distinct from and complementary to other online methods. PMID:24853928
NASA Astrophysics Data System (ADS)
Tavakoli, S.; Poslad, S.; Fruhwirth, R.; Winter, M.
2012-04-01
This paper introduces an application of the novel EventTracker platform for instantaneous sensitivity analysis (SA) of large-scale real-time geo-information. Earth disaster management systems demand high-quality information to aid a quick and timely response to their evolving environments. The idea behind the proposed EventTracker platform is the assumption that modern information management systems are able to capture data in real time and have the technological flexibility to adjust their services to work with specific sources of data/information. However, to assure this adaptation in real time, the online data should be collected, interpreted, and translated into corrective actions in a concise and timely manner. This can hardly be handled by existing sensitivity analysis methods, because they rely on historical data and lazy processing algorithms. In event-driven systems, the effect of system inputs on its state is of value, as events could cause this state to change. This 'event triggering' situation underpins the logic of the proposed approach. The event-tracking sensitivity analysis method describes the system variables and states as a collection of events. The higher the occurrence of an input variable during the trigger of an event, the greater its potential impact will be on the final analysis of the system state. Experiments were designed to compare the proposed event-tracking sensitivity analysis with an existing entropy-based sensitivity analysis method. The results showed a 10% improvement in computational efficiency with no compromise in accuracy, and that the computational time to perform the sensitivity analysis is 0.5% of that required by the entropy-based method. The proposed method has been applied to real-world data in the context of preventing emerging crises at drilling rigs. One of the major purposes of such rigs is to drill boreholes to explore oil or gas reservoirs, with the final aim of recovering the content of such reservoirs, both onshore and offshore. Drilling a well is always guided by technical, economic and security constraints to protect crew, equipment and environment from injury, damage and pollution. Although risk assessment and local practice provide a high degree of security, uncertainty arises from the behaviour of the formation, which may cause critical situations at the rig. To overcome such uncertainties, real-time sensor measurements form a basis for predicting, and thus preventing, such crises; the proposed method supports the identification of the data necessary for that purpose.
Numerical modeling and performance analysis of zinc oxide (ZnO) thin-film based gas sensor
NASA Astrophysics Data System (ADS)
Punetha, Deepak; Ranjan, Rashmi; Pandey, Saurabh Kumar
2018-05-01
This manuscript describes the modeling and analysis of a zinc oxide thin-film based gas sensor. The conductance and sensitivity of the sensing layer are described as functions of temperature and of gas concentration, and the analysis covers both reducing and oxidizing agents. Simulation results revealed the change in resistance and sensitivity of the sensor with respect to temperature and different gas concentrations. To check the feasibility of the model, all simulated results were compared against different experimentally reported works. Wolkenstein theory has been used to model the proposed sensor, and the simulation results were obtained using device simulation software.
NASA Astrophysics Data System (ADS)
Schumacher, Florian; Friederich, Wolfgang
Due to increasing computational resources, the development of new numerically demanding methods and software for imaging Earth's interior remains of high interest in Earth sciences. Here, we give a description, from a user's and programmer's perspective, of the highly modular, flexible and extendable software package ASKI (Analysis of Sensitivity and Kernel Inversion), recently developed for iterative scattering-integral-based seismic full waveform inversion. In ASKI, the three fundamental steps of solving the seismic forward problem, computing waveform sensitivity kernels and deriving a model update are handled by independent software programs that interact via file output/input only. Furthermore, the spatial discretizations of the model space used for solving the seismic forward problem and for deriving model updates are kept completely independent. For this reason, ASKI does not contain a specific forward solver but instead provides a general interface to established community wave propagation codes. Moreover, the third fundamental step of deriving a model update can be repeated at relatively low cost, applying different kinds of model regularization or re-selecting/weighting the inverted dataset, without the need to re-solve the forward problem or re-compute the kernels. Additionally, ASKI offers the user sensitivity and resolution analysis tools based on the full sensitivity matrix and allows the user to compose customized workflows in a consistent computational environment. ASKI is written in modern Fortran and Python, is well documented and is freely available under the terms of the GNU General Public License (http://www.rub.de/aski).
Bernstein, Joshua G.W.; Mehraei, Golbarg; Shamma, Shihab; Gallun, Frederick J.; Theodoroff, Sarah M.; Leek, Marjorie R.
2014-01-01
Background A model that can accurately predict speech intelligibility for a given hearing-impaired (HI) listener would be an important tool for hearing-aid fitting or hearing-aid algorithm development. Existing speech-intelligibility models do not incorporate variability in suprathreshold deficits that are not well predicted by classical audiometric measures. One possible approach to the incorporation of such deficits is to base intelligibility predictions on sensitivity to simultaneously spectrally and temporally modulated signals. Purpose The likelihood of success of this approach was evaluated by comparing estimates of spectrotemporal modulation (STM) sensitivity to speech intelligibility and to psychoacoustic estimates of frequency selectivity and temporal fine-structure (TFS) sensitivity across a group of HI listeners. Research Design The minimum modulation depth required to detect STM applied to an 86 dB SPL four-octave noise carrier was measured for combinations of temporal modulation rate (4, 12, or 32 Hz) and spectral modulation density (0.5, 1, 2, or 4 cycles/octave). STM sensitivity estimates for individual HI listeners were compared to estimates of frequency selectivity (measured using the notched-noise method at 500, 1000, 2000, and 4000 Hz), TFS processing ability (2 Hz frequency-modulation detection thresholds for 500, 1000, 2000, and 4000 Hz carriers) and sentence intelligibility in noise (at a 0 dB signal-to-noise ratio) that were measured for the same listeners in a separate study. Study Sample Eight normal-hearing (NH) listeners and 12 listeners with a diagnosis of bilateral sensorineural hearing loss participated. Data Collection and Analysis STM sensitivity was compared between NH and HI listener groups using a repeated-measures analysis of variance. A stepwise regression analysis compared STM sensitivity for individual HI listeners to audiometric thresholds, age, and measures of frequency selectivity and TFS processing ability. A second stepwise regression analysis compared speech intelligibility to STM sensitivity and the audiogram-based Speech Intelligibility Index. Results STM detection thresholds were elevated for the HI listeners, but only for low rates and high densities. STM sensitivity for individual HI listeners was well predicted by a combination of estimates of frequency selectivity at 4000 Hz and TFS sensitivity at 500 Hz but was unrelated to audiometric thresholds. STM sensitivity accounted for an additional 40% of the variance in speech intelligibility beyond the 40% accounted for by the audibility-based Speech Intelligibility Index. Conclusions Impaired STM sensitivity likely results from a combination of a reduced ability to resolve spectral peaks and a reduced ability to use TFS information to follow spectral-peak movements. Combining STM sensitivity estimates with audiometric threshold measures for individual HI listeners provided a more accurate prediction of speech intelligibility than audiometric measures alone. These results suggest a significant likelihood of success for an STM-based model of speech intelligibility for HI listeners. PMID:23636210
Thermodynamics-based Metabolite Sensitivity Analysis in metabolic networks.
Kiparissides, A; Hatzimanikatis, V
2017-01-01
The increasing availability of large metabolomics datasets enhances the need for computational methodologies that can organize the data in a way that can lead to the inference of meaningful relationships. Knowledge of the metabolic state of a cell and how it responds to various stimuli and extracellular conditions can offer significant insight into the regulatory functions and how to manipulate them. Constraint-based methods, such as Flux Balance Analysis (FBA) and Thermodynamics-based Flux Analysis (TFA), are commonly used to estimate the flow of metabolites through genome-wide metabolic networks, making it possible to identify the ranges of flux values that are consistent with the studied physiological and thermodynamic conditions. However, unless key intracellular fluxes and metabolite concentrations are known, constraint-based models lead to underdetermined problem formulations. This lack of information propagates as uncertainty in the estimation of fluxes and of basic reaction properties such as reaction directionalities. Therefore, knowledge of which metabolites, if measured, would contribute the most to reducing this uncertainty can significantly improve our ability to define the internal state of the cell. In the present work we combine constraint-based modeling, Design of Experiments (DoE) and Global Sensitivity Analysis (GSA) into the Thermodynamics-based Metabolite Sensitivity Analysis (TMSA) method. TMSA ranks the metabolites comprising a metabolic network based on their ability to constrain the gamut of possible solutions to a limited, thermodynamically consistent set of internal states. TMSA is modular and can be applied to a single reaction, a metabolic pathway or an entire metabolic network. This is, to our knowledge, the first attempt to use metabolic modeling to provide a significance ranking of metabolites to guide experimental measurements. Copyright © 2016 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.
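For readers unfamiliar with the FBA building block mentioned above, the sketch below solves a tiny flux balance problem with scipy: maximize a "biomass" flux subject to steady-state mass balance S v = 0 and flux bounds. The network, bounds and objective are toy assumptions; the TMSA method itself additionally imposes thermodynamic constraints and ranks metabolites, which is not shown here.

```python
# Minimal flux balance analysis on a toy 3-metabolite network.
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake -> A; A -> B (v1); A -> C (v2); C -> B (v3); B -> biomass
#   columns: v_uptake, v1, v2, v3, v_biomass
S = np.array([
    [1, -1, -1,  0,  0],   # metabolite A balance
    [0,  1,  0,  1, -1],   # metabolite B balance
    [0,  0,  1, -1,  0],   # metabolite C balance
])
bounds = [(0, 10), (0, 5), (0, 5), (0, 5), (0, None)]
c = np.zeros(S.shape[1]); c[-1] = -1.0   # maximize biomass = minimize -v_biomass

res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
print("optimal biomass flux:", res.x[-1], "| flux vector:", np.round(res.x, 3))
```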
Meta-analysis of the relative sensitivity of semi-natural vegetation species to ozone.
Hayes, F; Jones, M L M; Mills, G; Ashmore, M
2007-04-01
This study identified 83 species from existing publications suitable for inclusion in a database of the sensitivity of species to ozone (the OZOVEG database). An index, the relative sensitivity to ozone, was calculated for each species based on changes in biomass in order to test for species traits associated with ozone sensitivity. Meta-analysis of the ozone sensitivity data showed a wide inter-specific range in response to ozone. Some relationships with plant physiological and ecological characteristics were identified. Plants of the therophyte lifeform were particularly sensitive to ozone. Species with higher mature leaf N concentration were more sensitive to ozone than those with lower leaf N concentration. Some relationships between relative sensitivity to ozone and Ellenberg habitat requirements were also identified. In contrast, no relationships between relative sensitivity to ozone and mature leaf P concentration, Grime's CSR strategy, leaf longevity, flowering season, stomatal density or maximum altitude were found. The relative sensitivity of species and the relationships with plant characteristics identified in this study could be used to predict the sensitivity to ozone of untested species and communities.
Multidisciplinary Analysis and Optimal Design: As Easy as it Sounds?
NASA Technical Reports Server (NTRS)
Moore, Greg; Chainyk, Mike; Schiermeier, John
2004-01-01
The viewgraph presentation examines optimal design for precision, large aperture structures. Discussion focuses on aspects of design optimization, code architecture and current capabilities, and planned activities and collaborative area suggestions. The discussion of design optimization examines design sensitivity analysis; practical considerations; and new analytical environments including finite element-based capability for high-fidelity multidisciplinary analysis, design sensitivity, and optimization. The discussion of code architecture and current capabilities includes basic thermal and structural elements, nonlinear heat transfer solutions and process, and optical modes generation.
Results of an integrated structure-control law design sensitivity analysis
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.
1988-01-01
Next-generation air and space vehicle designs are driven by increased performance requirements, demanding a high level of design integration between traditionally separate design disciplines. Interdisciplinary analysis capabilities have been developed, for aeroservoelastic aircraft and large flexible spacecraft control for instance, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchal problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, which predicts the change in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if a parameter were to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.
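The validation idea (recompute the optimal control law for discrete parameter variations) can be sketched on a tiny problem. Below, an LQR gain is recomputed for perturbed plant parameters and a central finite-difference sensitivity is formed; the paper instead derives analytical sensitivity expressions, and the 2-state mass-spring-damper plant here is a made-up stand-in, not the aeroservoelastic model.

```python
# Finite-difference sensitivity of an LQR optimal gain w.r.t. a plant parameter.
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(k_spring):
    # mass-spring-damper: x1' = x2, x2' = -k x1 - 0.5 x2 + u
    A = np.array([[0.0, 1.0], [-k_spring, -0.5]])
    B = np.array([[0.0], [1.0]])
    Q, R = np.eye(2), np.array([[1.0]])
    P = solve_continuous_are(A, B, Q, R)     # Riccati solution
    return np.linalg.solve(R, B.T @ P)       # K = R^{-1} B^T P

k0, dk = 2.0, 1e-4
dK = (lqr_gain(k0 + dk) - lqr_gain(k0 - dk)) / (2 * dk)
print("dK/dk (central finite difference):", dK)
```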
NASA Astrophysics Data System (ADS)
Sakamoto, Yuri; Uemura, Kohei; Ikuta, Takashi; Maehashi, Kenzo
2018-04-01
We have succeeded in fabricating a hydrogen gas sensor based on palladium-modified graphene field-effect transistors (FETs). The negative-voltage shift in the transfer characteristics was observed with exposure to hydrogen gas, which was explained by the change in work function. The hydrogen concentration dependence of the voltage shift was investigated using graphene FETs with palladium deposited by three different evaporation processes. The results indicate that the hydrogen detection sensitivity of the palladium-modified graphene FETs is strongly dependent on the palladium configuration. Therefore, the palladium-modified graphene FET is a candidate for breath analysis.
PC-BASED SUPERCOMPUTING FOR UNCERTAINTY AND SENSITIVITY ANALYSIS OF MODELS
Evaluating uncertainty and sensitivity of multimedia environmental models that integrate assessments of air, soil, sediments, groundwater, and surface water is a difficult task. It can be an enormous undertaking even for simple, single-medium models (i.e. groundwater only) descr...
NASA Astrophysics Data System (ADS)
Yin, X.; Chen, G.; Li, W.; Huthchins, D. A.
2013-01-01
Previous work indicated that the capacitive imaging (CI) technique is a useful NDE tool which can be used on a wide range of materials, including metals, glass/carbon fibre composite materials and concrete. The imaging performance of the CI technique for a given application is determined by the design parameters and characteristics of the CI probe. In this paper, a rapid method for calculating the whole-probe sensitivity distribution based on a finite element model (FEM) is presented to provide a direct view of the imaging capabilities of the planar CI probe. Sensitivity distributions of CI probes with different geometries were obtained, and factors influencing the sensitivity distribution were studied. Comparisons between CI probes with a point-to-point triangular electrode pair and a back-to-back triangular electrode pair were made based on analysis of the corresponding sensitivity distributions. The results indicated that the sensitivity distribution could be useful for optimising the probe design parameters and predicting the imaging performance.
Variational Methods in Sensitivity Analysis and Optimization for Aerodynamic Applications
NASA Technical Reports Server (NTRS)
Ibrahim, A. H.; Hou, G. J.-W.; Tiwari, S. N. (Principal Investigator)
1996-01-01
Variational methods (VM) sensitivity analysis, the continuous alternative to discrete sensitivity analysis, is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The determination of the sensitivity derivatives of the performance index or functional entails the coupled solutions of the state and costate equations. As the stable and converged numerical solution of the costate equations with their boundary conditions is a priori unknown, numerical stability analysis is performed on both the state and costate equations. Thereafter, based on the amplification factors obtained by solving the generalized eigenvalue equations, the stability behavior of the costate equations is discussed and compared with that of the state (Euler) equations. The stability analysis of the costate equations suggests that a converged and stable solution of the costate equations is possible only if the computational domain of the costate equations is transformed to take into account the reverse-flow nature of the costate equations. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods show a substantial gain in computational efficiency, i.e., computer time and memory, when compared with finite difference sensitivity analysis.
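The core state/costate structure can be seen in miniature in the discrete setting. The sketch below is the discrete analogue, not the paper's continuous Euler-equation formulation: for a linear state equation A(p) u = b and functional J = c^T u, one adjoint solve A^T λ = c yields dJ/dp = -λ^T (dA/dp) u for any number of parameters; the 1D diffusion operator and the functional are assumptions for illustration.

```python
# Discrete adjoint sensitivity for a linear state equation, checked by FD.
import numpy as np

n = 50
def build_A(p):
    # 1D diffusion with conductivity p: tridiagonal finite-difference operator
    main = 2.0 * p * np.ones(n)
    off = -p * np.ones(n - 1)
    return np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

p0 = 1.3
b = np.ones(n)
c = np.zeros(n); c[n // 2] = 1.0               # J = u at the midpoint

A = build_A(p0)
u = np.linalg.solve(A, b)                       # state solve
lam = np.linalg.solve(A.T, c)                   # adjoint (costate) solve
dA_dp = build_A(1.0)                            # A is linear in p, so dA/dp = A(1)
dJ_dp_adjoint = -lam @ (dA_dp @ u)

eps = 1e-6                                      # finite-difference check
u_p = np.linalg.solve(build_A(p0 + eps), b)
print(dJ_dp_adjoint, (c @ u_p - c @ u) / eps)   # the two values should agree
```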
SEP thrust subsystem performance sensitivity analysis
NASA Technical Reports Server (NTRS)
Atkins, K. L.; Sauer, C. G., Jr.; Kerrisk, D. J.
1973-01-01
This is a two-part report on solar electric propulsion (SEP) performance sensitivity analysis. The first part describes the preliminary analysis of the SEP thrust system performance for an Encke rendezvous mission. A detailed description of thrust subsystem hardware tolerances on mission performance is included together with nominal spacecraft parameters based on these tolerances. The second part describes the method of analysis and graphical techniques used in generating the data for Part 1. Included is a description of both the trajectory program used and the additional software developed for this analysis. Part 2 also includes a comprehensive description of the use of the graphical techniques employed in this performance analysis.
Scale/TSUNAMI Sensitivity Data for ICSBEP Evaluations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T; Reed, Davis Allan; Lefebvre, Robert A
2011-01-01
The Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) software developed at Oak Ridge National Laboratory (ORNL) as part of the Scale code system provide unique methods for code validation, gap analysis, and experiment design. For TSUNAMI analysis, sensitivity data are generated for each application and each existing or proposed experiment used in the assessment. The validation of diverse sets of applications requires potentially thousands of data files to be maintained and organized by the user, and a growing number of these files are available through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE) distributed through the International Criticality Safety Benchmark Evaluation Program (ICSBEP). To facilitate the use of the IHECSBE benchmarks in rigorous TSUNAMI validation and gap analysis techniques, ORNL generated SCALE/TSUNAMI sensitivity data files (SDFs) for several hundred benchmarks for distribution with the IHECSBE. For the 2010 edition of IHECSBE, the sensitivity data were generated using 238-group cross-section data based on ENDF/B-VII.0 for 494 benchmark experiments. Additionally, ORNL has developed a quality assurance procedure to guide the generation of Scale inputs and sensitivity data, as well as a graphical user interface to facilitate the use of sensitivity data in identifying experiments and applying them in validation studies.
Time Series Analysis Based on Running Mann Whitney Z Statistics
USDA-ARS?s Scientific Manuscript database
A sensitive and objective time series analysis method based on the calculation of Mann Whitney U statistics is described. This method samples data rankings over moving time windows, converts those samples to Mann-Whitney U statistics, and then normalizes the U statistics to Z statistics using Monte-...
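A minimal sketch of the running Mann-Whitney idea follows: slide two adjacent windows along the series, compute U for "later window vs earlier window", and convert to Z. The abstract's normalization step is truncated in the record above (apparently Monte Carlo based); the sketch uses the standard normal approximation instead, and the window length is an assumption.

```python
# Running Mann-Whitney Z statistic over two adjacent moving windows.
import numpy as np
from scipy.stats import rankdata

def running_mw_z(series, w=20):
    series = np.asarray(series, float)
    z = np.full(series.size, np.nan)
    mu = w * w / 2.0                                  # E[U] under H0
    sigma = np.sqrt(w * w * (2 * w + 1) / 12.0)       # SD[U] under H0
    for i in range(w, series.size - w + 1):
        a, b = series[i - w:i], series[i:i + w]
        ranks = rankdata(np.concatenate([a, b]))
        u = ranks[w:].sum() - w * (w + 1) / 2.0       # U for the later window
        z[i] = (u - mu) / sigma
    return z

rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0, 1, 100), rng.normal(1, 1, 100)])  # step change
print(np.nanmax(running_mw_z(y)))  # large positive Z near the change point
```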
NASA Astrophysics Data System (ADS)
Khorashadi Zadeh, Farkhondeh; Nossent, Jiri; van Griensven, Ann; Bauwens, Willy
2017-04-01
Parameter estimation is a major concern in hydrological modeling, which may limit the use of complex simulators with a large number of parameters. To support the selection of parameters to include in or exclude from the calibration process, Global Sensitivity Analysis (GSA) is widely applied in modeling practice. Based on the results of GSA, the influential and the non-influential parameters are identified (i.e. parameter screening). Nevertheless, the choice of the screening threshold below which parameters are considered non-influential is a critical issue, which has recently received more attention in the GSA literature. In theory, the sensitivity index of a non-influential parameter has a value of zero. However, since numerical approximations, rather than analytical solutions, are utilized in GSA methods to calculate the sensitivity indices, small but non-zero indices may be obtained for non-influential parameters. In order to assess the threshold that identifies non-influential parameters in GSA methods, we propose to calculate the sensitivity index of a "dummy parameter". This dummy parameter has no influence on the model output, but will have a non-zero sensitivity index, representing the error due to the numerical approximation. Hence, the parameters whose indices are above the sensitivity index of the dummy parameter can be classified as influential, whereas the parameters whose indices are below this index are within the range of the numerical error and should be considered non-influential. To demonstrate the effectiveness of the proposed "dummy parameter approach", 26 parameters of a Soil and Water Assessment Tool (SWAT) model are selected to be analyzed and screened, using the variance-based Sobol' and moment-independent PAWN methods. The sensitivity index of the dummy parameter is calculated from sampled data, without changing the model equations. Moreover, the calculation does not even require additional model evaluations for the Sobol' method. A formal statistical test validates these parameter screening results. Based on the dummy parameter screening, 11 model parameters are identified as influential. Therefore, the "dummy parameter approach" can facilitate the parameter screening process and provide guidance for GSA users in defining a screening threshold, with only limited additional resources. Key words: Parameter screening, Global sensitivity analysis, Dummy parameter, Variance-based method, Moment-independent method
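The dummy-parameter trick is easy to reproduce on a toy function. The sketch below uses Saltelli-style sampling and Jansen's total-order estimator; the extra column is sampled like any parameter but never used by the model, so its index estimates the numerical-error floor used as the screening threshold. The model, sample size and estimator choice are assumptions (this is not the SWAT setup).

```python
# Dummy-parameter screening with a variance-based total-order estimator.
import numpy as np

def model(x):                      # toy model: only x0 and x1 matter much
    return x[:, 0] ** 2 + 0.5 * x[:, 1] + 0.01 * x[:, 2]

k, n = 4, 4096                     # 3 real parameters + 1 dummy (last column)
rng = np.random.default_rng(0)
A, B = rng.random((n, k)), rng.random((n, k))
yA, yB = model(A), model(B)
var = np.var(np.concatenate([yA, yB]))

for i in range(k):
    ABi = A.copy(); ABi[:, i] = B[:, i]          # resample only column i
    yABi = model(ABi)
    st = np.mean((yA - yABi) ** 2) / (2 * var)   # Jansen total-order estimator
    label = "dummy" if i == k - 1 else f"x{i}"
    print(f"ST[{label}] = {st:.4f}")
# Parameters whose ST does not exceed ST[dummy] are screened as non-influential.
```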
The effect of uncertainties in distance-based ranking methods for multi-criteria decision making
NASA Astrophysics Data System (ADS)
Jaini, Nor I.; Utyuzhnikov, Sergei V.
2017-08-01
Data in multi-criteria decision making are often imprecise and changeable, so it is important to carry out a sensitivity analysis for the multi-criteria decision making problem. This paper presents a sensitivity analysis for some ranking techniques based on distance measures in multi-criteria decision making. Two types of uncertainty are considered for the sensitivity analysis test: the first relates to the input data, while the second concerns the Decision Maker preferences (weights). The ranking techniques considered in this study are TOPSIS, the relative distance method and the trade-off ranking method. TOPSIS and the relative distance method measure a distance from an alternative to the ideal and anti-ideal solutions. In turn, the trade-off ranking calculates the distance of an alternative to the extreme solutions and to the other alternatives. Several test cases are considered to study the performance of each ranking technique under both types of uncertainty.
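To make the weight-uncertainty test concrete, the sketch below implements standard TOPSIS and counts rank reversals under random perturbations of the weights. The decision matrix, weights and perturbation range are made up, and the paper also perturbs the input data and compares against the relative-distance and trade-off methods, which is not reproduced here.

```python
# TOPSIS with a crude weight-perturbation sensitivity test.
import numpy as np

def topsis(X, w, benefit):
    R = X / np.linalg.norm(X, axis=0)           # vector normalization per criterion
    V = R * w
    ideal = np.where(benefit, V.max(0), V.min(0))
    anti = np.where(benefit, V.min(0), V.max(0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)              # closeness coefficient

X = np.array([[250, 16, 12], [200, 16, 8], [300, 32, 16], [275, 32, 8]], float)
w = np.array([0.4, 0.35, 0.25])
benefit = np.array([False, True, True])         # criterion 0 is a cost

base = np.argsort(-topsis(X, w, benefit))       # baseline ranking (best first)
rng = np.random.default_rng(2)
reversals = 0
for _ in range(1000):
    wp = w * rng.uniform(0.8, 1.2, 3); wp /= wp.sum()
    reversals += not np.array_equal(np.argsort(-topsis(X, wp, benefit)), base)
print("baseline ranking:", base, "| rank changes in", reversals, "of 1000 trials")
```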
Sensitivity analysis of a ground-water-flow model
Torak, Lynn J.; ,
1991-01-01
A sensitivity analysis was performed on 18 hydrological factors affecting steady-state groundwater flow in the Upper Floridan aquifer near Albany, southwestern Georgia. Computations were based on a calibrated, two-dimensional, finite-element digital model of the stream-aquifer system and the corresponding data inputs. Flow-system sensitivity was analyzed by computing water-level residuals obtained from simulations involving individual changes to each hydrological factor. Hydrological factors to which computed water levels were most sensitive were those that produced the largest change in the sum-of-squares of residuals for the smallest change in factor value. Plots of the sum-of-squares of residuals against multiplier or additive values that effect change in the hydrological factors are used to evaluate the influence of each factor on the simulated flow system. The shapes of these 'sensitivity curves' indicate the importance of each hydrological factor to the flow system. Because the sensitivity analysis can be performed during the preliminary phase of a water-resource investigation, it can be used to identify the types of hydrological data required to accurately characterize the flow system prior to collecting additional data or making management decisions.
NASA Astrophysics Data System (ADS)
Danilov, A. A.; Kramarenko, V. K.; Nikolaev, D. V.; Rudnev, S. G.; Salamatova, V. Yu; Smirnov, A. V.; Vassilevski, Yu V.
2013-04-01
In this work, an adaptive unstructured tetrahedral mesh generation technology is applied for simulation of segmental bioimpedance measurements using high-resolution whole-body model of the Visible Human Project man. Sensitivity field distributions for a conventional tetrapolar, as well as eight- and ten-electrode measurement configurations are obtained. Based on the ten-electrode configuration, we suggest an algorithm for monitoring changes in the upper lung area.
Post-buckling of a pressured biopolymer spherical shell with the mode interaction
NASA Astrophysics Data System (ADS)
Zhang, Lei; Ru, C. Q.
2018-03-01
Imperfection sensitivity is essential to the mechanical behaviour of biopolymer shells, which are characterized by high geometric heterogeneity. The present work studies the initial post-buckling and imperfection sensitivity of a pressured biopolymer spherical shell based on non-axisymmetric buckling modes and the associated mode interaction. Our results indicate that for biopolymer spherical shells with moderate radius-to-thickness ratio (say, less than 30) and smaller effective bending thickness (say, less than 0.2 times the average shell thickness), the imperfection sensitivity predicted based on the axisymmetric mode without the mode interaction is close to the present results based on non-axisymmetric modes with the mode interaction, with small (typically less than 10%) relative errors. However, for biopolymer spherical shells with larger effective bending thickness, the maximum load an imperfect shell can sustain, as predicted by the present non-axisymmetric analysis, can be significantly (typically around 30%) lower than that predicted based on the axisymmetric mode without the mode interaction. In such cases, a more accurate non-axisymmetric analysis with the mode interaction, as given in the present work, is required for the imperfection sensitivity of pressured buckling of biopolymer spherical shells. Finally, the implications of the present study for two specific types of biopolymer spherical shells (viral capsids and ultrasound contrast agents) are discussed.
Efficient sensitivity analysis and optimization of a helicopter rotor
NASA Technical Reports Server (NTRS)
Lim, Joon W.; Chopra, Inderjit
1989-01-01
Aeroelastic optimization of a system essentially consists of the determination of the optimum values of design variables which minimize the objective function and satisfy certain aeroelastic and geometric constraints. The process of aeroelastic optimization analysis is illustrated. To carry out aeroelastic optimization effectively, one needs a reliable analysis procedure to determine the steady response and stability of a rotor system in forward flight. The rotor dynamic analysis used in the present study, developed in-house at the University of Maryland, is based on finite elements in space and time. The analysis consists of two major phases: vehicle trim and rotor steady response (coupled trim analysis), and aeroelastic stability of the blade. For a reduction of helicopter vibration, the optimization process requires the sensitivity derivatives of the objective function and the aeroelastic stability constraints. For this, the derivatives of the steady response, hub loads and blade stability roots are calculated using a direct analytical approach. An automated optimization procedure is developed by coupling the rotor dynamic analysis, the design sensitivity analysis and the constrained optimization code CONMIN.
SPR based hybrid electro-optic biosensor for β-lactam antibiotics determination in water
NASA Astrophysics Data System (ADS)
Galatus, Ramona; Feier, Bogdan; Cristea, Cecilia; Cennamo, Nunzio; Zeni, Luigi
2017-09-01
The present work aims to provide a hybrid platform capable of complementary and sensitive detection of β-lactam antibiotics, ampicillin in particular. The use of an aptamer specific to ampicillin assures good selectivity and sensitivity for the detection of ampicillin from different matrices. This new approach targets a portable, remote sensing platform based on a low-cost, small-size and low-power-consumption solution. The simple experimental hybrid platform integrates the results from the D-shape surface plasmon resonance plastic optical fiber (SPR-POF) sensor and from the electrochemical (bio)sensor for the analysis of ampicillin, delivering sensitive and reliable results. The SPR-POF, already used in many previous applications, is embedded in a new experimental setup with fluorescent fiber emitters, for broadband wavelength analysis, low power consumption and low heating of the sensing platform.
A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models
NASA Astrophysics Data System (ADS)
Brugnach, M.; Neilson, R.; Bolte, J.
2001-12-01
The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One of the problems with this approach is that it considers the model as a "black box" and focuses on explaining model behavior by analyzing the input-output relationship. Since these models have a high degree of non-linearity, understanding how an input affects an output can be an extremely difficult task. Operationally, the application of this technique may constitute a challenging task because complex process-based models are generally characterized by a large parameter space. In order to overcome some of these difficulties, we propose a method of sensitivity analysis applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and it aims to determine how sensitive the model output is to variations in the processes. Once the processes that exert the major influence on the output are identified, the causes of their variability can be found. Some of the advantages of this approach are that it reduces the dimensionality of the search space, it facilitates the interpretation of the results, and it provides information that allows exploration of uncertainty at the process level and how it might affect model output. We present an example using the vegetation model BIOME-BGC.
Surrogate models for efficient stability analysis of brake systems
NASA Astrophysics Data System (ADS)
Nechak, Lyes; Gillot, Frédéric; Besset, Sébastien; Sinou, Jean-Jacques
2015-07-01
This study assesses the capacity of global sensitivity analysis, combined with the kriging formalism, to support the robust stability analysis of brake systems, which is too costly when performed with the classical complex eigenvalue analysis (CEA) based on finite element models (FEMs). By considering a simplified brake system, global sensitivity analysis is first shown to be very helpful for understanding the effects of design parameters on the brake system's stability. This is enabled by the so-called Sobol indices, which discriminate design parameters with respect to their influence on the stability. Consequently, only the uncertainty of the influential parameters is taken into account in the following step, namely surrogate modelling based on kriging. The latter is then demonstrated to be an interesting alternative to FEMs, since it allows, at lower cost, an accurate estimation of the system's proportions of instability corresponding to the influential parameters.
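The surrogate step can be sketched with scikit-learn: fit a Gaussian process (kriging) to a few evaluations of an expensive "stability" function, then estimate the proportion of unstable designs by cheap Monte Carlo on the surrogate. The 2-parameter toy function below stands in for the CEA/FEM of the brake system and is purely an assumption.

```python
# Kriging surrogate for estimating a proportion of unstable designs.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def critical_eigenvalue_real_part(x):       # toy stand-in; instability if > 0
    return 0.8 * x[:, 0] ** 2 - x[:, 1] - 0.1

rng = np.random.default_rng(3)
X_train = rng.uniform(-1, 1, (40, 2))       # 40 "expensive" model runs
y_train = critical_eigenvalue_real_part(X_train)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gp.fit(X_train, y_train)

X_mc = rng.uniform(-1, 1, (100_000, 2))     # cheap surrogate evaluations
p_unstable = np.mean(gp.predict(X_mc) > 0.0)
print(f"estimated proportion of unstable designs: {p_unstable:.3f}")
```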
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sayer, Andrew M.; Hsu, C.; Bettenhausen, Corey
Cases of absorbing aerosols above clouds (AAC), such as smoke or mineral dust, are omitted from most routinely-processed space-based aerosol optical depth (AOD) data products, including those from the Moderate Resolution Imaging Spectroradiometer (MODIS). This study presents a sensitivity analysis and preliminary algorithm to retrieve above-cloud AOD and liquid cloud optical depth (COD) for AAC cases from MODIS or similar
Rahman, Md Musfiqur; Abd El-Aty, A M; Kim, Sung-Woo; Shin, Sung Chul; Shin, Ho-Chul; Shim, Jae-Han
2017-01-01
In pesticide residue analysis, relatively low-sensitivity traditional detectors, such as UV, diode array, electron-capture, flame photometric, and nitrogen-phosphorus detectors, have been used following classical sample preparation (liquid-liquid extraction and open glass column cleanup); however, that extraction method is laborious, time-consuming, and requires large volumes of toxic organic solvents. The quick, easy, cheap, effective, rugged, and safe (QuEChERS) method was introduced in 2003 and coupled with selective and sensitive mass detectors to overcome the aforementioned drawbacks. Compared to traditional detectors, mass spectrometers are still far more expensive and, owing to maintenance and cost-related issues, not available in most modestly equipped laboratories; even where they are available, traditional detectors are still being used for the analysis of residues in agricultural commodities. It is widely known that the quick, easy, cheap, effective, rugged, and safe method is incompatible with conventional detectors owing to matrix complexity and low sensitivity. Therefore, modifications using column/cartridge-based solid-phase extraction instead of dispersive solid-phase extraction for cleanup have been applied in most cases to compensate and to enable the adaptation of the extraction method to conventional detectors. In gas chromatography, a matrix enhancement effect has been observed for some analytes, which lowers the limit of detection and therefore makes gas chromatography compatible with the quick, easy, cheap, effective, rugged, and safe extraction method. For liquid chromatography with a UV detector, a combination of column/cartridge-based solid-phase extraction and dispersive solid-phase extraction was found to reduce the matrix interference and increase the sensitivity. A suitable double-layer column/cartridge-based solid-phase extraction might be the perfect solution, instead of a time-consuming combination of column/cartridge-based solid-phase extraction and dispersive solid-phase extraction. Therefore, replacing dispersive solid-phase extraction with column/cartridge-based solid-phase extraction in the cleanup step can make the quick, easy, cheap, effective, rugged, and safe extraction method compatible with traditional detectors for more sensitive, effective, and green analysis. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Reliability Coupled Sensitivity Based Design Approach for Gravity Retaining Walls
NASA Astrophysics Data System (ADS)
Guha Ray, A.; Baidya, D. K.
2012-09-01
Sensitivity analysis involving different random variables and different potential failure modes of a gravity retaining wall highlights the fact that high sensitivity of a particular variable for a particular mode of failure does not necessarily imply a remarkable contribution to the overall failure probability. The present paper aims at identifying a probabilistic risk factor (Rf) for each random variable based on the combined effects of the failure probability (Pf) of each mode of failure of a gravity retaining wall and the sensitivity of each of the random variables for these failure modes. Pf is calculated by Monte Carlo simulation, and the sensitivity analysis of each random variable is carried out by F-test analysis. The structure, redesigned by modifying the original random variables with the risk factors, is safe against all the variations of the random variables. It is observed that Rf for the friction angle of the backfill soil (φ1) increases, and that for the cohesion of the foundation soil (c2) decreases, with an increase in the variation of φ1, while Rf for the unit weights of both soils (γ1 and γ2) and for the friction angle of the foundation soil (φ2) remains almost constant under variation of the soil properties. The results compare well with some existing deterministic and probabilistic methods and are found to be cost-effective. It is seen that if the variation of φ1 remains within 5%, a significant reduction in cross-sectional area can be achieved, but if the variation is more than 7-8%, the structure needs to be modified. Finally, design guidelines for different wall dimensions, based on the present approach, are proposed.
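The Pf computation for a single failure mode is straightforward to sketch. Below, Monte Carlo samples of a few soil properties feed a simplified sliding limit state (Rankine active thrust vs. base friction); all distributions, the wall geometry and the limit state are assumptions, and the paper combines several failure modes with F-test sensitivities, which is not shown.

```python
# Monte Carlo failure probability for the sliding mode of a gravity wall.
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
phi1 = np.radians(rng.normal(32.0, 2.0, n))        # backfill friction angle (deg)
gamma1 = rng.normal(18.0, 0.9, n)                  # backfill unit weight (kN/m^3)
mu = np.tan(np.radians(rng.normal(28.0, 2.0, n)))  # base friction coefficient

H, W = 5.0, 120.0                                  # wall height (m), weight (kN/m)
ka = (1 - np.sin(phi1)) / (1 + np.sin(phi1))       # Rankine active coefficient
driving = 0.5 * ka * gamma1 * H ** 2               # active thrust per unit length
resisting = mu * W
pf = np.mean(driving > resisting)                  # fraction of failed samples
print(f"P_f(sliding) ~ {pf:.4f}")
```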
NASA Astrophysics Data System (ADS)
Fu, Rongxin; Li, Qi; Zhang, Junqi; Wang, Ruliang; Lin, Xue; Xue, Ning; Su, Ya; Jiang, Kai; Huang, Guoliang
2016-10-01
Label-free point mutation detection is particularly important in biomedical research and clinical diagnosis, since gene mutations occur naturally and can bring about highly fatal diseases. In this paper, a label-free and highly sensitive approach is proposed for point mutation detection based on hyperspectral interferometry. A hybridization strategy is designed to discriminate a single-base substitution with sequence-specific DNA ligase: double-strand structures form only if the added oligonucleotides are perfectly paired to the probe sequence. The proposed approach makes full use of the inherent conformation of double-strand DNA molecules on the substrate, and a spectrum analysis method is established to resolve the sub-nanoscale thickness variation, which enables highly sensitive mutation detection. The limit of detection reaches 4 pg/mm2 according to the experimental results. Detection of a lung cancer gene point mutation was demonstrated, proving the high selectivity and multiplex analysis capability of the proposed biosensor.
Bounthavong, Mark; Pruitt, Larry D; Smolenski, Derek J; Gahm, Gregory A; Bansal, Aasthaa; Hansen, Ryan N
2018-02-01
Introduction Home-based telebehavioural healthcare improves access to mental health care for patients restricted by travel burden. However, there is limited evidence assessing the economic value of home-based telebehavioural health care compared to in-person care. We sought to compare the economic impact of home-based telebehavioural health care and in-person care for depression among current and former US service members. Methods We performed trial-based cost-minimisation and cost-utility analyses to assess the economic impact of home-based telebehavioural health care versus in-person behavioural care for depression. Our analyses focused on the payer perspective (Department of Defense and Department of Veterans Affairs) at three months. We also performed a scenario analysis where all patients possessed video-conferencing technology that was approved by these agencies. The cost-utility analysis evaluated the impact of different depression categories on the incremental cost-effectiveness ratio. One-way and probabilistic sensitivity analyses were performed to test the robustness of the model assumptions. Results In the base case analysis the total direct cost of home-based telebehavioural health care was higher than in-person care (US$71,974 versus US$20,322). Assuming that patients possessed government-approved video-conferencing technology, home-based telebehavioural health care was less costly compared to in-person care (US$19,177 versus US$20,322). In one-way sensitivity analyses, the proportion of patients possessing personal computers was a major driver of direct costs. In the cost-utility analysis, home-based telebehavioural health care was dominant when patients possessed video-conferencing technology. Results from probabilistic sensitivity analyses did not differ substantially from base case results. Discussion Home-based telebehavioural health care is dependent on the cost of supplying video-conferencing technology to patients but offers the opportunity to increase access to care. Health-care policies centred on implementation of home-based telebehavioural health care should ensure that these technologies are able to be successfully deployed on patients' existing technology.
2010-01-01
Background Alzheimer's Disease (AD) affects a growing proportion of the population each year. Novel therapies on the horizon may slow the progress of AD symptoms and avoid cases altogether. Initiating treatment for the underlying pathology of AD would ideally be based on biomarker screening tools identifying pre-symptomatic individuals. Early-stage modeling provides estimates of potential outcomes and informs policy development. Methods A time-to-event (TTE) simulation provided estimates of screening asymptomatic patients in the general population age ≥55 and treatment impact on the number of patients reaching AD. Patients were followed from AD screen until all-cause death. Baseline sensitivity and specificity were 0.87 and 0.78, with treatment on positive screen. Treatment slowed progression by 50%. Events were scheduled using literature-based age-dependent incidences of AD and death. Results The base case results indicated increased AD free years (AD-FYs) through delays in onset and a reduction of 20 AD cases per 1000 screened individuals. Patients completely avoiding AD accounted for 61% of the incremental AD-FYs gained. Total years of treatment per 1000 screened patients was 2,611. The number-needed-to-screen was 51 and the number-needed-to-treat was 12 to avoid one case of AD. One-way sensitivity analysis indicated that duration of screening sensitivity and rescreen interval impact AD-FYs the most. A two-way sensitivity analysis found that for a test with an extended duration of sensitivity (15 years) the number of AD cases avoided was 6,000-7,000 cases for a test with higher sensitivity and specificity (0.90,0.90). Conclusions This study yielded valuable parameter range estimates at an early stage in the study of screening for AD. Analysis identified duration of screening sensitivity as a key variable that may be unavailable from clinical trials. PMID:20433705
Pai, Madhukar; Kalantri, Shriprakash; Pascopella, Lisa; Riley, Lee W; Reingold, Arthur L
2005-10-01
To summarize, using meta-analysis, the accuracy of bacteriophage-based assays for the detection of rifampicin resistance in Mycobacterium tuberculosis. By searching multiple databases and sources we identified a total of 21 studies eligible for meta-analysis. Of these, 14 studies used phage amplification assays (including eight studies on the commercial FASTPlaque-TB kits), and seven used luciferase reporter phage (LRP) assays. Sensitivity, specificity, and agreement between phage assay and reference standard (e.g. agar proportion method or BACTEC 460) results were the main outcomes of interest. When performed on culture isolates (N=19 studies), phage assays appear to have relatively high sensitivity and specificity. Eleven of 19 (58%) studies reported sensitivity and specificity estimates > or =95%, and 13 of 19 (68%) studies reported > or =95% agreement with reference standard results. Specificity estimates were slightly lower and more variable than sensitivity; 5 of 19 (26%) studies reported specificity <90%. Only two studies performed phage assays directly on sputum specimens; although one study reported sensitivity and specificity of 100 and 99%, respectively, another reported sensitivity of 86% and specificity of 73%. Current evidence is largely restricted to the use of phage assays for the detection of rifampicin resistance in culture isolates. When used on culture isolates, these assays appear to have high sensitivity, but variable and slightly lower specificity. In contrast, evidence is lacking on the accuracy of these assays when they are directly applied to sputum specimens. If phage-based assays can be directly used on clinical specimens and if they are shown to have high accuracy, they have the potential to improve the diagnosis of MDR-TB. However, before phage assays can be successfully used in routine practice, several concerns have to be addressed, including unexplained false positives in some studies, potential for contamination and indeterminate results.
Furiak, Nicolas M; Klein, Robert W; Kahle-Wrobleski, Kristin; Siemers, Eric R; Sarpong, Eric; Klein, Timothy M
2010-04-30
Alzheimer's Disease (AD) affects a growing proportion of the population each year. Novel therapies on the horizon may slow the progress of AD symptoms and avoid cases altogether. Initiating treatment for the underlying pathology of AD would ideally be based on biomarker screening tools identifying pre-symptomatic individuals. Early-stage modeling provides estimates of potential outcomes and informs policy development. A time-to-event (TTE) simulation provided estimates of screening asymptomatic patients in the general population age ≥55 and treatment impact on the number of patients reaching AD. Patients were followed from AD screen until all-cause death. Baseline sensitivity and specificity were 0.87 and 0.78, with treatment on positive screen. Treatment slowed progression by 50%. Events were scheduled using literature-based age-dependent incidences of AD and death. The base case results indicated increased AD free years (AD-FYs) through delays in onset and a reduction of 20 AD cases per 1000 screened individuals. Patients completely avoiding AD accounted for 61% of the incremental AD-FYs gained. Total years of treatment per 1000 screened patients was 2,611. The number-needed-to-screen was 51 and the number-needed-to-treat was 12 to avoid one case of AD. One-way sensitivity analysis indicated that duration of screening sensitivity and rescreen interval impact AD-FYs the most. A two-way sensitivity analysis found that for a test with an extended duration of sensitivity (15 years) the number of AD cases avoided was 6,000-7,000 cases for a test with higher sensitivity and specificity (0.90,0.90). This study yielded valuable parameter range estimates at an early stage in the study of screening for AD. Analysis identified duration of screening sensitivity as a key variable that may be unavailable from clinical trials.
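The headline screening ratios reported above can be back-computed from the abstract's event counts. This is only the arithmetic, not the time-to-event simulation itself; small rounding differences against the reported NNS of 51 are expected.

```python
# Back-compute the screening ratios from the reported event counts.
screened = 1000
cases_avoided = 20            # AD cases avoided per 1000 screened
treatment_years = 2611        # total years of treatment per 1000 screened

nns = screened / cases_avoided                      # number-needed-to-screen ~ 50
years_per_case = treatment_years / cases_avoided    # treatment burden per case avoided
print(f"NNS ~ {nns:.0f}; ~{years_per_case:.0f} treatment-years per AD case avoided")
```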
Zhang, Wenming; Zhu, Sha; Bai, Yunping; Xi, Ning; Wang, Shaoyang; Bian, Yang; Li, Xiaowei; Zhang, Yucang
2015-05-20
Temperature/pH dual-sensitive reed hemicellulose-based hydrogels were prepared through glow discharge electrolysis plasma (GDEP). The effect of different discharge voltages on the temperature and pH response of the reed hemicellulose-based hydrogels was investigated, and the formation mechanism and deswelling behavior of the hydrogels were also discussed. Infrared spectroscopy (FT-IR), differential scanning calorimetry (DSC) and scanning electron microscopy (SEM) were used to characterize the structure, phase-transition behavior and microstructure of the hydrogels. All of the reed hemicellulose-based hydrogels showed dual sensitivity to temperature and pH, their phase transition temperatures were all approximately 33 °C, and their deswelling dynamics followed a first-order model. In addition, the hydrogel prepared at a discharge voltage of 600 V (TPRH-3) was more sensitive to temperature and pH and had a higher deswelling ratio. Copyright © 2015 Elsevier Ltd. All rights reserved.
Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.
2007-01-01
To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
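The core mechanism described above (tolerances written directly into existing input fields, then sampled for Monte Carlo runs) can be sketched with a regular expression. The regex, the uniform sampling and the example input deck are all assumptions; the paper does not specify its parser or sampling distributions here.

```python
# Find "nominal +/- tolerance" fields in an input deck and emit Monte Carlo
# realizations by replacing each field with a random draw.
import re
import numpy as np

TOL = re.compile(
    r"(-?\d+\.?\d*(?:[eE][-+]?\d+)?)\s*\+/-\s*(\d+\.?\d*(?:[eE][-+]?\d+)?)"
)

def realize(text, rng):
    """Replace each 'nominal +/- tol' with a draw from [nominal-tol, nominal+tol]."""
    def draw(m):
        nominal, tol = float(m.group(1)), float(m.group(2))
        return repr(rng.uniform(nominal - tol, nominal + tol))
    return TOL.sub(draw, text)

deck = "wall_temperature = 5.25 +/- 0.01\nemissivity = 0.90 +/- 0.05\n"
rng = np.random.default_rng(5)
for _ in range(3):                 # three Monte Carlo input decks
    print(realize(deck, rng), end="")
```

Because the substitution works on raw text, the same code applies to any input format, which is the point of the natural-language approach: no parser for LAURA, HARA, or FIAT input structure is needed.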
Correia, Rodolfo Patussi; Bento, Laiz Cameirão; Bortolucci, Ana Carolina Apelle; Alexandre, Anderson Marega; Vaz, Andressa da Costa; Schimidell, Daniela; Pedro, Eduardo de Carvalho; Perin, Fabricio Simões; Nozawa, Sonia Tsukasa; Mendes, Cláudio Ernesto Albers; Barroso, Rodrigo de Souza; Bacal, Nydia Strachman
2016-01-01
Objective: To discuss the implementation of technical advances in laboratory diagnosis and monitoring of paroxysmal nocturnal hemoglobinuria for validation of high-sensitivity flow cytometry protocols. Methods: A retrospective study based on analysis of laboratory data from 745 patient samples submitted to flow cytometry for diagnosis and/or monitoring of paroxysmal nocturnal hemoglobinuria. Results: Implementation of technical advances reduced test costs and improved flow cytometry resolution for paroxysmal nocturnal hemoglobinuria clone detection. Conclusion: High-sensitivity flow cytometry allowed more sensitive determination of paroxysmal nocturnal hemoglobinuria clone type and size, particularly in samples with small clones.
Lucassen, Nicole; Tharner, Anne; Van Ijzendoorn, Marinus H; Bakermans-Kranenburg, Marian J; Volling, Brenda L; Verhulst, Frank C; Lambregtse-Van den Berg, Mijke P; Tiemeier, Henning
2011-12-01
For almost three decades, the association between paternal sensitivity and infant-father attachment security has been studied. The first wave of studies on the correlates of infant-father attachment showed a weak association between paternal sensitivity and infant-father attachment security (r = .13, p < .001, k = 8, N = 546). In the current paper, a meta-analysis of the association between paternal sensitivity and infant-father attachment based on all studies currently available is presented, and the change over time of the association between paternal sensitivity and infant-father attachment is investigated. Studies using an observational measure of paternal interactive behavior with the infant, and the Strange Situation Procedure to observe the attachment relationship were included. Paternal sensitivity is differentiated from paternal sensitivity combined with stimulation in the interaction with the infant. Higher levels of paternal sensitivity were associated with more infant-father attachment security (r = .12, p < .001, k = 16, N = 1,355). Fathers' sensitive play combined with stimulation was not more strongly associated with attachment security than sensitive interactions without stimulation of play. Despite possible changes in paternal role patterns, we did not find stronger associations between paternal sensitivity and infant attachment in more recent years.
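For readers unfamiliar with how study-level correlations such as r = .12 (k = 16, N = 1,355) are pooled, a minimal fixed-effect sketch via the Fisher z transform follows; the per-study values are invented, and the published analysis will have used more elaborate weighting.

```python
# Pool correlations: Fisher z transform, weight by n - 3, back-transform.
# The (r, n) pairs below are hypothetical illustration data.
import math

studies = [(0.10, 80), (0.15, 120), (0.08, 200), (0.18, 60)]

den = sum(n - 3 for _, n in studies)
z_bar = sum((n - 3) * math.atanh(r) for r, n in studies) / den
se = 1 / math.sqrt(den)
lo, hi = math.tanh(z_bar - 1.96 * se), math.tanh(z_bar + 1.96 * se)
print(f"pooled r = {math.tanh(z_bar):.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```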
Sensitivity analysis of the add-on price estimate for the silicon web growth process
NASA Technical Reports Server (NTRS)
Mokashi, A. R.
1981-01-01
The web growth process, a silicon-sheet technology option developed for the flat-plate solar array (FSA) project, was examined. Base-case data for the technical and cost parameters are projected for the technical and commercial readiness phases of the FSA project. The process add-on price is analyzed using the base-case data for cost parameters such as equipment, space, direct labor, materials, and utilities, and for production parameters such as growth rate and run length, with a computer program developed specifically to perform the sensitivity analysis with improved price estimation. Silicon price, sheet thickness, and cell efficiency are also discussed.
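A one-at-a-time sensitivity scan of a price model, the kind of analysis described, can be sketched as follows; the cost function and base-case numbers are stand-ins, not the FSA program's model.

```python
# Illustrative one-way sensitivity scan for a sheet-growth add-on price.
# All values below are invented placeholders.
base = dict(equip=1.0, labor=0.8, materials=0.6, utilities=0.2,
            growth_rate=1.0, run_length=1.0)

def addon_price(p):
    # cost pool diluted by throughput (growth rate x run length)
    return (p["equip"] + p["labor"] + p["materials"] + p["utilities"]) \
        / (p["growth_rate"] * p["run_length"])

p0 = addon_price(base)
for name in base:
    for factor in (0.8, 1.2):  # +/-20% one-at-a-time perturbation
        p = dict(base, **{name: base[name] * factor})
        change = 100 * (addon_price(p) / p0 - 1)
        print(f"{name:12s} x{factor:3.1f}: add-on price {change:+6.1f}%")
```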
NASA Astrophysics Data System (ADS)
Yamanari, Masahiro; Miura, Masahiro; Makita, Shuichi; Yatagai, Toyohiko; Yasuno, Yoshiaki
2007-02-01
Birefringence of the retinal nerve fiber layer is measured by polarization-sensitive spectral-domain optical coherence tomography using the B-scan-oriented polarization modulation method. Birefringence of the optical fiber and the cornea is compensated by Jones-matrix-based analysis. A three-dimensional phase retardation map around the optic nerve head and an en-face phase retardation map of the retinal nerve fiber layer are shown. Unlike scanning laser polarimetry, our system can measure the phase retardation quantitatively without using the bow-tie pattern of the birefringence in the macular region, which enables diagnosis of glaucoma even if the patient has macular disease.
Comparison between two methodologies for urban drainage decision aid.
Moura, P M; Baptista, M B; Barraud, S
2006-01-01
The objective of the present work is to compare two methodologies based on multicriteria analysis for the evaluation of stormwater systems. The first methodology, developed in Brazil, is based on performance-cost analysis; the second is ELECTRE III. Both methodologies were applied to a case study. Sensitivity and robustness analyses were then carried out. These analyses demonstrate that both methodologies give equivalent results and present low sensitivity and high robustness. These results show that the Brazilian methodology is consistent and can be used safely to select a good solution, or a small set of good solutions, that can then be compared with more detailed methods.
Quantifying uncertainty and sensitivity in sea ice models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urrego Blanco, Jorge Rolando; Hunke, Elizabeth Clare; Urban, Nathan Mark
The Los Alamos Sea Ice model has a number of input parameters for which accurate values are not always well established. We conduct a variance-based sensitivity analysis of hemispheric sea ice properties to 39 input parameters. The method accounts for non-linear and non-additive effects in the model.
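A first-order variance-based (Sobol') index of the sort described can be estimated with a standard pick-freeze scheme; the sketch below uses a three-parameter toy response in place of the 39-parameter sea ice model.

```python
# Pick-freeze (Saltelli-style) estimator of first-order Sobol' indices on a
# hypothetical 3-parameter model; the real study used 39 CICE parameters.
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    # hypothetical scalar response, e.g., mean hemispheric ice thickness
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

n, d = 100_000, 3
A, B = rng.random((n, d)), rng.random((n, d))
yA, yB = model(A), model(B)
var = yA.var()
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                         # resample only the i-th input
    Si = np.mean(yB * (model(ABi) - yA)) / var  # first-order index estimate
    print(f"S{i + 1} = {Si:.3f}")
```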
Kuberský, Petr; Altšmíd, Jakub; Hamáček, Aleš; Nešpůrek, Stanislav; Zmeškal, Oldřich
2015-01-01
A systematic study was carried out to investigate the effect of the ionic liquid in a solid polymer electrolyte (SPE), and of the SPE layer morphology, on the characteristics of an electrochemical amperometric nitrogen dioxide sensor. Five different ionic liquids were immobilized in the solid polymer electrolyte and key sensor parameters (sensitivity, response/recovery times, hysteresis, and limit of detection) were characterized. The study revealed that the sensor based on 1-ethyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide ([EMIM][N(Tf)2]) showed the best sensitivity, fast response/recovery times, and low sensor response hysteresis. The working electrode, deposited from a water-based carbon nanotube ink, was prepared by aerosol-jet printing technology. It was observed that the thermal treatment and crystallinity of the poly(vinylidene fluoride) (PVDF) in the solid polymer electrolyte influenced the sensitivity. Image analysis of the morphology of the SPE layer based on the [EMIM][N(Tf)2] ionic liquid, treated under different conditions, suggests that the sensor sensitivity strongly depends on the fractal dimension of the PVDF spherical objects in the SPE; their deformation, e.g., due to crowding, leads to a decrease in sensor sensitivity.
USEPA EXAMPLE EXIT LEVEL ANALYSIS RESULTS
Developed by NERL/ERD for the Office of Solid Waste, the enclosed product provides an example uncertainty analysis (UA) and initial process-based sensitivity analysis (SA) of hazardous waste "exit" concentrations for 7 chemicals and metals using the 3MRA Version 1.0 Modeling Syst...
NASA Astrophysics Data System (ADS)
Pollard, Thomas B
Recent advances in microbiology, computational capabilities, and microelectromechanical-system fabrication techniques permit modeling, design, and fabrication of low-cost, miniature, sensitive and selective liquid-phase sensors and lab-on-a-chip systems. Such devices are expected to replace expensive, time-consuming, and bulky laboratory-based testing equipment. Potential applications for the devices include: fluid characterization for material science and industry; chemical analysis in medicine and pharmacology; study of biological processes; food analysis; chemical kinetics analysis; and environmental monitoring. When combined with liquid-phase packaging, sensors based on surface-acoustic-wave (SAW) technology are considered strong candidates, and such devices are therefore the focus of this work, with emphasis on device modeling and packaging for liquid-phase operation. Regarding modeling, topics considered include mode excitation efficiency of transducers; mode sensitivity based on guiding structure materials/geometries; and use of new piezoelectric materials. On packaging, topics considered include package interfacing with SAW devices, and minimization of packaging effects on device performance. In this work, novel numerical models are theoretically developed and implemented to study propagation and transduction characteristics of sensor designs using wave/constitutive equations, Green's functions, and boundary/finite element methods. Using developed simulation tools that consider the finite thickness of all device electrodes, transduction efficiency for SAW transducers with neighboring uniform or periodic guiding electrodes is reported for the first time. Results indicate finite electrode thickness strongly affects efficiency. Using dense electrodes, efficiency is shown to approach 92% and 100% for uniform and periodic electrode guiding, respectively, yielding improved sensor detection limits. A numerical sensitivity analysis is presented targeting viscosity using uniform-electrode and shear-horizontal mode configurations on potassium-niobate, langasite, and quartz substrates. Optimum configurations are determined that yield maximum sensitivity. Results show mode propagation loss and sensitivity to viscosity are correlated by a factor independent of substrate material. The analysis is useful for designing devices meeting sensitivity and signal-level requirements. A novel, rapid, and precise microfluidic chamber alignment/bonding method was developed for SAW platforms. The package is shown to have little effect on device performance and permits simple macrofluidic interfacing. Lastly, prototypes were designed, fabricated, and tested for viscosity and biosensor applications; results show the ability to detect as little as 1% glycerol in water, as well as surface-bound DNA crosslinking.
Automatic Segmentation of Invasive Breast Carcinomas from DCE-MRI using Time Series Analysis
Jayender, Jagadaeesan; Chikarmane, Sona; Jolesz, Ferenc A.; Gombos, Eva
2014-08-01
To accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast-enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on the imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise, and fitting algorithms. We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist's segmentation and the output of a commercial software package, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured in terms of the Dice similarity coefficient (DSC). The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared with the radiologist's segmentation, and 82.1% accuracy and 100% sensitivity when compared with the CADstream output. The overlap of the algorithm output with the radiologist's segmentation and the CADstream output, computed in terms of the DSC, was 0.77 and 0.72, respectively. The algorithm also shows robust stability to imaging noise: with simulated imaging noise of zero mean and standard deviation equal to 25% of the base signal intensity added to the DCE-MRI series, the overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC = 0.95. The time-series analysis based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor from DCE-MRI.
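The Dice similarity coefficient used above to score overlap is straightforward to compute on binary masks; the masks here are toy examples.

```python
# Dice similarity coefficient (DSC) for two binary segmentation masks.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy masks standing in for algorithm output and radiologist segmentation.
algo = np.zeros((64, 64), dtype=bool); algo[20:40, 20:40] = True
ref = np.zeros((64, 64), dtype=bool); ref[22:42, 18:38] = True
print(f"DSC = {dice(algo, ref):.2f}")  # 1.0 would mean perfect overlap
```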
Hierarchical Nanogold Labels to Improve the Sensitivity of Lateral Flow Immunoassay
NASA Astrophysics Data System (ADS)
Serebrennikova, Kseniya; Samsonova, Jeanne; Osipov, Alexander
2018-06-01
Lateral flow immunoassay (LFIA) is a widely used express method and offers advantages such as short analysis time and simplicity of testing and result evaluation. However, LFIA based on gold nanospheres lacks the desired sensitivity, thereby limiting its wide application. In this study, spherical nanogold labels along with new types of nanogold labels such as gold nanopopcorns and nanostars were prepared, characterized, and applied for LFIA of the model protein antigen procalcitonin. It was found that a label with a structure close to spherical provided more uniform distribution of specific antibodies on its surface, indicative of its suitability for this type of analysis. LFIA using gold nanopopcorns as a label allowed procalcitonin detection over a linear range of 0.5-10 ng mL-1 with a limit of detection of 0.1 ng mL-1, fivefold better than the sensitivity of the assay with gold nanospheres. Another approach to improving the sensitivity of the assay was the silver enhancement method, which was used for comparison as an amplification of LFIA for procalcitonin detection. The sensitivity of procalcitonin determination by this method was 10 times better than that of the conventional LFIA with gold nanospheres as the label. The proposed LFIA based on gold nanopopcorns improved the detection sensitivity without additional steps and avoided increased consumption of specific reagents (antibodies).
NASA Astrophysics Data System (ADS)
Zhao, Jianhua; Zeng, Haishan; Kalia, Sunil; Lui, Harvey
2017-02-01
Background: Raman spectroscopy is a non-invasive optical technique which can measure molecular vibrational modes within tissue. A large-scale clinical study (n = 518) has demonstrated that real-time Raman spectroscopy could distinguish malignant from benign skin lesions with good diagnostic accuracy; this was validated by a follow-up independent study (n = 127). Objective: Most of the previous diagnostic algorithms have typically been based on analyzing the full band of the Raman spectra, either in the fingerprint or high wavenumber regions. Our objective in this presentation is to explore wavenumber-selection-based analysis in Raman spectroscopy for skin cancer diagnosis. Methods: A wavenumber selection algorithm was implemented using variably-sized wavenumber windows, which were determined by the correlation coefficient between wavenumbers. Wavenumber windows were chosen based on accumulated frequency from leave-one-out cross-validated stepwise regression or the least absolute shrinkage and selection operator (LASSO). The diagnostic algorithms were then generated from the selected wavenumber windows using multivariate statistical analyses, including principal component and general discriminant analysis (PC-GDA) and partial least squares (PLS). A total cohort of 645 confirmed lesions from 573 patients encompassing skin cancers, precancers, and benign skin lesions was included. Lesion measurements were divided into a training cohort (n = 518) and a testing cohort (n = 127) according to the measurement time. Results: The area under the receiver operating characteristic curve (ROC) improved from 0.861-0.891 to 0.891-0.911 and the diagnostic specificity for sensitivity levels of 0.99-0.90 increased respectively from 0.17-0.65 to 0.20-0.75 by selecting specific wavenumber windows for analysis. Conclusion: Wavenumber-selection-based analysis in Raman spectroscopy improves skin cancer diagnostic specificity at high sensitivity levels.
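As a hedged illustration of the LASSO step only (not the full pipeline of correlation-based windowing, cross-validated stepwise selection, and PC-GDA/PLS classification), the following selects a sparse set of informative spectral channels from synthetic data, assuming scikit-learn is available.

```python
# LASSO-style selection of informative "wavenumbers" from synthetic spectra.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 200, 500                       # 200 lesion spectra, 500 wavenumbers
X = rng.normal(size=(n, p))
w = np.zeros(p)
w[[50, 120, 300]] = [1.0, -0.8, 0.6]  # three truly informative bands
y = (X @ w + 0.5 * rng.normal(size=n) > 0).astype(float)  # 1 = malignant

fit = Lasso(alpha=0.02).fit(X, y)     # L1 penalty drives most coefs to zero
selected = np.flatnonzero(fit.coef_)
print(f"{selected.size} wavenumbers selected, e.g. indices {selected[:5]}")
```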
Accuracy analysis and design of A3 parallel spindle head
NASA Astrophysics Data System (ADS)
Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan
2016-03-01
As functional components of machine tools, parallel mechanisms are widely used in high-efficiency machining of aviation components, and accuracy is one of the critical technical indexes. Many researchers have focused on the accuracy problem of parallel mechanisms, but in terms of controlling the errors and improving the accuracy at the design and manufacturing stage, further efforts are required. Aiming at the accuracy design of a 3-DOF parallel spindle head (A3 head), its error model, sensitivity analysis, and tolerance allocation are investigated. Based on the inverse kinematic analysis, the error model of the A3 head is established by using first-order perturbation theory and the vector chain method. According to the mapping property of the motion and constraint Jacobian matrix, the compensatable and uncompensatable error sources which affect the accuracy of the end-effector are separated. Furthermore, sensitivity analysis is performed on the uncompensatable error sources. A sensitivity probabilistic model is established and a global sensitivity index is proposed to analyze the influence of the uncompensatable error sources on the accuracy of the end-effector of the mechanism. The results show that orientation error sources have a bigger effect on the accuracy of the end-effector. Based upon the sensitivity analysis results, the tolerance design is converted into a nonlinearly constrained optimization problem with minimum manufacturing cost as the objective. By utilizing a genetic algorithm, the allocation of the tolerances on each component is finally determined. According to the tolerance allocation results, the tolerance ranges of ten kinds of geometric error sources are obtained. These research achievements can provide fundamental guidelines for component manufacturing and assembly of this kind of parallel mechanism.
Kovalska, M P; Bürki, E; Schoetzau, A; Orguel, S F; Orguel, S; Grieshaber, M C
2011-04-01
The distinction of real progression from test variability in visual field (VF) series may be based on clinical judgment, on trend analysis based on follow-up of test parameters over time, or on identification of a significant change related to the mean of baseline exams (event analysis). The aim of this study was to compare a new population-based method (Octopus field analysis, OFA) with classic regression analyses and clinical judgment for detecting glaucomatous VF changes. 240 VF series of 240 patients with at least 9 consecutive examinations available were included in this study. They were independently classified by two experienced investigators. The results of this classification served as the reference for comparison for the following statistical tests: (a) t-test global, (b) r-test global, (c) regression analysis of 10 VF clusters, and (d) point-wise linear regression analysis. 32.5% of the VF series were classified as progressive by the investigators. The sensitivity and specificity were 89.7% and 92.0% for the r-test, and 73.1% and 93.8% for the t-test, respectively. In the point-wise linear regression analysis, the specificity was comparable (89.5% versus 92%), but the sensitivity was clearly lower than in the r-test (22.4% versus 89.7%) at a significance level of p = 0.01. A regression analysis for the 10 VF clusters showed a markedly higher sensitivity for the r-test (37.7%) than the t-test (14.1%) at a similar specificity (88.3% versus 93.8%) for a significant trend (p = 0.005). With regard to the cluster distribution, the paracentral clusters and the superior nasal hemifield progressed most frequently. The population-based regression analysis seems to be superior to trend analysis in detecting VF progression in glaucoma, and may eliminate the drawbacks of event analysis. Further, it may assist the clinician in the evaluation of VF series and may allow better visualization of the correlation between function and structure owing to the VF clusters.
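Point-wise linear regression, the trend analysis compared above, fits a slope per VF location and flags significant negative trends. A minimal sketch on simulated series (nine exams, 54 locations, SciPy assumed):

```python
# Point-wise linear regression progression test on simulated VF series.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_exams, n_points = 9, 54
t = np.arange(n_exams)
# Most locations stable around 30 dB; six locations lose 0.8 dB per exam.
sens = 30 + rng.normal(0, 1.5, size=(n_points, n_exams))
sens[:6] -= 0.8 * t

progressing = []
for i in range(n_points):
    slope, _, _, p, _ = stats.linregress(t, sens[i])
    if slope < 0 and p < 0.01:  # significance level used in the study
        progressing.append(i)
print(f"{len(progressing)} of {n_points} locations flagged as progressing")
```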
Study of node and mass sensitivity of resonant mode based cantilevers with concentrated mass loading
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Kewei, E-mail: drzkw@126.com; Chai, Yuesheng; Fu, Jiahui
2015-12-15
Resonant-mode based cantilevers are an important type of acoustic-wave-based mass-sensing device. In this work, the governing vibration equation of a bi-layer resonant-mode based cantilever with an attached concentrated mass is established by using a modal analysis method. The effects of resonance modes and mass loading conditions on the nodes and mass sensitivity of the cantilever were theoretically studied. The results suggested that the node did not shift when the concentrated mass was loaded at a specific position. Mass sensitivity of the cantilever was linearly proportional to the square of the point displacement at the mass loading position for all the resonance modes. For the first resonance mode, when the mass loading position x_c satisfied 0 < x_c < ~0.3l (where l is the cantilever beam length and 0 represents the rigid end), mass sensitivity decreased as the mass increased, while the opposite trend was obtained when the loading position satisfied ~0.3l ≤ x_c ≤ l. Mass sensitivity did not change when the concentrated mass was loaded at the rigid end. This work can provide scientific guidance for optimizing the mass sensitivity of a resonant-mode based cantilever.
Polarization sensitive spectroscopic optical coherence tomography for multimodal imaging
NASA Astrophysics Data System (ADS)
Strąkowski, Marcin R.; Kraszewski, Maciej; Strąkowska, Paulina; Trojanowski, Michał
2015-03-01
Optical coherence tomography (OCT) is a non-invasive method for 3D and cross-sectional imaging of biological and non-biological objects. OCT measurements are performed in a non-contact way that is absolutely safe for the tested sample. Nowadays, OCT is widely applied in medical diagnosis, especially in ophthalmology, as well as dermatology, oncology, and many more fields. Despite great progress in OCT measurements, a vast number of issues, like tissue recognition or imaging contrast enhancement, have not been solved yet. Here we present a polarization-sensitive spectroscopic OCT system (PS-SOCT). The PS-SOCT combines polarization-sensitive analysis with time-frequency analysis. Unlike standard polarization-sensitive OCT, the PS-SOCT delivers spectral information about the measured quantities, e.g., variation of the tested object's birefringence across the light spectrum. This solution overcomes the limits of the polarization-sensitive analysis applied in standard PS-OCT. Based on spectral data obtained from the PS-SOCT, the exact value of birefringence can be calculated even for objects that exhibit higher orders of retardation. In this contribution the benefits of combining time-frequency and polarization-sensitive analysis are presented, along with the PS-SOCT system features and OCT measurement examples.
NASA Astrophysics Data System (ADS)
Döpking, Sandra; Plaisance, Craig P.; Strobusch, Daniel; Reuter, Karsten; Scheurer, Christoph; Matera, Sebastian
2018-01-01
In the last decade, first-principles-based microkinetic modeling has been developed into an important tool for a mechanistic understanding of heterogeneous catalysis. A commonly known, but hitherto barely analyzed issue in this kind of modeling is the presence of sizable errors from the use of approximate Density Functional Theory (DFT). We here address the propagation of these errors to the catalytic turnover frequency (TOF) by global sensitivity and uncertainty analysis. Both analyses require the numerical quadrature of high-dimensional integrals. To achieve this efficiently, we utilize and extend an adaptive sparse grid approach and exploit the confinement of the strongly non-linear behavior of the TOF to local regions of the parameter space. We demonstrate the methodology on a model of the oxygen evolution reaction at the Co3O4 (110)-A surface, using a maximum entropy error model that imposes nothing but reasonable bounds on the errors. For this setting, the DFT errors lead to an absolute uncertainty of several orders of magnitude in the TOF. We nevertheless find that it is still possible to draw conclusions from such uncertain models about the atomistic aspects controlling the reactivity. A comparison with derivative-based local sensitivity analysis instead reveals that this more established approach provides incomplete information. Since the adaptive sparse grids allow for the evaluation of the integrals with only a modest number of function evaluations, this approach opens the way for a global sensitivity analysis of more complex models, for instance, models based on kinetic Monte Carlo simulations.
Hunter versus CIE color measurement systems for analysis of milk-based beverages.
Cheng, Ni; Barbano, David M; Drake, Mary Anne
2018-06-01
The objective of our work was to determine the differences in sensitivity of Hunter and International Commission on Illumination (CIE) methods at 2 different viewer angles (2 and 10°) for measurement of whiteness, red/green, and blue/yellow color of milk-based beverages over a range of composition. Sixty combinations of milk-based beverages were formulated (2 replicates) with a range of fat level from 0.2 to 2%, true protein level from 3 to 5%, and casein as a percent of true protein from 5 to 80% to provide a wide range of milk-based beverage color. In addition, commercial skim, 1 and 2% fat high-temperature, short-time pasteurized fluid milks were analyzed. All beverage formulations were HTST pasteurized and cooled to 4°C before analysis. Color measurement viewer angle (2 vs. 10°) had very little effect on objective color measures of milk-based beverages with a wide range of composition for either the Hunter or CIE color measurement system. Temperature (4, 20, and 50°C) of color measurement had a large effect on the results of color measurement in both the Hunter and CIE measurement systems. The effect of milk beverage temperature on color measurement results was the largest for skim milk and the least for 2% fat milk. This highlights the need for proper control of beverage serving temperature for sensory panel analysis of milk-based beverages with very low fat content and for control of milk temperature when doing objective color analysis for quality control in manufacture of milk-based beverages. The Hunter system of color measurement was more sensitive to differences in whiteness among milk-based beverages than the CIE system, whereas the CIE system was much more sensitive to differences in yellowness among milk-based beverages. There was little difference between the Hunter and CIE system in sensitivity to green/red color of milk-based beverages. In defining milk-based beverage product specifications for objective color measures for dairy product manufacturers, the viewer angle, color measurement system (CIE vs. Hunter), and sample measurement temperature should be specified along with type of illuminant. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
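One plausible way to see why the two scales weight differences differently is their nonlinearities: Hunter Lab uses a square root of the luminance ratio, CIELAB a cube root. The sketch below maps one set of XYZ tristimulus values through both, using illuminant C white-point constants; the milk XYZ values are invented.

```python
# Hunter Lab vs CIELAB from the same XYZ values (illuminant C, 2° observer).
import math

Xn, Yn, Zn = 98.07, 100.0, 118.22  # illuminant C white point

def hunter_lab(X, Y, Z):
    yr = Y / Yn
    L = 100 * math.sqrt(yr)                      # square-root compression
    a = 175 * (X / Xn - yr) / math.sqrt(yr)
    b = 70 * (yr - Z / Zn) / math.sqrt(yr)
    return L, a, b

def cielab(X, Y, Z):
    def f(t):                                    # cube-root compression
        d = 6 / 29
        return t ** (1 / 3) if t > d ** 3 else t / (3 * d * d) + 4 / 29
    fx, fy, fz = f(X / Xn), f(Y / Yn), f(Z / Zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

milk = (88.0, 91.0, 98.0)  # hypothetical XYZ for a milk sample
print("Hunter L,a,b :", ["%.1f" % v for v in hunter_lab(*milk)])
print("CIE L*,a*,b* :", ["%.1f" % v for v in cielab(*milk)])
```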
Temperature Compensation Fiber Bragg Grating Pressure Sensor Based on Plane Diaphragm
NASA Astrophysics Data System (ADS)
Liang, Minfu; Fang, Xinqiu; Ning, Yaosheng
2018-06-01
Pressure sensors are essential equipment in the field of pressure measurement. In this work, we propose a temperature-compensated fiber Bragg grating (FBG) pressure sensor based on a plane diaphragm. The plane diaphragm and a pressure-sensitive FBG (PS FBG) are used as the pressure-sensitive components, and a temperature-compensation FBG (TC FBG) is used to mitigate temperature cross-sensitivity. A mechanical deformation model and a simulation analysis of the deformation characteristics of the diaphragm are presented. The measurement principle and a theoretical analysis of the mathematical relationship between the FBG central wavelength shift and the pressure are introduced. The sensitivity and measurement range can be adjusted by utilizing different materials and sizes for the diaphragm to accommodate different measurement environments. Performance experiments were carried out, and the results indicate that the pressure sensitivity of the sensor is 35.7 pm/MPa over a range from 0 MPa to 50 MPa, with good linearity (linear-fit correlation coefficient of 99.95%). In addition, the sensor has the advantages of low frequency chirp and high stability, and can be used to measure pressure in mining engineering, civil engineering, or other complex environments.
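The reported calibration (35.7 pm/MPa with a correlation coefficient near 1) amounts to a linear fit of Bragg wavelength shift against pressure, as sketched below on synthetic data.

```python
# Linear calibration of an FBG pressure sensor: fit wavelength shift (pm)
# against pressure (MPa). The data points here are synthetic.
import numpy as np

pressure = np.linspace(0, 50, 11)  # MPa
shift = 35.7 * pressure + np.random.default_rng(5).normal(0, 20, 11)  # pm

slope, intercept = np.polyfit(pressure, shift, 1)
resid = shift - (slope * pressure + intercept)
r2 = 1 - resid.var() / shift.var()
print(f"sensitivity = {slope:.1f} pm/MPa, r^2 = {r2:.4f}")
```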
Sensitivity and Uncertainty Analysis of the GFR MOX Fuel Subassembly
NASA Astrophysics Data System (ADS)
Lüley, J.; Vrban, B.; Čerba, Š.; Haščík, J.; Nečas, V.; Pelloni, S.
2014-04-01
We performed sensitivity and uncertainty analysis as well as benchmark similarity assessment of the MOX fuel subassembly designed for the Gas-Cooled Fast Reactor (GFR) as a representative material of the core. Material composition was defined for each assembly ring separately allowing us to decompose the sensitivities not only for isotopes and reactions but also for spatial regions. This approach was confirmed by direct perturbation calculations for chosen materials and isotopes. Similarity assessment identified only ten partly comparable benchmark experiments that can be utilized in the field of GFR development. Based on the determined uncertainties, we also identified main contributors to the calculation bias.
NASA Astrophysics Data System (ADS)
Chen, Shichao; Zhu, Yizheng
2017-02-01
Sensitivity is a critical index of the temporal fluctuation of the retrieved optical pathlength in a quantitative phase imaging system. However, an accurate and comprehensive analysis for sensitivity evaluation is still lacking in the current literature. In particular, previous theoretical studies of fundamental sensitivity based on Gaussian noise models are not applicable to modern cameras and detectors, which are dominated by shot noise. In this paper, we derive two shot noise-limited theoretical sensitivities, the Cramér-Rao bound and the algorithmic sensitivity, for wavelength shifting interferometry, a major category of on-axis interferometry techniques in quantitative phase imaging. Based on the derivations, we show that the shot noise-limited model permits accurate estimation of theoretical sensitivities directly from measured data. These results provide important insights into fundamental constraints on system performance and can be used to guide system design and optimization. The same concepts can be generalized to other quantitative phase imaging techniques as well.
Jia, Yongliang; Zhang, Shikai; Huang, Fangyi; Leung, Siu-wai
2012-06-01
Ginseng-based medicines and nitrates are commonly used in treating ischemic heart disease (IHD) angina pectoris in China. Hundreds of randomized controlled trials (RCTs) reported in the Chinese language claimed that ginseng-based medicines can relieve the symptoms of IHD. This study provides the first PRISMA-compliant systematic review with sensitivity and subgroup analyses to evaluate the RCTs comparing the efficacies of ginseng-based medicines and nitrates in treating ischemic heart disease, particularly angina pectoris. Past RCTs published up to 2010 on ginseng versus nitrates in treating IHD for 14 or more days were retrieved from major English and Chinese databases, including PubMed, Science Direct, Cochrane Library, WangFang Data, and Chinese National Knowledge Infrastructure. The quality of the included RCTs was assessed with the Jadad scale, a refined Jadad scale called the M scale, the CONSORT 2010 checklist, and the Cochrane risk of bias tool. Meta-analysis was performed on the primary outcomes, including the improvement of symptoms and electrocardiography (ECG). Subgroup analysis, sensitivity analysis, and meta-regression were performed to evaluate the effects of study characteristics of the RCTs, including quality, follow-up periods, and efficacy definitions, on the overall effect size of ginseng. Eighteen RCTs with 1549 participants were included. Overall odds ratios for comparing ginseng-based medicines with nitrates were 3.00 (95% CI: 2.27-3.96) in symptom improvement (n = 18) and 1.61 (95% CI: 1.20-2.15) in ECG improvement (n = 10). Subgroup analysis, sensitivity analysis, and meta-regression found no significant difference in overall effects among all study characteristics, indicating that the overall effects were stable. The meta-analysis of 18 eligible RCTs demonstrates moderate evidence that ginseng is more effective than nitrates for treating angina pectoris. However, further RCTs of higher quality, with longer follow-up periods, larger sample sizes, and multi-center/multi-country designs are still required to verify the efficacy.
Design and simulation analysis of a novel pressure sensor based on graphene film
NASA Astrophysics Data System (ADS)
Nie, M.; Xia, Y. H.; Guo, A. Q.
2018-02-01
A novel pressure sensor structure based on graphene film as the sensitive membrane is proposed in this paper, addressing the problem of measuring low and minute pressures with high sensitivity. Moreover, the designed fabrication process is compatible with CMOS IC fabrication technology. Finite element analysis was used to simulate the displacement distribution of the thin movable graphene film of the designed pressure sensor under different pressures and for different film dimensions. From the simulation results, an optimized structure was obtained that can be applied in the low measurement range from 10 hPa to 60 hPa. The length and thickness of the graphene film could be designed as 100 μm and 0.2 μm, respectively. The maximum mechanical stress on the edge of the sensitive membrane was 1.84 kPa, far below the breaking strength of the silicon nitride and graphene film.
NASA Technical Reports Server (NTRS)
Lockwood, H. E.
1975-01-01
A color film with a sensitivity and color balance equal to SO-368, Kodak MS Ektachrome (Estar thin base) was required for use on the Apollo-Soyuz test project (ASTP). A Wratten 2A filter was required for use with the film to reduce short wavelength effects which frequently produce a blue color balance in aerial photographs. The background regarding a special emulsion which was produced with a 2A filter equivalent as an integral part of an SO-368 film manufactured by Eastman Kodak, the cost for production of the special film, and the results of a series of tests made within PTD to certify the film for ASTP use are documented. The tests conducted and documented were physical inspection, process compatibility, effective sensitivity, color balance, cross section analysis, resolution, spectral sensitivity, consistency of results, and picture sample analysis.
Li, Siying; Koch, Gary G; Preisser, John S; Lam, Diana; Sanchez-Kam, Matilde
2017-01-01
Dichotomous endpoints in clinical trials have only two possible outcomes, either directly or via categorization of an ordinal or continuous observation. It is common to have missing data for one or more visits during a multi-visit study. This paper presents a closed-form method for sensitivity analysis of a randomized multi-visit clinical trial that possibly has missing not at random (MNAR) dichotomous data. Counts of missing data are redistributed to the favorable and unfavorable outcomes mathematically to address possibly informative missing data. Adjusted proportion estimates and their closed-form covariance matrix estimates are provided. Treatment comparisons over time are addressed with Mantel-Haenszel adjustment for a stratification factor and/or randomization-based adjustment for baseline covariables. The application of such sensitivity analyses is illustrated with an example. An appendix outlines an extension of the methodology to ordinal endpoints.
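A toy version of the redistribution idea, one visit and one arm, makes the sensitivity parameter explicit; the paper's closed-form covariance machinery is not reproduced here.

```python
# MNAR sensitivity analysis in miniature: redistribute missing counts to
# favorable/unfavorable outcomes under an assumed (not estimable) success
# rate among the missing, and recompute the adjusted proportion.
def adjusted_proportion(n_fav, n_unfav, n_missing, p_fav_if_missing):
    n = n_fav + n_unfav + n_missing
    return (n_fav + p_fav_if_missing * n_missing) / n

n_fav, n_unfav, n_miss = 60, 30, 10  # hypothetical counts at one visit
for p in (0.0, 0.3, 0.6, 1.0):       # sweep spans the MNAR sensitivity range
    est = adjusted_proportion(n_fav, n_unfav, n_miss, p)
    print(f"assumed success among missing = {p:.1f} -> adjusted proportion = {est:.3f}")
```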
Wang, Heye; Dou, Peng; Lü, Chenchen; Liu, Zhen
2012-07-13
Erythropoietin (EPO) is an important glycoprotein hormone. Recombinant human EPO (rhEPO) is an important therapeutic drug and can also be used as a doping agent in sports. The analysis of EPO glycoforms in the pharmaceutical and sports areas greatly challenges analytical scientists in several respects, among which sensitive detection and effective, facile sample preparation are two essential issues. Herein, we investigated new possibilities for these two aspects. Deep-UV laser-induced fluorescence detection (deep UV-LIF) was established to detect the intrinsic fluorescence of EPO, while an immuno-magnetic-beads-based extraction (IMBE) was developed to specifically extract EPO glycoforms. Combined with capillary zone electrophoresis (CZE), CZE-deep UV-LIF allows high-resolution glycoform profiling with improved sensitivity. The detection sensitivity was improved by one order of magnitude compared with UV absorbance detection. An additional advantage is that the original glycoform distribution is completely preserved because no fluorescent labeling is needed. By combining IMBE with CZE-deep UV-LIF, the overall detection sensitivity was 1.5 × 10⁻⁸ mol/L, enhanced by two orders of magnitude relative to conventional CZE with UV absorbance detection. This is applicable to the analysis of pharmaceutical preparations of EPO, but the sensitivity is insufficient for anti-doping analysis of EPO in blood and urine. IMBE can be a straightforward and effective approach to sample preparation; however, antibodies with high specificity are key for application to urine samples because some urinary proteins can severely interfere with the immuno-extraction.
Image encryption based on a delayed fractional-order chaotic logistic system
NASA Astrophysics Data System (ADS)
Wang, Zhen; Huang, Xia; Li, Ning; Song, Xiao-Na
2012-05-01
A new image encryption scheme is proposed based on a delayed fractional-order chaotic logistic system. In the process of generating a key stream, the time-varying delay and fractional derivative are embedded in the proposed scheme to improve the security. Such a scheme is described in detail with security analyses including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. Experimental results show that the newly proposed image encryption scheme possesses high security.
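The key sensitivity test named above can be sketched as follows: encrypt the same image with two keys differing by 10^-12 and measure the fraction of differing cipher pixels (the NPCR statistic). A plain integer-order logistic map stands in here for the paper's delayed fractional-order system.

```python
# Key sensitivity check for a chaos-based stream cipher (illustrative only).
import numpy as np

def keystream(x0, n, mu=3.99):
    """Logistic-map keystream; x0 is the secret key (initial condition)."""
    x, out = x0, np.empty(n)
    for i in range(n):
        x = mu * x * (1 - x)
        out[i] = x
    return (out * 256).astype(np.uint8)  # mu/4 < 1 keeps values in [0, 256)

img = np.random.default_rng(7).integers(0, 256, 64 * 64, dtype=np.uint8)
c1 = img ^ keystream(0.345678901234, img.size)
c2 = img ^ keystream(0.345678901235, img.size)  # key changed by 1e-12
npcr = np.mean(c1 != c2)
print(f"NPCR = {100 * npcr:.2f}%  (near 99.6% indicates strong key sensitivity)")
```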
Design sensitivity analysis of rotorcraft airframe structures for vibration reduction
NASA Technical Reports Server (NTRS)
Murthy, T. Sreekanta
1987-01-01
Optimization of rotorcraft structures for vibration reduction was studied. The objective of this study is to develop practical computational procedures for structural optimization of airframes subject to steady-state vibration response constraints. One of the key elements of any such computational procedure is design sensitivity analysis. A method for design sensitivity analysis of airframes under vibration response constraints is presented. The mathematical formulation of the method and its implementation as a new solution sequence in MSC/NASTRAN are described. The results of the application of the method to a simple finite element stick model of the AH-1G helicopter airframe are presented and discussed. Selection of design variables that are most likely to bring about changes in the response at specified locations in the airframe is based on consideration of forced response strain energy. Sensitivity coefficients are determined for the selected design variable set. Constraints on the natural frequencies are also included in addition to the constraints on the steady-state response. Sensitivity coefficients for these constraints are determined. Results of the analysis and insights gained in applying the method to the airframe model are discussed. The general nature of future work to be conducted is described.
Fafin-Lefevre, Mélanie; Morlais, Fabrice; Guittet, Lydia; Clin, Bénédicte; Launoy, Guy; Galateau-Sallé, Françoise; Plancoulaine, Benoît; Herlin, Paulette; Letourneux, Marc
2011-08-01
To identify which morphologic or densitometric parameters are modified in cell nuclei from bronchopulmonary cancer, based on 18 parameters involving shape, intensity, chromatin, texture, and DNA content, and to develop a bronchopulmonary cancer screening method relying on analysis of sputum sample cell nuclei. A total of 25 sputum samples from controls and 22 bronchial aspiration samples from occupationally exposed patients presenting with bronchopulmonary cancer were used. After Feulgen staining, 18 morphologic and DNA content parameters were measured on cell nuclei via image cytometry. A method was developed for analyzing distribution quantiles, compared with simply interpreting mean values, to characterize morphologic modifications in cell nuclei. Distribution analysis of the parameters enabled us to distinguish 13 of 18 parameters that demonstrated significant differences between controls and cancer cases. These parameters, used alone, enabled us to distinguish the two population types, with both sensitivity and specificity > 70%. Three parameters offered 100% sensitivity and specificity. When mean values offered high sensitivity and specificity, comparable or higher values were observed for at least one of the corresponding quantiles. Analysis of modification in morphologic parameters via distribution analysis proved promising for screening bronchopulmonary cancer from sputum.
Uncertainty and Sensitivity Analysis of Afterbody Radiative Heating Predictions for Earth Entry
NASA Technical Reports Server (NTRS)
West, Thomas K., IV; Johnston, Christopher O.; Hosder, Serhat
2016-01-01
The objective of this work was to perform sensitivity analysis and uncertainty quantification for afterbody radiative heating predictions of the Stardust capsule during Earth entry at peak afterbody radiation conditions. The radiation environment in the afterbody region poses significant challenges for accurate uncertainty quantification and sensitivity analysis due to the complexity of the flow physics, computational cost, and large number of uncertain variables. In this study, first a sparse collocation non-intrusive polynomial chaos approach along with global non-linear sensitivity analysis was used to identify the most significant uncertain variables and reduce the dimensions of the stochastic problem. Then, a total order stochastic expansion was constructed over only the important parameters for an efficient and accurate estimate of the uncertainty in radiation. Based on previous work, 388 uncertain parameters were considered in the radiation model, arising from the thermodynamics, flow field chemistry, and radiation modeling. The sensitivity analysis showed that only four of these variables contributed significantly to afterbody radiation uncertainty, accounting for almost 95% of the uncertainty. These included the electronic-impact excitation rate for N between level 2 and level 5 and the rates of three chemical reactions influencing N, N(+), O, and O(+) number densities in the flow field.
Eigenvalue sensitivity analysis of planar frames with variable joint and support locations
NASA Technical Reports Server (NTRS)
Chuang, Ching H.; Hou, Gene J. W.
1991-01-01
Two sensitivity equations are derived in this study, based upon the continuum approach, for eigenvalue sensitivity analysis of planar frame structures with variable joint and support locations. A variational form of an eigenvalue equation is first derived in which all of the quantities are expressed in the local coordinate system attached to each member. The material derivative of this variational equation is then sought to account for changes in each member's length and orientation resulting from the perturbation of joint and support locations. Finally, eigenvalue sensitivity equations are formulated in either domain quantities (by the domain method) or boundary quantities (by the boundary method). It is concluded that the sensitivity equation derived by the boundary method is more efficient in computation but less accurate than that of the domain method. Nevertheless, both are superior in computational efficiency to the conventional direct differentiation method and the finite difference method.
Stochastic sensitivity measure for mistuned high-performance turbines
NASA Technical Reports Server (NTRS)
Murthy, Durbha V.; Pierre, Christophe
1992-01-01
A stochastic measure of sensitivity is developed in order to predict the effects of small random blade mistuning on the dynamic aeroelastic response of turbomachinery blade assemblies. This sensitivity measure is based solely on the nominal system design (i.e., on tuned-system information), which makes it extremely easy and inexpensive to calculate. The measure has the potential to become a valuable design tool that will enable designers to evaluate mistuning effects at a preliminary design stage and thus assess the need for a full mistuned-rotor analysis. The predictive capability of the sensitivity measure is illustrated by examining the effects of mistuning on the aeroelastic modes of the first stage of the oxidizer turbopump in the Space Shuttle Main Engine. Results from a full analysis of mistuned systems confirm that the simple stochastic sensitivity measure consistently predicts the drastic changes due to mistuning and the localization of aeroelastic vibration to a few blades.
Specialized Binary Analysis for Vetting Android APPS Using GUI Logic
2016-04-01
the use of high-level reasoning based on the GUI design logic of an app to enable a security analyst to diagnose and triage the potentially sensitive execution paths of an app. We have identified three levels of logical inconsistencies: event-level inconsistency, in which a sensitive operation (e.g., taking a picture) is not triggered by user action on a GUI component; layout-level inconsistency, in which a sensitive operation is triggered by ...
A Non-Intrusive Algorithm for Sensitivity Analysis of Chaotic Flow Simulations
NASA Technical Reports Server (NTRS)
Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris
2017-01-01
We demonstrate a novel algorithm for computing the sensitivity of statistics in chaotic flow simulations to parameter perturbations. The algorithm is non-intrusive but requires exposing an interface. Based on the principle of shadowing in dynamical systems, this algorithm is designed to reduce the effect of the sampling error in computing sensitivity of statistics in chaotic simulations. We compare the effectiveness of this method to that of the conventional finite difference method.
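To see the sampling-error problem the algorithm targets, consider the conventional baseline it is compared against: a finite difference of a finite-time average of a chaotic system. The Lorenz sketch below (forward Euler integration, with parameters, initial state, and averaging time chosen arbitrarily) illustrates how noisy that estimate is unless the averaging time is very long.

```python
# Conventional finite-difference sensitivity of a chaotic long-time average:
# d<z>/drho for the Lorenz system, the noisy baseline that shadowing-based
# methods aim to improve on.
import numpy as np

def mean_z(rho, T=2000.0, dt=0.01, sigma=10.0, beta=8.0 / 3.0):
    x, y, z = 1.0, 1.0, 25.0        # start near the attractor (no transient cut)
    n, acc = int(T / dt), 0.0
    for _ in range(n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        acc += z
    return acc / n

drho = 0.5
sens = (mean_z(28.0 + drho) - mean_z(28.0 - drho)) / (2 * drho)
print(f"d<z>/drho ~ {sens:.2f}  (roughly 1; estimate fluctuates with T)")
```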
2010-08-01
This study presents a methodology for computing stochastic sensitivities with respect to the design variables, which are random variables.
Space transportation architecture: Reliability sensitivities
NASA Technical Reports Server (NTRS)
Williams, A. M.
1992-01-01
A sensitivity analysis of the benefits and drawbacks associated with a proposed Earth-to-orbit vehicle architecture is given. The architecture represents a fleet of six vehicles (two existing, four proposed) that would be responsible for performing various missions as mandated by NASA and the U.S. Air Force. Each vehicle has a prescribed flight rate per year for a period of 31 years. By exposing this fleet of vehicles to a probabilistic environment where the fleet experiences failures, downtimes, setbacks, etc., the analysis determines the resiliency and costs associated with specific vehicle/subsystem reliabilities. The resources required were actual observed data on the failures and downtimes associated with existing vehicles, data based on engineering judgement for the proposed vehicles, and the development of a sensitivity analysis program.
NASA Astrophysics Data System (ADS)
Starodub, N. F.; Ogorodniichuk, J.; Lebedeva, T.; Shpylovyy, P.
2013-11-01
In this work we designed highly specific biosensors for Salmonella typhimurium detection based on surface plasmon resonance (SPR) and total internal reflection ellipsometry (TIRE). High selectivity and sensitivity of the analysis were demonstrated. As the registering part in our experiments, the Spreeta (USA) and "Plasmonotest" (Ukraine) SPR devices with flow cells were applied. Previous research confirmed the efficiency of SPR biosensors for detecting specific antigen-antibody interactions; therefore this type of reaction, with some preliminary preparation of the surface binding layer, was used as the reactive part. With the Spreeta device, the sensitivity was on the level of 10^3 - 10^7 cells/ml. The other SPR-based biosensor showed a sensitivity within 10^1 - 10^6 cells/ml. Maximal sensitivity, at the level of several cells in 10 ml (fewer than 5 cells), was obtained using the biosensor based on TIRE.
Jarujamrus, Purim; Meelapsom, Rattapol; Pencharee, Somkid; Obma, Apinya; Amatatongchai, Maliwan; Ditcharoen, Nadh; Chairam, Sanoe; Tamuang, Suparb
2018-01-01
A smartphone application, called CAnal, was developed as a colorimetric analyzer in paper-based devices for sensitive and selective determination of mercury(II) in water samples. Measurement on the double layer of a microfluidic paper-based analytical device (μPAD), fabricated by an alkyl ketene dimer (AKD) inkjet printing technique with a special design and doped with unmodified silver nanoparticles (AgNPs) on the detection zones, was performed by monitoring the gray intensity in the blue channel of the AgNPs, which disintegrate when exposed to mercury(II) on the μPAD. Under the optimized conditions, the developed approach showed high sensitivity, a low limit of detection (0.003 mg L-1, 3 × SD of blank/slope of the calibration curve), small sample volume uptake (2 × 2 μL), and short analysis time. The linear range of this technique was 0.01 to 10 mg L-1 (r2 = 0.993). Furthermore, practical analysis of various water samples was also demonstrated, with acceptable performance in agreement with data from cold vapor atomic absorption spectrophotometry (CV-AAS), a conventional method. The proposed technique allows rapid, simple (instant report of the final mercury(II) concentration via the smartphone display), sensitive, selective, and on-site analysis with high sample throughput (48 samples h-1, n = 3) of trace mercury(II) in water samples, suitable for end users who are unskilled in analyzing mercury(II).
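The quoted detection-limit convention (LOD = 3 × SD of the blank / calibration slope) is a one-liner once a calibration line is fitted; the gray-intensity data below are made up.

```python
# LOD from a linear calibration: LOD = 3 * SD_blank / slope.
import numpy as np

conc = np.array([0.01, 0.1, 0.5, 1.0, 5.0, 10.0])          # mg/L Hg(II)
signal = np.array([2.1, 8.9, 41.0, 80.5, 402.0, 795.0])    # blue-channel intensity
blank_sd = 0.8                                              # SD of replicate blanks

slope, intercept = np.polyfit(conc, signal, 1)
lod = 3 * blank_sd / slope
print(f"slope = {slope:.1f} intensity/(mg/L), LOD = {lod:.4f} mg/L")
```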
Maas, Miriam; van Roon, Annika; Dam-Deisz, Cecile; Opsteegh, Marieke; Massolo, Alessandro; Deksne, Gunita; Teunis, Peter; van der Giessen, Joke
2016-10-30
A new method, based on a magnetic capture based DNA extraction followed by qPCR, was developed for the detection of the zoonotic parasite Echinococcus multilocularis in definitive hosts. Latent class analysis was used to compare this new method with the currently used phenol-chloroform DNA extraction followed by single tube nested PCR. In total, 60 red foxes and coyotes from three different locations were tested with both molecular methods and the sedimentation and counting technique (SCT) or intestinal scraping technique (IST). Though based on a limited number of samples, it could be established that the magnetic capture based DNA extraction followed by qPCR showed sensitivity and specificity similar to the currently used phenol-chloroform DNA extraction followed by single tube nested PCR. All methods have a high specificity, as shown by Bayesian latent class analysis. Both molecular assays have higher sensitivities than the combined SCT and IST, though the uncertainties in the sensitivity estimates were wide for all assays tested. The magnetic capture based DNA extraction followed by qPCR has the advantage of not requiring hazardous chemicals, unlike the phenol-chloroform DNA extraction followed by single tube nested PCR. This supports replacing the phenol-chloroform DNA extraction followed by single tube nested PCR with the magnetic capture based DNA extraction followed by qPCR for molecular detection of E. multilocularis in definitive hosts.
Preoperative identification of a suspicious adnexal mass: a systematic review and meta-analysis.
Dodge, Jason E; Covens, Allan L; Lacchetti, Christina; Elit, Laurie M; Le, Tien; Devries-Aboud, Michaela; Fung-Kee-Fung, Michael
2012-07-01
To systematically review the existing literature in order to determine the optimal strategy for preoperative identification of the adnexal mass suspicious for ovarian cancer. A review of all systematic reviews and guidelines published between 1999 and 2009 was conducted as a first step. After the identification of a 2004 AHRQ systematic review on the topic, searches of MEDLINE for studies published since 2004 were also conducted to update and supplement the evidentiary base. A bivariate, random-effects meta-regression model was used to produce summary estimates of sensitivity and specificity and to plot summary ROC curves with 95% confidence regions. Four meta-analyses and 53 primary studies were included in this review. The diagnostic performance of each technology was compared and contrasted based on the summary data on sensitivity and specificity obtained from the meta-analysis. Results suggest that 3D ultrasonography has both higher sensitivity and higher specificity when compared with 2D ultrasound. Established morphological scoring systems also performed with respectable sensitivity and specificity, each with equivalent diagnostic competence. Explicit scoring systems did not perform as well as other diagnostic testing methods. Assessment of an adnexal mass by colour Doppler technology was neither as sensitive nor as specific as simple ultrasonography. Of the three imaging modalities considered, MRI appeared to perform the best, although results were not statistically different from CT. PET did not perform as well as either MRI or CT. The measurement of the CA-125 tumour marker appears to be less reliable than other available assessment methods. The best available evidence was collected and included in this rigorous systematic review and meta-analysis. The abundant evidentiary base provided context and direction for the diagnosis of early-stage ovarian cancer.
Gray, Ewan; Donten, Anna; Karssemeijer, Nico; van Gils, Carla; Evans, D Gareth; Astley, Sue; Payne, Katherine
2017-09-01
To identify the incremental costs and consequences of stratified national breast screening programs (stratified NBSPs) and drivers of relative cost-effectiveness. A decision-analytic model (discrete event simulation) was conceptualized to represent four stratified NBSPs (risk 1, risk 2, masking [supplemental screening for women with higher breast density], and masking and risk 1) compared with the current UK NBSP and no screening. The model assumed a lifetime horizon, the health service perspective to identify costs (£, 2015), and measured consequences in quality-adjusted life-years (QALYs). Multiple data sources were used: systematic reviews of effectiveness and utility, published studies reporting costs, and cohort studies embedded in existing NBSPs. Model parameter uncertainty was assessed using probabilistic sensitivity analysis and one-way sensitivity analysis. The base-case analysis, supported by probabilistic sensitivity analysis, suggested that the risk-stratified NBSPs (risk 1 and risk 2) were relatively cost-effective when compared with the current UK NBSP, with incremental cost-effectiveness ratios of £16,689 per QALY and £23,924 per QALY, respectively. Stratified NBSPs including masking approaches (supplemental screening for women with higher breast density) were not cost-effective alternatives, with incremental cost-effectiveness ratios of £212,947 per QALY (masking) and £75,254 per QALY (risk 1 and masking). When compared with no screening, all stratified NBSPs could be considered cost-effective. Key drivers of cost-effectiveness were discount rate, natural history model parameters, mammographic sensitivity, and biopsy rates for recalled cases. A key assumption was that the risk model used in the stratification process was perfectly calibrated to the population. This early model-based cost-effectiveness analysis provides indicative evidence for decision makers to understand the key drivers of costs and QALYs for exemplar stratified NBSPs. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Comparison of the efficiency control of mycotoxins by some optical immune biosensors
NASA Astrophysics Data System (ADS)
Slyshyk, N. F.; Starodub, N. F.
2013-11-01
The efficiency of patulin control was compared for two optical immune biosensors, one based on surface plasmon resonance (SPR) and one based on nanoporous silicon (nPS). In the latter case, the intensity of the immune reaction was registered by measuring the level of chemiluminescence (ChL) or the photocurrent of the nPS. The sensitivity of mycotoxin determination with the SPR-based immune biosensor was 0.05-10 mg/L; approximately the same sensitivity, and a similar overall analysis time, was demonstrated by the nPS-based immune biosensor. Nevertheless, the nPS biosensor was technically simpler and cheaper per analysis. The nPS-based immune biosensor is therefore recommended for wide screening applications, and the SPR biosensor for additional control or verification of preliminary results. In this article, special attention was given to the conditions of sample preparation, in particular mycotoxin extraction from potato and some juices. Moreover, the efficiency of the above-mentioned immune biosensors was compared with the traditional ELISA method of mycotoxin determination. From the investigation and discussion of the obtained data, it was concluded that both types of immune biosensor can fulfill modern practical demands with respect to sensitivity, rapidity, simplicity and cost of analysis.
Ghanegolmohammadi, Farzan; Yoshida, Mitsunori; Ohnuki, Shinsuke; Sukegawa, Yuko; Okada, Hiroki; Obara, Keisuke; Kihara, Akio; Suzuki, Kuninori; Kojima, Tetsuya; Yachie, Nozomu; Hirata, Dai; Ohya, Yoshikazu
2017-01-01
We investigated the global landscape of Ca2+ homeostasis in budding yeast based on high-dimensional chemical-genetic interaction profiles. The morphological responses of 62 Ca2+-sensitive (cls) mutants were quantitatively analyzed with the image processing program CalMorph after exposure to a high concentration of Ca2+. After a generalized linear model was applied, an analysis of covariance model was used to detect significant Ca2+–cls interactions. We found that high-dimensional, morphological Ca2+–cls interactions were mixed with positive (86%) and negative (14%) chemical-genetic interactions, whereas one-dimensional fitness Ca2+–cls interactions were all negative in principle. Clustering analysis with the interaction profiles revealed nine distinct gene groups, six of which were functionally associated. In addition, characterization of Ca2+–cls interactions revealed that morphology-based negative interactions are unique signatures of sensitized cellular processes and pathways. Principal component analysis was used to discriminate between suppression and enhancement of the Ca2+-sensitive phenotypes triggered by inactivation of calcineurin, a Ca2+-dependent phosphatase. Finally, similarity of the interaction profiles was used to reveal a connected network among the Ca2+ homeostasis units acting in different cellular compartments. Our analyses of high-dimensional chemical-genetic interaction profiles provide novel insights into the intracellular network of yeast Ca2+ homeostasis. PMID:28566553
Algorithm for automatic analysis of electro-oculographic data
2013-01-01
Background Large amounts of electro-oculographic (EOG) data, recorded during electroencephalographic (EEG) measurements, go underutilized. We present an automatic, auto-calibrating algorithm that allows efficient analysis of such data sets. Methods The auto-calibration is based on automatic threshold value estimation. Amplitude threshold values for saccades and blinks are determined based on features in the recorded signal. The performance of the developed algorithm was tested by analyzing 4854 saccades and 213 blinks recorded in two different conditions: a task where the eye movements were controlled (saccade task) and a task with free viewing (multitask). The results were compared with results from a video-oculography (VOG) device and manually scored blinks. Results The algorithm achieved 93% detection sensitivity for blinks with a 4% false positive rate. The detection sensitivity for horizontal saccades was between 98% and 100%, and for oblique saccades between 95% and 100%. The classification sensitivity for horizontal and large oblique saccades (10 deg) was larger than 89%, and for vertical saccades larger than 82%. The duration and peak velocities of the detected horizontal saccades were similar to those in the literature. In the multitask measurement the detection sensitivity for saccades was 97% with a 6% false positive rate. Conclusion The developed algorithm enables reliable analysis of EOG data recorded both during EEG and as a separate measurement. PMID:24160372
Algorithm for automatic analysis of electro-oculographic data.
Pettersson, Kati; Jagadeesan, Sharman; Lukander, Kristian; Henelius, Andreas; Haeggström, Edward; Müller, Kiti
2013-10-25
Large amounts of electro-oculographic (EOG) data, recorded during electroencephalographic (EEG) measurements, go underutilized. We present an automatic, auto-calibrating algorithm that allows efficient analysis of such data sets. The auto-calibration is based on automatic threshold value estimation. Amplitude threshold values for saccades and blinks are determined based on features in the recorded signal. The performance of the developed algorithm was tested by analyzing 4854 saccades and 213 blinks recorded in two different conditions: a task where the eye movements were controlled (saccade task) and a task with free viewing (multitask). The results were compared with results from a video-oculography (VOG) device and manually scored blinks. The algorithm achieved 93% detection sensitivity for blinks with a 4% false positive rate. The detection sensitivity for horizontal saccades was between 98% and 100%, and for oblique saccades between 95% and 100%. The classification sensitivity for horizontal and large oblique saccades (10 deg) was larger than 89%, and for vertical saccades larger than 82%. The duration and peak velocities of the detected horizontal saccades were similar to those in the literature. In the multitask measurement the detection sensitivity for saccades was 97% with a 6% false positive rate. The developed algorithm enables reliable analysis of EOG data recorded both during EEG and as a separate measurement.
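Since both records above describe the same threshold-based detection scheme, a compact illustration may help. The sketch below is a minimal, hypothetical version of amplitude-threshold event detection on an EOG trace with an auto-calibrated (noise-derived) threshold; the MAD-based threshold rule, sampling rate, and synthetic signal are illustrative assumptions, not the authors' exact algorithm.

```python
# Minimal sketch: auto-calibrated threshold detection of EOG events.
import numpy as np

def detect_events(eog, fs, k=3.0):
    """Flag samples whose slope exceeds k times a robust noise estimate."""
    velocity = np.gradient(eog) * fs              # deg/s if eog is in degrees
    sigma = np.median(np.abs(velocity)) / 0.6745  # robust sigma via MAD
    active = np.abs(velocity) > k * sigma         # auto-calibrated threshold
    # Group consecutive supra-threshold samples into (start, end) events
    edges = np.flatnonzero(np.diff(active.astype(int)))
    return list(zip(edges[::2], edges[1::2]))

fs = 500.0
t = np.arange(0, 2, 1 / fs)
eog = np.where(t > 1.0, 10.0, 0.0) + 0.1 * np.random.randn(t.size)  # one 10-deg "saccade"
print(detect_events(eog, fs))
```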
Sulcal depth-based cortical shape analysis in normal healthy control and schizophrenia groups
NASA Astrophysics Data System (ADS)
Lyu, Ilwoo; Kang, Hakmook; Woodward, Neil D.; Landman, Bennett A.
2018-03-01
Sulcal depth is an important marker of brain anatomy in neuroscience and neurological function. Previously, sulcal depth has been explored at the region-of-interest (ROI) level to increase statistical sensitivity to group differences. In this paper, we present a fully automated method that enables inferences of ROI properties from a sulcal-region-focused perspective, consisting of two main components: 1) sulcal depth computation and 2) sulcal curve-based refined ROIs. In conventional statistical analysis, average sulcal depth measurements are employed in several ROIs of the cortical surface. However, taking the average sulcal depth over the full ROI blurs the overall sulcal depth measurement, which may reduce sensitivity to sulcal depth changes in neurological and psychiatric disorders. To overcome this blurring effect, we focus on the sulcal fundic regions in each ROI by filtering out gyral regions. Consequently, the proposed method is more sensitive to group differences than a traditional ROI approach. In the experiment, we applied cortical morphological analysis to sulcal depth reduction in schizophrenia, with a comparison to a normal healthy control group. We show that the proposed method is more sensitive to abnormalities of sulcal depth in schizophrenia; sulcal depth is significantly smaller in most cortical lobes in schizophrenia compared to healthy controls (p < 0.05).
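The core ROI-refinement idea can be shown in a few lines: summarize sulcal depth over an ROI's deepest vertices only, so shallow gyral vertices do not dilute the measurement. The percentile rule below is a stand-in assumption for the paper's sulcal-curve-based refinement, and the vertex data are synthetic.

```python
# Minimal sketch: full-ROI vs. fundus-restricted sulcal depth summaries.
import numpy as np

def roi_depth(depth, labels, roi_id, fundus_pct=75.0):
    """Mean depth over the whole ROI vs. over its deepest (fundic) vertices."""
    d = depth[labels == roi_id]
    fundic = d[d >= np.percentile(d, fundus_pct)]   # keep deepest quartile
    return d.mean(), fundic.mean()

rng = np.random.default_rng(0)
labels = np.zeros(5000, dtype=int)                  # one ROI covering all vertices
depth = np.abs(rng.normal(8.0, 4.0, size=5000))     # mm; gyri shallow, fundi deep
print("full-ROI mean: %.1f mm, fundus-restricted mean: %.1f mm"
      % roi_depth(depth, labels, 0))
```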
A special protection scheme utilizing trajectory sensitivity analysis in power transmission
NASA Astrophysics Data System (ADS)
Suriyamongkol, Dan
In recent years, new measurement techniques have provided opportunities to improve the observability, control and protection of the North American power system. This dissertation discusses the formulation and design of a special protection scheme based on a novel utilization of trajectory sensitivity techniques, with inputs consisting of system state variables and parameters. Trajectory sensitivity analysis (TSA) has been used in previous publications as a method for power system security and stability assessment, and its mathematical formulation lends itself well to some time-domain power system simulation techniques. Existing special protection schemes often have limited sets of goals and control actions. The proposed scheme aims to maintain stability while using as many control actions as possible. The approach is novel in that the sensitivities of system state variables with respect to parameter variations are used to determine the parameter controls required to achieve the desired state variable movements. The initial application assumes that the modeled power system has full system observability; practical considerations are also discussed.
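To make the idea concrete, the sketch below computes trajectory sensitivities for a toy single-machine swing equation by central finite differences on the simulated trajectory. The model, its parameter values, and the finite-difference approach are illustrative assumptions; the dissertation's actual formulation is not reproduced here.

```python
# Minimal sketch: finite-difference trajectory sensitivities of a swing equation.
import numpy as np
from scipy.integrate import solve_ivp

def swing(t, x, p):
    """Classical single-machine swing model: x = [rotor angle, speed deviation]."""
    delta, omega = x
    Pm, Pe_max, D, M = p
    return [omega, (Pm - Pe_max * np.sin(delta) - D * omega) / M]

def trajectory_sensitivity(p, dp_rel=1e-4, x0=(0.5, 0.0), t_end=5.0):
    """d(delta(t))/d(p_i), approximated by central differences on the trajectory."""
    t_eval = np.linspace(0.0, t_end, 200)
    sens = []
    for i in range(len(p)):
        q_hi, q_lo = list(p), list(p)
        q_hi[i] *= 1 + dp_rel
        q_lo[i] *= 1 - dp_rel
        hi = solve_ivp(swing, (0, t_end), x0, args=(q_hi,), t_eval=t_eval).y[0]
        lo = solve_ivp(swing, (0, t_end), x0, args=(q_lo,), t_eval=t_eval).y[0]
        sens.append((hi - lo) / (2 * dp_rel * p[i]))
    return t_eval, np.array(sens)

t, S = trajectory_sensitivity([1.0, 2.0, 0.1, 5.0])   # Pm, Pe_max, D, M (toy values)
print(S.shape)   # (4, 200): angle sensitivity to each parameter over time
```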
Higashi, Tatsuya; Ogawa, Shoujiro
2016-09-01
Sensitive and specific methods for the detection, characterization and quantification of endogenous steroids in body fluids or tissues are necessary for the diagnosis, pathological analysis and treatment of many diseases. Recently, liquid chromatography/electrospray ionization-tandem mass spectrometry (LC/ESI-MS/MS) has been widely used for these purposes due to its specificity and versatility. However, the ESI efficiency and fragmentation behavior of some steroids are poor, which leads to low sensitivity. Chemical derivatization is one of the most effective ways to improve the detection characteristics of steroids in ESI-MS/MS. Against this background, this article reviews recent advances in chemical derivatization for the trace quantification of steroids in biological samples by LC/ESI-MS/MS. Derivatization for ESI-MS/MS is based on tagging a proton-affinitive or permanently charged moiety onto the target steroid. Introduction or formation of a fragmentable moiety suitable for selected reaction monitoring by the derivatization also enhances sensitivity. Stable isotope-coded derivatization procedures for steroid analysis are also described. Copyright © 2015 Elsevier Ltd. All rights reserved.
Han, Daehoon; Hong, Jinkee; Kim, Hyun Cheol; Sung, Jong Hwan; Lee, Jong Bum
2013-11-01
Many highly sensitive protein detection techniques have been developed and have played an important role in the analysis of proteins. Herein, we report a novel technique that can detect proteins sensitively and effectively using aptamer-based DNA nanostructures. Thrombin was used as a target protein and aptamer was used to capture fluorescent dye-labeled DNA nanobarcodes or thrombin on a microsphere. The captured DNA nanobarcodes were replaced by a thrombin and aptamer interaction. The detection ability of this approach was confirmed by flow cytometry with different concentrations of thrombin. Our detection method has great potential for rapid and simple protein detection with a variety of aptamers.
Theoretical study of surface plasmon resonance sensors based on 2D bimetallic alloy grating
NASA Astrophysics Data System (ADS)
Dhibi, Abdelhak; Khemiri, Mehdi; Oumezzine, Mohamed
2016-11-01
A surface plasmon resonance (SPR) sensor based on a 2D alloy grating with high performance is proposed. The grating consists of homogeneous alloys of formula MxAg1-x, where M is gold, copper, platinum or palladium. Compared to SPR sensors based on a pure metal, the angular-interrogation sensor with silver exhibits a sharper reflectivity dip (i.e. a larger depth-to-width ratio), which provides high detection accuracy, whereas the sensor based on gold exhibits the broadest dips and the highest sensitivity. The detection accuracy of the SPR sensor based on a metal alloy is enhanced by increasing the silver composition. In addition, a silver composition of around 0.8 improves both the sensitivity and the quality of the SPR sensor relative to a pure metal. Numerical simulations based on rigorous coupled wave analysis (RCWA) show that the sensor based on a metal alloy not only has high sensitivity and high detection accuracy, but also exhibits good linearity and good quality.
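The two figures of merit used above are easy to compute from a reflectivity curve. The sketch below evaluates the resonance angle, the depth-to-width ratio (detection accuracy), and the angular sensitivity from synthetic Lorentzian dips; the dip shapes and refractive-index values are toy assumptions, not RCWA output.

```python
# Minimal sketch: SPR dip metrics from reflectivity-vs-angle curves.
import numpy as np

def lorentz_dip(theta, theta0, width, depth):
    return 1.0 - depth / (1.0 + ((theta - theta0) / width) ** 2)

def dip_metrics(theta, R):
    """Resonance angle, dip depth, and depth-to-width ratio (detection accuracy)."""
    i = np.argmin(R)
    depth = R.max() - R[i]
    below = np.flatnonzero(R <= R[i] + depth / 2.0)   # samples below half depth
    fwhm = theta[below[-1]] - theta[below[0]]
    return theta[i], depth, depth / fwhm

theta = np.linspace(40.0, 50.0, 2000)          # incidence angle, degrees
R1 = lorentz_dip(theta, 44.0, 0.30, 0.85)      # analyte n = 1.33 (toy)
R2 = lorentz_dip(theta, 44.6, 0.30, 0.85)      # analyte n = 1.34 (toy)
th1, _, acc = dip_metrics(theta, R1)
th2, _, _ = dip_metrics(theta, R2)
print(f"sensitivity = {(th2 - th1) / 0.01:.0f} deg/RIU, "
      f"detection accuracy = {acc:.2f} /deg")
```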
NASA Astrophysics Data System (ADS)
Lu, Qianbo; Bai, Jian; Wang, Kaiwei; Lou, Shuqi; Jiao, Xufen; Han, Dandan
2016-10-01
Cross-sensitivity is a crucial parameter since it detrimentally affects the performance of an accelerometer, especially a high-resolution accelerometer. In this paper, a suite of analytical and finite-element-method (FEM) models is presented for characterizing the mechanism and features of the cross-sensitivity of a single-axis MOEMS accelerometer composed of a diffraction grating and a micromachined mechanical sensing chip, which have not yet been systematically investigated. The mechanism and phenomena of the cross-sensitivity of this type of grating-based MOEMS accelerometer differ considerably from those of traditional accelerometers owing to its distinct sensing principle. By analyzing the models, ameliorations and a modified design are put forward to suppress the cross-sensitivity. The modified design, achieved by double-sided etching of a specific double-substrate-layer silicon-on-insulator (SOI) wafer, is validated to have far smaller cross-sensitivity than the design previously reported in the literature. Moreover, this design suppresses the cross-sensitivity dramatically without compromising the acceleration sensitivity and resolution.
Alsaqa'aby, Mai F; Vaidya, Varun; Khreis, Noura; Khairallah, Thamer Al; Al-Jedai, Ahmed H
2017-01-01
Promising clinical and humanistic outcomes are associated with the use of new oral agents in the treatment of relapsing-remitting multiple sclerosis (RRMS). This is the first cost-effectiveness study comparing these medications in Saudi Arabia. We aimed to compare the cost-effectiveness of fingolimod, teriflunomide, dimethyl fumarate, and interferon (IFN)-β1a products (Avonex and Rebif) as first-line therapies in the treatment of patients with RRMS from a Saudi payer perspective. The study design was a cohort simulation (Markov) model set in a tertiary care hospital. A hypothetical cohort of 1000 Saudi RRMS patients was assumed to enter the Markov model with a time horizon of 20 years and an annual cycle length. The model was developed based on the Expanded Disability Status Scale (EDSS) to evaluate the cost-effectiveness of the five disease-modifying drugs (DMDs) from a healthcare system perspective. Data on EDSS progression and relapse rates were obtained from the literature; cost data were obtained from King Faisal Specialist Hospital and Research Centre, Riyadh, Saudi Arabia. Results were expressed as incremental cost-effectiveness ratios (ICERs) and net monetary benefits (NMB) in Saudi Riyals and converted to equivalent $US. The base-case willingness-to-pay (WTP) threshold was assumed to be $100,000 (SAR 375,000). One-way sensitivity analysis and probabilistic sensitivity analysis were conducted to test the robustness of the model. The main outcome measures were ICERs and NMB. The base-case analysis showed Rebif as the optimal therapy at a WTP threshold of $100,000. Avonex had the lowest ICER value, $337,282/QALY, when compared with Rebif. One-way sensitivity analysis demonstrated that the results were sensitive to the utility weights of health states three and four and to the cost of Rebif. None of the DMDs were found to be cost-effective in the treatment of RRMS at a WTP threshold of $100,000 in this analysis; the DMDs would only be cost-effective at a WTP above $300,000. The current analysis did not reflect the Saudi population's preferences in the valuation of health states and did not consider the societal perspective in terms of cost.
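To illustrate the mechanics behind ICER and NMB figures like those above, the sketch below runs a deliberately tiny three-state Markov cohort model for two hypothetical strategies. All states, transition probabilities, costs, and utilities are invented for illustration and bear no relation to the study's EDSS-based model.

```python
# Minimal sketch: discounted Markov cohort model, ICER, and NMB.
import numpy as np

def run_markov(P, cost, utility, years=20, disc=0.03):
    """Discounted cost and QALY totals for one strategy (annual cycles)."""
    dist = np.array([1.0, 0.0, 0.0])          # cohort starts in state 0
    total_cost = total_qaly = 0.0
    for t in range(years):
        df = 1.0 / (1 + disc) ** t
        total_cost += df * dist @ cost
        total_qaly += df * dist @ utility
        dist = dist @ P                        # advance one annual cycle
    return total_cost, total_qaly

# Hypothetical states: stable / progressed / dead; drug B slows progression
P_a = np.array([[0.90, 0.08, 0.02], [0.0, 0.92, 0.08], [0.0, 0.0, 1.0]])
P_b = np.array([[0.94, 0.04, 0.02], [0.0, 0.92, 0.08], [0.0, 0.0, 1.0]])
cost_a = np.array([5e3, 2e4, 0.0])             # annual costs, drug A arm
cost_b = np.array([3e4, 2e4, 0.0])             # drug B costs more per year
u = np.array([0.85, 0.55, 0.0])                # state utilities

ca, qa = run_markov(P_a, cost_a, u)
cb, qb = run_markov(P_b, cost_b, u)
icer = (cb - ca) / (qb - qa)
nmb = 100_000 * (qb - qa) - (cb - ca)          # NMB at WTP = $100k/QALY
print(f"ICER = ${icer:,.0f}/QALY, NMB = ${nmb:,.0f}")
```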
Davenport, Tracey A; Burns, Jane M; Hickie, Ian B
2017-01-01
Background Web-based self-report surveying has increased in popularity, as it can rapidly yield large samples at a low cost. Despite this increase in popularity, in the area of youth mental health there is a distinct lack of research comparing the results of Web-based self-report surveys with the more traditional and widely accepted computer-assisted telephone interviewing (CATI). Objective The Second Australian Young and Well National Survey 2014 sought to compare differences in respondent response patterns using matched items on CATI versus a Web-based self-report survey. The aim of this study was to examine whether responses varied as a result of item sensitivity, that is, the item's susceptibility to exaggeration or underreporting, and to assess whether certain subgroups demonstrated this effect to a greater extent. Methods A subsample of young people aged 16 to 25 years (N=101), recruited through the Second Australian Young and Well National Survey 2014, completed identical items on two occasions: via CATI and via Web-based self-report survey. Respondents also rated perceived item sensitivity. Results When comparing CATI with the Web-based self-report survey, a Wilcoxon signed-rank analysis showed that respondents answered 14 of the 42 matched items in a significantly different way. Significant variation in responses (CATI vs Web-based) was more frequent if the item was also rated by respondents as highly sensitive in nature. Specifically, 63% (5/8) of the high-sensitivity items, 43% (3/7) of the neutral-sensitivity items, and 0% (0/4) of the low-sensitivity items were answered in a significantly different manner by respondents when comparing their matched CATI and Web-based question responses. The items that were perceived as highly sensitive by respondents and demonstrated response variability included the following: sexting activities, body image concerns, experience of diagnosis, and suicidal ideation. For high-sensitivity items, a regression analysis showed respondents who were male (beta=−.19, P=.048) or who were not in employment, education, or training (NEET; beta=−.32, P=.001) were significantly more likely to provide different responses on matched items when responding in the CATI as compared with the Web-based self-report survey. The Web-based self-report survey, however, demonstrated some evidence of avidity and attrition bias. Conclusions Compared with CATI, Web-based self-report surveys are highly cost-effective and had higher rates of self-disclosure on sensitive items, particularly for respondents who identify as male and NEET. A drawback of Web-based surveying methodologies, however, is the limited control over avidity bias and the greater incidence of attrition bias. These findings have important implications for the further development of survey methods in the area of health and well-being, especially when considering research topics (in this case diagnosis, suicidal ideation, sexting, and body image) and the groups being recruited (young people, males, and NEET). PMID:28951382
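A matched-item comparison of this kind reduces to a paired nonparametric test. The sketch below runs a Wilcoxon signed-rank test on synthetic paired Likert responses; the data, scale, and seed are illustrative, not the survey's.

```python
# Minimal sketch: Wilcoxon signed-rank test on matched CATI vs. Web responses.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
cati = rng.integers(1, 6, size=101)                         # 1-5 responses by phone
web = np.clip(cati + rng.integers(-1, 2, size=101), 1, 5)   # same item online
stat, p = wilcoxon(cati, web, zero_method="wilcox")         # zero diffs dropped
print(f"W = {stat}, p = {p:.3f}")
```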
Designing novel cellulase systems through agent-based modeling and global sensitivity analysis.
Apte, Advait A; Senger, Ryan S; Fong, Stephen S
2014-01-01
Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement.
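For the global sensitivity analysis step, a variance-based workflow can be sketched with the SALib Python package (an assumption: the study does not state its GSA implementation). The toy "hydrolysis yield" surrogate and the parameter bounds below are purely illustrative stand-ins for the agent-based model's outputs.

```python
# Minimal sketch: Sobol indices for a toy hydrolysis-yield surrogate (needs SALib).
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["half_life_h", "exo_activity", "frac_exo"],
    "bounds": [[10, 100], [0.5, 5.0], [0.1, 0.9]],
}

def yield_model(X):
    """Hypothetical yield: saturating in activity, linear in half-life."""
    hl, act, frac = X.T
    return hl / 100.0 * (act / (act + 1.0)) * (1.0 - (frac - 0.5) ** 2)

X = saltelli.sample(problem, 1024)      # N*(2D+2) model evaluations
Si = sobol.analyze(problem, yield_model(X))
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: first-order S1={s1:.2f}, total-order ST={st:.2f}")
```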
BEATBOX v1.0: Background Error Analysis Testbed with Box Models
NASA Astrophysics Data System (ADS)
Knote, Christoph; Barré, Jérôme; Eckl, Max
2018-02-01
The Background Error Analysis Testbed (BEATBOX) is a new data assimilation framework for box models. Based on the BOX Model eXtension (BOXMOX) to the Kinetic Pre-Processor (KPP), this framework allows users to conduct performance evaluations of data assimilation experiments, sensitivity analyses, and detailed chemical scheme diagnostics from an observation simulation system experiment (OSSE) point of view. The BEATBOX framework incorporates an observation simulator and a data assimilation system with the possibility of choosing ensemble, adjoint, or combined sensitivities. A user-friendly, Python-based interface allows for the tuning of many parameters for atmospheric chemistry and data assimilation research as well as for educational purposes, for example observation error, model covariances, ensemble size, perturbation distribution in the initial conditions, and so on. In this work, the testbed is described and two case studies are presented to illustrate the design of a typical OSSE experiment, data assimilation experiments, a sensitivity analysis, and a method for diagnosing model errors. BEATBOX is released as an open source tool for the atmospheric chemistry and data assimilation communities.
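As a flavor of the data assimilation component, the sketch below performs a single stochastic ensemble Kalman filter update on a two-species box-model state. The species names, ensemble size, and observation error are illustrative assumptions; BEATBOX's actual interface and its adjoint or combined sensitivity options are not reproduced.

```python
# Minimal sketch: one stochastic EnKF update for a box-model state vector.
import numpy as np

def enkf_update(ens, y_obs, H, r_obs, rng):
    """ens: (n_state, n_members); H: obs operator; r_obs: obs error variance."""
    n_obs, n_mem = H.shape[0], ens.shape[1]
    anomalies = ens - ens.mean(axis=1, keepdims=True)
    P = anomalies @ anomalies.T / (n_mem - 1)            # ensemble covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + r_obs * np.eye(n_obs))
    perturbed = y_obs[:, None] + rng.normal(0, np.sqrt(r_obs), (n_obs, n_mem))
    return ens + K @ (perturbed - H @ ens)               # analysis ensemble

rng = np.random.default_rng(1)
ens = rng.normal([40.0, 10.0], [5.0, 2.0], size=(64, 2)).T  # e.g. O3, NO2 (ppb)
H = np.array([[1.0, 0.0]])                                  # observe species 1 only
post = enkf_update(ens, np.array([35.0]), H, r_obs=4.0, rng=rng)
print("analysis mean:", post.mean(axis=1))
```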
Designing novel cellulase systems through agent-based modeling and global sensitivity analysis
Apte, Advait A; Senger, Ryan S; Fong, Stephen S
2014-01-01
Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement. PMID:24830736
Investigating and understanding fouling in a planar setup using ultrasonic methods.
Wallhäusser, E; Hussein, M A; Becker, T
2012-09-01
Fouling is an unwanted deposit on heat transfer surfaces and occurs regularly in foodstuff heat exchangers. Fouling causes high costs because heat exchangers must be cleaned and cleaning success cannot easily be monitored; cleaning cycles in the food industry are therefore usually longer than necessary, leading to high costs. In this paper, a setup is described with which it is possible, first, to produce dairy protein fouling similar to that found in industrial heat exchangers and, second, to detect the presence and absence of such fouling using an ultrasound-based measuring method. The developed setup resembles a planar heat exchanger in which fouling can be produced and cleaned reproducibly. Fouling presence, absence, and cleaning progress can be monitored using an ultrasonic detection unit. The setup is modeled theoretically with electrical and mechanical lumped circuits to derive the wave equation and the transfer function used for a sensitivity analysis. The sensitivity analysis was performed to determine the influencing quantities and showed that fouling is measurable. First experimental results are also compared with results from the sensitivity analysis.
NASA Astrophysics Data System (ADS)
Choi, Jongwan; Kim, Felix Sunjoo
2018-03-01
We studied the influence of photoanode thickness on the photovoltaic characteristics and impedance responses of dye-sensitized solar cells based on a ruthenium dye containing a hexyloxyl-substituted carbazole unit (Ru-HCz). As the thickness of the photoanode increases from 4.2 μm to 14.8 μm, the dye-loading amount and the efficiency increase. Beyond the optimal thickness, however, the device efficiency decreases because of the higher probability of electron-hole recombination before charge extraction. We also analyzed the electron-transfer and recombination characteristics as a function of photoanode thickness through detailed electrochemical impedance spectroscopy analysis.
NASA Astrophysics Data System (ADS)
Oliveira, Miguel; Santos, Cristina P.; Costa, Lino
2012-09-01
In this paper, a sensitivity-analysis-based study is performed for a gait multi-objective optimization system that combines bio-inspired Central Pattern Generators (CPGs) and a multi-objective evolutionary algorithm based on NSGA-II. In this system, CPGs are modeled as autonomous differential equations that generate the necessary limb movement to perform the required walking gait. In order to optimize the walking gait, a multi-objective problem with three conflicting objectives is formulated: maximization of the velocity, the wide stability margin, and the behavioral diversity. The experimental results highlight the effectiveness of this multi-objective approach and the importance of the objectives in finding different walking gait solutions for the quadruped robot.
White, Christopher R H; Doherty, Dorota A; Cannon, Jeffrey W; Kohan, Rolland; Newnham, John P; Pennell, Craig E
2016-07-01
There is an increasing body of literature supporting the introduction of universal umbilical cord blood gas analysis (UCBGA) in all maternity units. A significant impediment to UCBGA's introduction is the perceived expense of introduction and the associated ongoing costs. Consequently, this study set out to conduct the first cost-effectiveness analysis of introducing universal UCBGA. The analysis was based on 42,100 consecutive deliveries at ≥23 weeks of gestation at a single tertiary obstetric unit. Within 4 years of UCBGA's introduction there was a 45% reduction in term special care nursery (SCN) admissions >2499 g. Incurred costs included the initial and ongoing costs associated with universal UCBGA. Averted costs were based on local diagnosis-related-group costs for the reduction in term SCN admissions. Incremental cost-effectiveness ratio (ICER) and sensitivity analysis results were reported. Under the base-case scenario, the adoption of universal UCBGA was less costly and more effective than selective UCBGA over 4 years, resulting in savings of AU$641,532 while averting 376 SCN admissions. Sensitivity analysis showed that UCBGA was cost-effective in 51.8%, 83.3%, 99.6% and 100% of simulations in years 1, 2, 3 and 4, respectively. These conclusions were not sensitive to wide, clinically possible variations in parameter values for neonatal intensive care unit and SCN admissions, magnitude of averted SCN admissions, cumulative delivery numbers, and SCN admission costs. Universal UCBGA is associated with significant initial and ongoing costs; however, the averted costs (due to reduced SCN admissions) exceed incurred costs in most scenarios.
Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urrego-Blanco, Jorge Rolando; Urban, Nathan Mark; Hunke, Elizabeth Clare
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. Lastly, it is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model
Urrego-Blanco, Jorge Rolando; Urban, Nathan Mark; Hunke, Elizabeth Clare; ...
2016-04-01
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. Lastly, it is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model
NASA Astrophysics Data System (ADS)
Urrego-Blanco, Jorge R.; Urban, Nathan M.; Hunke, Elizabeth C.; Turner, Adrian K.; Jeffery, Nicole
2016-04-01
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. It is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit
NASA Astrophysics Data System (ADS)
Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie
2015-09-01
Previous sensitivity analyses of hydraulic drive units have been of limited accuracy and reference value because their mathematical models were relatively simple, load changes and the initial displacement of the piston were ignored, and no experimental verification was conducted. In view of these deficiencies, a nonlinear mathematical model is established in this paper, including the dynamic characteristics of the servo valve, nonlinear pressure-flow characteristics, the initial displacement of the servo cylinder piston, and friction nonlinearity. The transfer function block diagram and the state equations are built for closed-loop position control of the hydraulic drive unit. By deriving the time-varying coefficient matrix and the time-varying free-term matrix of the sensitivity equations, the sensitivity equations based on the nonlinear mathematical model are obtained. Using the structural and working parameters of the hydraulic drive unit, fluid transmission characteristics, and measured friction-velocity curves, simulations are carried out on the MATLAB/Simulink platform with displacement steps of 2 mm, 5 mm and 10 mm. The simulation results indicate that the nonlinear mathematical model is adequate, based on a comparison of experimental and simulated step-response curves under different constant loads. Sensitivity function time histories of seventeen parameters are then obtained from the state-vector time histories of the step response. Two sensitivity indexes are used: the maximum displacement variation percentage and the sum of absolute displacement variations over the sampling time. These index values are calculated and displayed in histograms for different working conditions, and their trends are analyzed. The sensitivity indexes of four measurable parameters (supply pressure, proportional gain, initial position of the servo cylinder piston, and load force) are verified experimentally on a hydraulic drive unit test platform; the experimental results agree approximately with the simulations. This research identifies the parameter sensitivity characteristics of the hydraulic drive unit and the main and secondary performance-affecting parameters under different working conditions, providing a theoretical foundation for the control compensation and structural optimization of hydraulic drive units.
Improvement of the cost-benefit analysis algorithm for high-rise construction projects
NASA Astrophysics Data System (ADS)
Gafurov, Andrey; Skotarenko, Oksana; Plotnikov, Vladimir
2018-03-01
The specific nature of high-rise investment projects, which entail long-term construction, high risks, etc., implies a need to improve the standard algorithm of cost-benefit analysis. An improved algorithm is described in the article. For the development of the improved cost-benefit algorithm for high-rise construction projects, the following methods were used: weighted average cost of capital, dynamic cost-benefit analysis of investment projects, risk mapping, scenario analysis, sensitivity analysis of critical ratios, etc. This comprehensive approach helped adapt the original algorithm to feasibility objectives in high-rise construction. The authors assembled the cost-benefit algorithm for high-rise construction projects on the basis of risk mapping and sensitivity analysis of critical ratios. The suggested project risk management algorithms greatly expand the standard algorithm of cost-benefit analysis in investment projects, namely: the "Project analysis scenario" flowchart, improving the quality and reliability of forecasting reports in investment projects; the main stages of cash-flow adjustment based on risk mapping, for better cost-benefit project analysis given the broad range of risks in high-rise construction; and analysis of dynamic cost-benefit values considering project sensitivity to crucial variables, improving flexibility in the implementation of high-rise projects.
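Two of the building blocks named above, WACC-based discounting and one-way sensitivity of critical ratios, can be shown in a few lines. The cash flows, WACC value, and ±20% swings below are invented for illustration.

```python
# Minimal sketch: NPV under WACC discounting plus one-way sensitivity swings.
import numpy as np

def npv(cash_flows, wacc):
    """NPV with WACC discounting; cash_flows[0] is the initial outlay (negative)."""
    cf = np.asarray(cash_flows, dtype=float)
    return float(np.sum(cf / (1.0 + wacc) ** np.arange(cf.size)))

base = [-120.0, 10.0, 30.0, 60.0, 80.0]   # $M over a 4-year build-and-sell horizon
wacc = 0.12
print(f"base-case NPV = {npv(base, wacc):6.1f} $M")

# One-way sensitivity: +/-20% swings in WACC and in all revenue inflows
for k in (0.8, 1.2):
    swung = [base[0]] + [k * c for c in base[1:]]
    print(f"WACC x{k}:    NPV = {npv(base, k * wacc):6.1f} $M")
    print(f"revenue x{k}: NPV = {npv(swung, wacc):6.1f} $M")
```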
Nshimyumukiza, Léon; Douville, Xavier; Fournier, Diane; Duplantie, Julie; Daher, Rana K; Charlebois, Isabelle; Longtin, Jean; Papenburg, Jesse; Guay, Maryse; Boissinot, Maurice; Bergeron, Michel G; Boudreau, Denis; Gagné, Christian; Rousseau, François; Reinharz, Daniel
2016-03-01
A point-of-care rapid test (POCRT) may help early and targeted use of antiviral drugs for the management of influenza A infection. Our objectives were (i) to determine whether antiviral treatment based on a POCRT for influenza A is cost-effective and (ii) to determine the thresholds of key test parameters (sensitivity, specificity and cost) at which a POCRT-based strategy appears cost-effective. A hybrid 'susceptible-infected-recovered' (SIR) compartmental transmission and Markov decision-analytic model was used to simulate the cost-effectiveness of antiviral treatment based on a POCRT for influenza A from a societal perspective. Input parameters were retrieved from peer-reviewed published studies and government databases. The outcome considered was the incremental cost per life-year saved for one seasonal influenza season. In the base-case analysis, antiviral treatment based on the POCRT saves 2 lives per 100,000 person-years and costs $7600 less than empirical antiviral treatment based on clinical judgment alone, which demonstrates that the POCRT-based strategy is dominant. In one-way and two-way sensitivity analyses, results were sensitive to POCRT accuracy and cost, to vaccination coverage, and to the prevalence of influenza A. In probabilistic sensitivity analyses, the POCRT strategy is cost-effective in 66% of cases for a commonly accepted threshold of $50,000 per life-year saved. Influenza antiviral treatment based on a POCRT could be cost-effective under specific conditions of performance, price and disease prevalence. © 2015 The Authors. Influenza and Other Respiratory Viruses Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Bennett, Katrina E.; Urrego Blanco, Jorge R.; Jonko, Alexandra; Bohn, Theodore J.; Atchley, Adam L.; Urban, Nathan M.; Middleton, Richard S.
2018-01-01
The Colorado River Basin is a fundamentally important river for society, ecology, and energy in the United States. Streamflow estimates are often provided using modeling tools which rely on uncertain parameters; sensitivity analysis can help determine which parameters impact model results. Despite the fact that simulated flows respond to changing climate and vegetation in the basin, parameter sensitivity of the simulations under climate change has rarely been considered. In this study, we conduct a global sensitivity analysis to relate changes in runoff, evapotranspiration, snow water equivalent, and soil moisture to model parameters in the Variable Infiltration Capacity (VIC) hydrologic model. We combine global sensitivity analysis with a space-filling Latin Hypercube Sampling of the model parameter space and statistical emulation of the VIC model to examine sensitivities to uncertainties in 46 model parameters following a variance-based approach. We find that snow-dominated regions are much more sensitive to uncertainties in VIC parameters. Although baseflow and runoff changes respond to parameters used in previous sensitivity studies, we discover new key parameter sensitivities. For instance, changes in runoff and evapotranspiration are sensitive to albedo, while changes in snow water equivalent are sensitive to canopy fraction and Leaf Area Index (LAI) in the VIC model. It is critical for improved modeling to narrow uncertainty in these parameters through improved observations and field studies. This is important because LAI and albedo are anticipated to change under future climate and narrowing uncertainty is paramount to advance our application of models such as VIC for water resource management.
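The space-filling sampling step described above is straightforward to reproduce. The sketch below draws a Latin hypercube sample over three illustrative parameters (albedo, canopy fraction, LAI) with scipy's qmc module; the ranges and the three-parameter subset are assumptions, not the study's actual 46-dimensional setup.

```python
# Minimal sketch: Latin hypercube sampling of a hydrologic-model parameter space.
from scipy.stats import qmc

bounds_lo = [0.05, 0.5, 1.0]      # albedo, canopy fraction, LAI (illustrative)
bounds_hi = [0.90, 1.0, 6.0]
sampler = qmc.LatinHypercube(d=3, seed=42)
X = qmc.scale(sampler.random(n=256), bounds_lo, bounds_hi)
# Each row of X is one parameter set to run through the model; the outputs
# then feed a variance-based (Sobol-style) decomposition or an emulator.
print(X.shape, X.min(axis=0), X.max(axis=0))
```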
Conversion of paper sludge to ethanol, II: process design and economic analysis.
Fan, Zhiliang; Lynd, Lee R
2007-01-01
Process design and economics are considered for the conversion of paper sludge to ethanol. A particular site, a bleached kraft mill operated in Gorham, NH by Fraser Papers (15 dry tons of sludge processed per day), is considered. In addition, profitability is examined for a larger plant (50 dry tons per day), and sensitivity analysis is carried out with respect to capacity, tipping fee, and ethanol price. Conversion based on simultaneous saccharification and fermentation with intermittent feeding is examined, with ethanol recovery provided by distillation and molecular sieve adsorption. It was found that the Fraser plant achieves positive cash flow with or without xylose conversion and mineral recovery. Sensitivity analysis indicates that economics are very sensitive to ethanol selling price and scale; sensitive, though less so, to the tipping fee; and rather insensitive to the prices of cellulase and power. Internal rates of return exceeding 15% are projected for larger plants at most combinations of scale, tipping fee, and ethanol price. Our analysis lends support to the proposition that paper sludge is a leading point-of-entry and proving ground for emergent industrial processes featuring enzymatic hydrolysis of cellulosic biomass.
Rifai Chai; Naik, Ganesh R; Tran, Yvonne; Sai Ho Ling; Craig, Ashley; Nguyen, Hung T
2015-08-01
An electroencephalography (EEG)-based countermeasure device could be used for fatigue detection during driving. This paper explores the classification of fatigue and alert states using power spectral density (PSD) as a feature extractor and a fuzzy swarm-based artificial neural network (ANN) as a classifier. Independent component analysis by entropy rate bound minimization (ICA-ERBM) is investigated as a novel source separation technique for fatigue classification using EEG analysis. A comparison of classification accuracy with and without the source separator is presented. Classification based on 43 participants without the source separator resulted in an overall sensitivity of 71.67%, a specificity of 75.63% and an accuracy of 73.65%. These results improved after the inclusion of the source separator module, to an overall sensitivity of 78.16%, a specificity of 79.60% and an accuracy of 78.88% (p < 0.05).
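The PSD-feature pipeline can be sketched end to end on synthetic data. Below, Welch band powers feed a small scikit-learn neural network; treating "fatigue" as elevated alpha power is a toy assumption, and neither the fuzzy-swarm training nor the ICA-ERBM separation from the paper is reproduced.

```python
# Minimal sketch: Welch PSD band powers -> small neural-network classifier.
import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

fs = 256
rng = np.random.default_rng(0)

def band_powers(x):
    f, pxx = welch(x, fs=fs, nperseg=fs)
    bands = [(1, 4), (4, 8), (8, 13), (13, 30)]    # delta, theta, alpha, beta
    return [pxx[(f >= lo) & (f < hi)].mean() for lo, hi in bands]

def synth(alpha_gain):
    """Toy 'EEG': white noise plus a 10 Hz component ('fatigue' = more alpha)."""
    t = np.arange(0, 4, 1 / fs)
    return rng.standard_normal(t.size) + alpha_gain * np.sin(2 * np.pi * 10 * t)

X = np.array([band_powers(synth(g)) for g in [0.2] * 40 + [1.5] * 40])
y = np.array([0] * 40 + [1] * 40)                   # 0 = alert, 1 = fatigued
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                                  random_state=0)).fit(X, y)
print("training accuracy:", clf.score(X, y))
```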
NASA Astrophysics Data System (ADS)
Haghnegahdar, Amin; Elshamy, Mohamed; Yassin, Fuad; Razavi, Saman; Wheater, Howard; Pietroniro, Al
2017-04-01
Complex physically-based environmental models are being increasingly used as the primary tool for watershed planning and management due to advances in computation power and data acquisition. Model sensitivity analysis plays a crucial role in understanding the behavior of these complex models and improving their performance. Due to the non-linearity and interactions within these complex models, global sensitivity analysis (GSA) techniques should be adopted to provide a comprehensive understanding of model behavior and identify its dominant controls. In this study we adopt a multi-basin multi-criteria GSA approach to systematically assess the behavior of the Modélisation Environmentale-Surface et Hydrologie (MESH) across various hydroclimatic conditions in Canada including areas in the Great Lakes Basin, Mackenzie River Basin, and South Saskatchewan River Basin. MESH is a semi-distributed physically-based coupled land surface-hydrology modelling system developed by Environment and Climate Change Canada (ECCC) for various water resources management purposes in Canada. We use a novel method, called Variogram Analysis of Response Surfaces (VARS), to perform sensitivity analysis. VARS is a variogram-based GSA technique that can efficiently provide a spectrum of sensitivity information across a range of scales within the parameter space. We use multiple metrics to identify dominant controls of model response (e.g. streamflow) to model parameters under various conditions such as high flows, low flows, and flow volume. We also investigate the influence of initial conditions on model behavior as part of this study. Our preliminary results suggest that this type of GSA can significantly help with estimating model parameters, decreasing calibration computational burden, and reducing prediction uncertainty.
Yang, Zhen; Zhi, Shaotao; Feng, Zhu; Lei, Chong; Zhou, Yong
2018-01-01
A sensitive and innovative assay system based on a micro-MEMS-fluxgate sensor and immunomagnetic beads-labels was developed for the rapid analysis of C-reactive proteins (CRP). The fluxgate sensor presented in this study was fabricated through standard micro-electro-mechanical system technology. A multi-loop magnetic core made of Fe-based amorphous ribbon was employed as the sensing element, and 3-D solenoid copper coils were used to control the sensing core. Antibody-conjugated immunomagnetic microbeads were strategically utilized as signal tags to label the CRP via the specific conjugation of CRP to polyclonal CRP antibodies. Separate Au film substrates were applied as immunoplatforms to immobilize CRP-beads labels through classical sandwich assays. Detection and quantification of the CRP at different concentrations were implemented by detecting the stray field of CRP labeled magnetic beads using the newly-developed micro-fluxgate sensor. The resulting system exhibited the required sensitivity, stability, reproducibility, and selectivity. A detection limit as low as 0.002 μg/mL CRP with a linearity range from 0.002 μg/mL to 10 μg/mL was achieved, and this suggested that the proposed biosystem possesses high sensitivity. In addition to the extremely low detection limit, the proposed method can be easily manipulated and possesses a quick response time. The response time of our sensor was less than 5 s, and the entire detection period for CRP analysis can be completed in less than 30 min using the current method. Given the detection performance and other advantages such as miniaturization, excellent stability and specificity, the proposed biosensor can be considered as a potential candidate for the rapid analysis of CRP, especially for point-of-care platforms. PMID:29601593
Sensitivity curves for searches for gravitational-wave backgrounds
NASA Astrophysics Data System (ADS)
Thrane, Eric; Romano, Joseph D.
2013-12-01
We propose a graphical representation of detector sensitivity curves for stochastic gravitational-wave backgrounds that takes into account the increase in sensitivity that comes from integrating over frequency in addition to integrating over time. This method is valid for backgrounds that have a power-law spectrum in the analysis band. We call these graphs “power-law integrated curves.” For simplicity, we consider cross-correlation searches for unpolarized and isotropic stochastic backgrounds using two or more detectors. We apply our method to construct power-law integrated sensitivity curves for second-generation ground-based detectors such as Advanced LIGO, space-based detectors such as LISA and the Big Bang Observer, and timing residuals from a pulsar timing array. The code used to produce these plots is available at https://dcc.ligo.org/LIGO-P1300115/public for researchers interested in constructing similar sensitivity curves.
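The construction behind a power-law integrated curve can be condensed to a few lines: for each spectral index, find the power-law amplitude detectable at a threshold SNR, then take the envelope over indices. The effective noise curve, observation time, and SNR threshold below are toy assumptions, not a real detector model like those used in the paper.

```python
# Minimal sketch: power-law integrated (PI) sensitivity curve construction.
import numpy as np

f = np.logspace(1, 3, 400)                    # analysis band, Hz
f_ref, T_obs, snr_th = 100.0, 3.15e7, 1.0     # reference freq, ~1 yr, SNR threshold
omega_eff = 1e-9 * ((f / f_ref) ** -4 + 1 + (f / f_ref) ** 2)  # toy noise curve

pi_curve = np.zeros_like(f)
for b in range(-8, 9):                        # spectral indices beta
    integrand = (f / f_ref) ** (2 * b) / omega_eff ** 2
    omega_b = snr_th / np.sqrt(2 * T_obs * np.trapz(integrand, f))
    pi_curve = np.maximum(pi_curve, omega_b * (f / f_ref) ** b)   # envelope
print("PI-curve minimum:", pi_curve.min())
```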
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
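The probabilistic idea, re-solving the sequential model under sampled parameters and recording how often the base-case policy remains optimal, can be shown with a deliberately tiny MDP. The states, actions, rewards, and the Beta parameter distribution below are all invented for illustration; the printed acceptability value is simply the fraction of samples agreeing with the base-case policy.

```python
# Minimal sketch: policy acceptability via probabilistic sensitivity analysis.
import numpy as np

def solve_mdp(p_cure, gamma=0.97, iters=300):
    """Value iteration on 2 states (sick, healthy); actions while sick: wait/treat."""
    r_wait, r_treat, r_healthy = 0.6, 0.4, 1.0   # per-cycle utilities (toy)
    V = np.zeros(2)
    for _ in range(iters):
        q_wait = r_wait + gamma * V[0]
        q_treat = r_treat + gamma * (p_cure * V[1] + (1 - p_cure) * V[0])
        V = np.array([max(q_wait, q_treat), r_healthy + gamma * V[1]])
    return "treat" if q_treat > q_wait else "wait"

rng = np.random.default_rng(0)
base = solve_mdp(p_cure=0.3)                      # base-case optimal policy
agree = np.mean([solve_mdp(rng.beta(30, 70)) == base for _ in range(1000)])
print(f"base policy: {base}; acceptability = {agree:.2f}")
```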
Hutsell, Blake A; Negus, S Stevens; Banks, Matthew L
2015-01-01
We have previously demonstrated reductions in cocaine choice produced by either continuous 14-day phendimetrazine and d-amphetamine treatment or removing cocaine availability under a cocaine vs. food choice procedure in rhesus monkeys. The aim of the present investigation was to apply the concatenated generalized matching law (GML) to cocaine vs. food choice dose-effect functions incorporating sensitivity to both the relative magnitude and price of each reinforcer. Our goal was to determine potential behavioral mechanisms underlying pharmacological treatment efficacy to decrease cocaine choice. A multi-model comparison approach was used to characterize dose- and time-course effects of both pharmacological and environmental manipulations on sensitivity to reinforcement. GML models provided an excellent fit of the cocaine choice dose-effect functions in individual monkeys. Reductions in cocaine choice by both pharmacological and environmental manipulations were principally produced by systematic decreases in sensitivity to reinforcer price and non-systematic changes in sensitivity to reinforcer magnitude. The modeling approach used provides a theoretical link between the experimental analysis of choice and pharmacological treatments being evaluated as candidate 'agonist-based' medications for cocaine addiction. The analysis suggests that monoamine releaser treatment efficacy to decrease cocaine choice was mediated by selectively increasing the relative price of cocaine. Overall, the net behavioral effect of these pharmacological treatments was to increase substitutability of food pellets, a nondrug reinforcer, for cocaine. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Guerriero, Carla; Cairns, John; Roberts, Ian; Rodgers, Anthony; Whittaker, Robyn; Free, Caroline
2013-10-01
The txt2stop trial has shown that mobile-phone-based smoking cessation support doubles biochemically validated quitting at 6 months. This study examines the cost-effectiveness of smoking cessation support delivered by mobile phone text messaging. The lifetime incremental costs and benefits of adding text-based support to current practice are estimated from a UK NHS perspective using a Markov model. The cost-effectiveness was measured in terms of cost per quitter, cost per life year gained and cost per QALY gained. As in previous studies, smokers are assumed to face a higher risk of experiencing the following five diseases: lung cancer, stroke, myocardial infarction, chronic obstructive pulmonary disease, and coronary heart disease (i.e. the main fatal or disabling, but by no means the only, adverse effects of prolonged smoking). The treatment costs and health state values associated with these diseases were identified from the literature. The analysis was based on the age and gender distribution observed in the txt2stop trial. Effectiveness and cost parameters were varied in deterministic sensitivity analyses, and a probabilistic sensitivity analysis was also performed. The cost of text-based support per 1,000 enrolled smokers is £16,120, which, given an estimated 58 additional quitters at 6 months, equates to £278 per quitter. However, when the future NHS costs saved (as a result of reduced smoking) are included, text-based support would be cost saving. It is estimated that 18 LYs are gained per 1,000 smokers (0.3 LYs per quitter) receiving text-based support, and 29 QALYs are gained (0.5 QALYs per quitter). The deterministic sensitivity analysis indicated that changes in individual model parameters did not alter the conclusion that this is a cost-effective intervention. Similarly, the probabilistic sensitivity analysis indicated a >90% chance that the intervention will be cost saving. This study shows that under a wide variety of conditions, personalised smoking cessation advice and support by mobile phone message is both beneficial for health and cost saving to a health system.
Horita, Nobuyuki; Miyazawa, Naoki; Kojima, Ryota; Kimura, Naoko; Inoue, Miyo; Ishigatsubo, Yoshiaki; Kaneko, Takeshi
2013-11-01
Studies on the sensitivity and specificity of the Binax Now Streptococcus pneumoniae urinary antigen test (index test) show considerable variance of results. We included studies written in English that provided sufficient original data to evaluate the sensitivity and specificity of the index test using unconcentrated urine to identify S. pneumoniae infection in adults with pneumonia. Reference tests were conducted with at least one culture and/or smear. We estimated sensitivity and two specificities: one evaluated using only patients with pneumonia of identified other aetiologies ('specificity (other)'), and the other evaluated using both patients with pneumonia of unknown aetiology and those with pneumonia of other aetiologies ('specificity (unknown and other)'), using a fixed-effect model for meta-analysis. We found 10 articles involving 2315 patients. The analysis of 10 studies involving 399 patients yielded a pooled sensitivity of 0.75 (95% confidence interval: 0.71-0.79) without heterogeneity or publication bias. The analysis of six studies involving 258 patients yielded a pooled specificity (other) of 0.95 (95% confidence interval: 0.92-0.98) without heterogeneity or publication bias. We attempted a meta-analysis of the 10 studies involving 1916 patients to estimate specificity (unknown and other), but the estimate remained unclear due to moderate heterogeneity and possible publication bias. In our meta-analysis, the sensitivity of the index test was moderate and specificity (other) was high; however, the specificity (unknown and other) remained unclear. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.
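Fixed-effect pooling of per-study sensitivities is typically an inverse-variance weighted average. A minimal sketch follows, with hypothetical study counts rather than the paper's data.

```python
# Sketch of fixed-effect (inverse-variance) pooling of per-study sensitivities.
# Study counts below are hypothetical, not taken from the meta-analysis.
import numpy as np

tp = np.array([30, 45, 20, 60])          # true positives per study (hypothetical)
n  = np.array([40, 60, 28, 80])          # reference-positive patients per study
sens = tp / n
var = sens * (1 - sens) / n              # binomial variance of each proportion
w = 1 / var                              # fixed-effect weights
pooled = np.sum(w * sens) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
print(f"pooled sensitivity {pooled:.2f} "
      f"(95% CI {pooled - 1.96 * se:.2f}-{pooled + 1.96 * se:.2f})")
```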
Basic research for the geodynamics program
NASA Technical Reports Server (NTRS)
1991-01-01
The mathematical models of space very long baseline interferometry (VLBI) observables suitable for least squares covariance analysis were derived, and estimability problems inherent in the space VLBI system were explored, including a detailed rank-defect analysis and sensitivity analysis. An important aim was to carry out a comparative analysis of the mathematical models of the ground-based VLBI and space VLBI observables in order to describe the background in detail. Computer programs were developed to check the relations, assess errors, and analyze sensitivity. To investigate the estimability of different geodetic and geodynamic parameters from the space VLBI observables, the mathematical models for the time delay and time delay rate observables of space VLBI were derived analytically, along with the partial derivatives with respect to the parameters. Rank-defect analysis was carried out both by analytical and by numerical testing of linear dependencies between the columns of the normal matrix thus formed. Definite conclusions were reached about the rank defects in the system.
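Rank defects of a normal matrix can be tested numerically by inspecting its singular-value spectrum. A small sketch follows, using a toy design matrix with one deliberately dependent column (not the VLBI observation model).

```python
# Sketch: detecting rank defects in a least-squares normal matrix numerically.
# The toy design matrix is hypothetical; column 2 is a linear combination of
# columns 0 and 1, mimicking an inestimable parameter combination.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 3))
A[:, 2] = 2.0 * A[:, 0] - A[:, 1]        # exact linear dependency
N = A.T @ A                              # normal matrix

s = np.linalg.svd(N, compute_uv=False)   # singular values, descending
tol = s.max() * N.shape[0] * np.finfo(float).eps
rank = int(np.sum(s > tol))
print(f"singular values: {s}")
print(f"numerical rank {rank} of {N.shape[0]} -> defect {N.shape[0] - rank}")
```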
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
NASA Technical Reports Server (NTRS)
Badhwar, G. D.; Macdonald, R. B.; Hall, F. G.; Carnes, J. G.
1986-01-01
Results from analysis of a data set of simultaneous measurements of Thematic Mapper band reflectance and leaf area index are presented. The measurements were made over pure stands of Aspen in the Superior National Forest of northern Minnesota. The analysis indicates that the reflectance may be sensitive to the leaf area index of the Aspen early in the season. The sensitivity disappears as the season progresses. Based on the results of model calculations, an explanation for the observed relationship is developed. The model calculations indicate that the sensitivity of the reflectance to the Aspen overstory depends on the amount of understory present.
NASA Astrophysics Data System (ADS)
Feng, Jinchao; Lansford, Joshua; Mironenko, Alexander; Pourkargar, Davood Babaei; Vlachos, Dionisios G.; Katsoulakis, Markos A.
2018-03-01
We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close-packed platinum surface, whose parameters, estimated from a limited number of quantum-scale computations (small data), are correlated. The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that the ranking of influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.
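One common gradient-based global index is the derivative-based global sensitivity measure (DGSM), the expectation of squared partial derivatives over the parameter distribution; with correlated parameters, the expectation is taken over the joint (correlated) distribution. A sketch under that interpretation follows, with a toy Langmuir-like model and a hypothetical covariance, not the paper's method or data.

```python
# Sketch of a derivative-based global sensitivity measure (DGSM-style):
# average squared gradients over a correlated parameter distribution.
# The model and covariance are hypothetical stand-ins for the adsorption model.
import numpy as np

def model(theta):
    k1, k2 = theta
    return k1 / (1.0 + k1 + 0.5 * k2)     # toy Langmuir-like coverage

def grad(theta, h=1e-6):
    g = np.zeros_like(theta)
    for j in range(theta.size):           # central finite differences
        e = np.zeros_like(theta); e[j] = h
        g[j] = (model(theta + e) - model(theta - e)) / (2 * h)
    return g

rng = np.random.default_rng(1)
mean = np.array([1.0, 2.0])
cov = np.array([[0.10, 0.06],             # correlated parameter uncertainties
                [0.06, 0.10]])
samples = rng.multivariate_normal(mean, cov, size=2000)
nu = np.mean([grad(t) ** 2 for t in samples], axis=0)
print("DGSM indices (E[(df/dk_j)^2]):", nu)
```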
NASA Astrophysics Data System (ADS)
Zong, Yali; Hu, Naigang; Duan, Baoyan; Yang, Guigeng; Cao, Hongjun; Xu, Wanye
2016-03-01
Inevitable manufacturing errors and inconsistency between assumed and actual boundary conditions can affect the shape precision and cable tensions of a cable-network antenna, and can even result in failure of the structure in service. In this paper, an analytical method for the sensitivity analysis of the shape precision and cable tensions with respect to parameters carrying uncertainty was studied. Based on the sensitivity analysis, an optimal design procedure was proposed to alleviate the effects of the uncertain parameters. The validity of the calculated sensitivities was examined by comparison with those computed by a finite-difference method. Comparison with a traditional design method shows that the presented design procedure can remarkably reduce the influence of the uncertainties on the antenna performance. Moreover, the results suggest that slender front net cables, thick tension ties, relatively slender boundary cables and a high tension level, in particular, improve the ability of cable-network antenna structures to resist the effects of the uncertainties on the antenna performance.
NASA Technical Reports Server (NTRS)
Gaebler, John A.; Tolson, Robert H.
2010-01-01
In the study of entry, descent, and landing, Monte Carlo sampling methods are often employed to study the uncertainty in the designed trajectory. The large number of uncertain inputs and outputs, coupled with complicated non-linear models, can make interpretation of the results difficult. Three methods that provide statistical insights are applied to an entry, descent, and landing simulation. The advantages and disadvantages of each method are discussed in terms of the insights gained versus the computational cost. The first method investigated was failure domain bounding which aims to reduce the computational cost of assessing the failure probability. Next a variance-based sensitivity analysis was studied for the ability to identify which input variable uncertainty has the greatest impact on the uncertainty of an output. Finally, probabilistic sensitivity analysis is used to calculate certain sensitivities at a reduced computational cost. These methods produce valuable information that identifies critical mission parameters and needs for new technology, but generally at a significant computational cost.
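As one illustration of the variance-based step, a Sobol' analysis can be run with the open-source SALib package; the input names, bounds, and toy response below are hypothetical stand-ins for an EDL simulation, not the study's models.

```python
# Sketch of a variance-based (Sobol') sensitivity analysis with SALib on a
# toy trajectory-like function; inputs and bounds are hypothetical.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["entry_angle", "ballistic_coeff", "wind"],
    "bounds": [[-0.1, 0.1], [80.0, 120.0], [-10.0, 10.0]],
}
X = saltelli.sample(problem, 1024)            # N*(2D+2) model evaluations
Y = np.array([x[0] * 50 + 0.01 * x[1] ** 1.5 + 0.5 * x[2] for x in X])
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], Si["S1"])))  # first-order indices
print(dict(zip(problem["names"], Si["ST"])))  # total-order indices
```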
NASA Astrophysics Data System (ADS)
Park, Chanho; Nguyen, Phung K. T.; Nam, Myung Jin; Kim, Jongwook
2013-04-01
Monitoring CO2 migration and storage in geological formations is important not only for the stability of geological sequestration of CO2 but also for efficient management of CO2 injection. In particular, geophysical methods permit in situ observation of CO2 to assess potential leakage, to improve reservoir description, and to monitor the development of geologic discontinuities (i.e., faults, cracks, joints, etc.). Geophysical monitoring can be based on wireline logging for well-scale monitoring (high resolution and narrow area of investigation) or on surface surveys for basin-scale monitoring (low resolution and wide area of investigation). Meanwhile, crosswell tomography can provide reservoir-scale monitoring to bridge the resolution gap between well logs and surface measurements. This study focuses on reservoir-scale monitoring based on crosswell seismic tomography, aiming to describe details of reservoir structure and to monitor migration of reservoir fluids (water and CO2). For the monitoring, we first conduct a sensitivity analysis of crosswell seismic tomography data with respect to CO2 saturation. For the sensitivity analysis, Rock Physics Models (RPMs) are constructed by calculating the density and the P- and S-wave velocities of a virtual CO2 injection reservoir. Because the seismic velocity of the reservoir changes with CO2 saturation only when the saturation is below about 20%, and is insensitive to further changes above that level, the sensitivity analysis focuses on saturations of less than 20%. For precise simulation of seismic tomography responses for the constructed RPMs, we developed a time-domain 2D elastic modeling code based on a finite-difference method with a staggered grid, employing a convolutional perfectly matched layer boundary condition. We further compare the sensitivities of seismic tomography and surface measurements for the RPMs to analyze the resolution difference between them. Moreover, assuming a reservoir situation similar to the CO2 storage site in Nagaoka, Japan, we generate time-lapse tomographic data sets for the corresponding CO2 injection process and make a preliminary interpretation of the data sets.
Dynamic sensitivity analysis of biological systems
Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang
2008-01-01
Background A mathematical model to understand, predict, control, or even design a real biological system is a central theme in systems biology. A dynamic biological system is always modeled as a nonlinear ordinary differential equation (ODE) system. How to simulate the dynamic behavior and dynamic parameter sensitivities of systems described by ODEs efficiently and accurately is a critical task. In many practical applications, e.g., fed-batch fermentation systems, the admissible system input (corresponding to independent variables of the system) can be time-dependent. The main difficulty in investigating the dynamic log gains of these systems is the infinite dimensionality due to the time-dependent input. Classical dynamic sensitivity analysis does not address this case for the dynamic log gains. Results We present an algorithm with adaptive step-size control that can be used for computing the solution and dynamic sensitivities of an autonomous ODE system simultaneously. Although our algorithm is one of the decoupled direct methods for computing dynamic sensitivities of an ODE system, the step size determined by the model equations can be used in the computation of the time profile and the dynamic sensitivities with moderate accuracy even when the sensitivity equations are stiffer than the model equations. To show that this algorithm can perform dynamic sensitivity analysis on very stiff ODE systems with moderate accuracy, it is implemented and applied to two sets of chemical reactions: pyrolysis of ethane and oxidation of formaldehyde. The accuracy of the algorithm is demonstrated by comparing the dynamic parameter sensitivities obtained from this new algorithm with those from the direct method with a Rosenbrock stiff integrator based on the indirect method. The same dynamic sensitivity analysis was performed on an ethanol fed-batch fermentation system with a time-varying feed rate to evaluate the applicability of the algorithm to realistic models with time-dependent admissible input. Conclusion By combining the demonstrated accuracy with the efficiency of a decoupled direct method, our algorithm is an excellent method for computing dynamic parameter sensitivities in stiff problems. We extend the scope of classical dynamic sensitivity analysis to the investigation of dynamic log gains of models with time-dependent admissible input. PMID:19091016
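The direct method referred to here augments the model ODEs with sensitivity equations ds/dt = (∂f/∂x)s + ∂f/∂p and integrates both together. A minimal sketch for a one-parameter decay model (a stand-in for the stiff kinetics examples), checked against the analytical sensitivity:

```python
# Sketch of the direct method for dynamic parameter sensitivities: augment the
# ODE with ds/dt = (df/dx)*s + df/dp and integrate state and sensitivity jointly.
# A simple decay model x' = -p*x stands in for the stiff kinetics examples.
import numpy as np
from scipy.integrate import solve_ivp

p = 0.5

def rhs(t, y):
    x, s = y                      # state and its sensitivity dx/dp
    dx = -p * x
    ds = -p * s - x               # (df/dx)*s + df/dp with f = -p*x
    return [dx, ds]

sol = solve_ivp(rhs, [0.0, 10.0], [1.0, 0.0], method="LSODA", rtol=1e-8)
# Analytical check: x = exp(-p*t), so dx/dp = -t*exp(-p*t)
t_end = sol.t[-1]
print("numerical:", sol.y[1, -1], " analytical:", -t_end * np.exp(-p * t_end))
```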
Desland, Fiona A; Afzal, Aqeela; Warraich, Zuha; Mocco, J
2014-01-01
Animal models of stroke have been crucial in advancing our understanding of the pathophysiology of cerebral ischemia. Currently, the standards for determining neurological deficit in rodents are the Bederson and Garcia scales, manual assessments that score animals based on parameters ranked on a narrow scale of severity. Automated open field analysis using a live-video tracking system that analyzes animal behavior may provide a more sensitive test. Results obtained from the manual Bederson and Garcia scales did not show significant differences between pre- and post-stroke animals in a small cohort. When using the same cohort, however, post-stroke data obtained from automated open field analysis showed significant differences in several parameters. Furthermore, large cohort analysis also demonstrated increased sensitivity with automated open field analysis versus the Bederson and Garcia scales. These early data indicate that use of automated open field analysis software may provide a more sensitive assessment than the traditional Bederson and Garcia scales.
NASA Astrophysics Data System (ADS)
Demaria, Eleonora M.; Nijssen, Bart; Wagener, Thorsten
2007-06-01
Current land surface models use increasingly complex descriptions of the processes that they represent. Increase in complexity is accompanied by an increase in the number of model parameters, many of which cannot be measured directly at large spatial scales. A Monte Carlo framework was used to evaluate the sensitivity and identifiability of ten parameters controlling surface and subsurface runoff generation in the Variable Infiltration Capacity model (VIC). Using the Monte Carlo Analysis Toolbox (MCAT), parameter sensitivities were studied for four U.S. watersheds along a hydroclimatic gradient, based on a 20-year data set developed for the Model Parameter Estimation Experiment (MOPEX). Results showed that simulated streamflows are sensitive to three parameters when evaluated with different objective functions. Sensitivity of the infiltration parameter (b) and the drainage parameter (exp) were strongly related to the hydroclimatic gradient. The placement of vegetation roots played an important role in the sensitivity of model simulations to the thickness of the second soil layer (thick2). Overparameterization was found in the base flow formulation indicating that a simplified version could be implemented. Parameter sensitivity was more strongly dictated by climatic gradients than by changes in soil properties. Results showed how a complex model can be reduced to a more parsimonious form, leading to a more identifiable model with an increased chance of successful regionalization to ungauged basins. Although parameter sensitivities are strictly valid for VIC, this model is representative of a wider class of macroscale hydrological models. Consequently, the results and methodology will have applicability to other hydrological models.
Design and analysis of a silicon-based antiresonant reflecting optical waveguide chemical sensor
NASA Astrophysics Data System (ADS)
Remley, Kate A.; Weisshaar, Andreas
1996-08-01
The design of a silicon-based antiresonant reflecting optical waveguide (ARROW) chemical sensor is presented, and its theoretical performance is compared with that of a conventional structure. The use of an ARROW structure permits incorporation of a thick guiding region for efficient coupling to a single-mode fiber. A high-index overlay is added to fine tune the sensitivity of the ARROW chemical sensor. The sensitivity of the sensor is presented, and design trade-offs are discussed.
The management of patients with T1 adenocarcinoma of the low rectum: a decision analysis.
Johnston, Calvin F; Tomlinson, George; Temple, Larissa K; Baxter, Nancy N
2013-04-01
Decision making for patients with T1 adenocarcinoma of the low rectum, when treatment options are limited to a transanal local excision or abdominoperineal resection, is challenging. The aim of this study was to develop a contemporary decision analysis to assist patients and clinicians in balancing the goals of maximizing life expectancy and quality of life in this situation. We constructed a Markov-type microsimulation in open-source software. Recurrence rates and quality-of-life parameters were elicited by systematic literature reviews. Sensitivity analyses were performed on key model parameters. Our base case for analysis was a 65-year-old man with low-lying T1N0 rectal cancer. We determined the sensitivity of our model for sex, age up to 80, and T stage. The main outcome measured was quality-adjusted life-years. In the base case, selecting transanal local excision over abdominoperineal resection resulted in a loss of 0.53 years of life expectancy but a gain of 0.97 quality-adjusted life-years. One-way sensitivity analysis demonstrated a health state utility value threshold for permanent colostomy of 0.93. This value ranged from 0.88 to 1.0 based on tumor recurrence risk. There were no other model sensitivities. Some model parameter estimates were based on weak data. In our model, transanal local excision was found to be the preferable approach for most patients. An abdominoperineal resection has a 3.5% longer life expectancy, but this advantage is lost when the quality-of-life reduction reported by stoma patients is weighed in. The minority group in whom abdominoperineal resection is preferred are those who are unwilling to sacrifice 7% of their life expectancy to avoid a permanent stoma. This is estimated to be approximately 25% of all patients. The threshold increases to 12% of life expectancy in high-risk tumors. No other factors are found to be relevant to the decision.
Features of the incorporation of single and double based powders within emulsion explosives
NASA Astrophysics Data System (ADS)
Ribeiro, J. B.; Mendes, R.; Tavares, B.; Louro, C.
2014-05-01
In this work, features of the thermal and detonation behaviour of compositions resulting from the mixture of single- and double-based powders within ammonium nitrate-based emulsion explosives are presented. These features are portrayed through thermodynamic-equilibrium calculations of the detonation velocity, a chemical compatibility assessment through differential thermal analysis (DTA) and thermogravimetric analysis (TGA), experimental determination of the detonation velocity, and a comparative evaluation of the shock sensitivity using a modified version of the "gap-test". DTA/TGA results for the compositions and for the individual components overlap until the beginning of thermal decomposition, which indicates the absence of formation of any new chemical species and hence the compatibility of the components of the compositions. After the beginning of thermal decomposition, the rate of mass loss is much higher for the compositions with powder than for the sole emulsion explosive. Both theoretical and experimental values of the detonation velocity were shown to be higher for the powdered compositions than for the sole emulsion explosive. Shock sensitivity assessments indicated a slightly higher sensitivity for the compositions with double-based powder when compared to the single-based compositions or to the sole emulsion.
Wong, Melody Yee-Man; Man, Sin-Heng; Che, Chi-Ming; Lau, Kai-Chung; Ng, Kwan-Ming
2014-03-21
The simplicity and easy manipulation of porous substrate-based ESI-MS have led to its wide application in the direct analysis of different types of samples in positive ion mode. However, studies and applications of this technique in negative ion mode are sparse. A key challenge could be the ease of electrical discharge on supporting tips upon application of a negative voltage. The aim of this study is to investigate the effect of supporting materials, including polyester, polyethylene and wood, on the detection sensitivity of a porous substrate-based negative ESI-MS technique. Using nitrobenzene derivatives and nitrophenol derivatives as the target analytes, it was found that hydrophobic materials (i.e., polyethylene and polyester) with a higher tendency to accumulate negative charge could enhance the detection sensitivity towards nitrobenzene derivatives via electron-capture ionization, whereas compounds with electron affinities lower than the cut-off value (1.13 eV) were not detected. Nitrophenol derivatives with pKa smaller than 9.0 could be detected in the form of deprotonated ions, whereas polar materials (i.e., wood), which might undergo competitive deprotonation with the analytes, could suppress the detection sensitivity. With the material effects on detection sensitivity established, the porous substrate-based negative ESI-MS method was developed and applied to the direct detection of two commonly encountered explosives in complex samples.
Mallinckrodt, C H; Lin, Q; Molenberghs, M
2013-01-01
The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data via a re-analysis of data from a confirmatory clinical trial in depression. A likelihood-based approach that assumed missing at random (MAR) was the primary analysis. Robustness to departure from MAR was assessed by comparing the primary result to those from a series of analyses that employed varying missing not at random (MNAR) assumptions (selection models, pattern mixture models and shared parameter models) and to MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation assuming that after dropout the trajectory of drug-treated patients was that of placebo-treated patients with a similar outcome history (placebo multiple imputation). This result was used as the worst reasonable case to define the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was -2.79 (p = 0.013). In placebo multiple imputation, the result was -2.17. Results from the other sensitivity analyses ranged from -2.21 to -3.87 and were symmetrically distributed around the primary result. Hence, no clear evidence of bias from missing not at random data was found. In the worst reasonable case scenario, the treatment effect was 80% of the magnitude of the primary result. Therefore, it was concluded that a treatment effect existed. The structured sensitivity framework, in which a worst reasonable case result based on a controlled imputation approach with transparent and debatable assumptions was supplemented by a series of plausible alternative models under varying assumptions, was useful in this specific situation and holds promise as a generally useful framework. Copyright © 2012 John Wiley & Sons, Ltd.
Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models
Rakovec, O.; Hill, Mary C.; Clark, M.P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.
2014-01-01
This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based “local” methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative “bucket-style” hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
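The DELSA first-order index at a sample point is the squared local derivative weighted by the parameter's prior variance, normalized by the sum over all parameters. A sketch follows with a hypothetical two-parameter model and prior variances, not the paper's hydrologic models.

```python
# Sketch of DELSA first-order indices evaluated at many points in parameter
# space: S_j(theta) = (dy/dtheta_j)^2 * var_j / sum_k (dy/dtheta_k)^2 * var_k.
# The reservoir-style model and prior variances are hypothetical.
import numpy as np

def model(theta):
    k, delay = theta
    return np.exp(-k) + 0.1 * delay ** 2

def local_grad(theta, h=1e-6):
    g = np.zeros_like(theta)
    for j in range(theta.size):           # central finite differences
        e = np.zeros_like(theta); e[j] = h
        g[j] = (model(theta + e) - model(theta - e)) / (2 * h)
    return g

prior_var = np.array([0.25, 1.0])         # hypothetical prior variances
rng = np.random.default_rng(2)
points = rng.uniform([0.1, 0.0], [2.0, 5.0], size=(500, 2))

S1 = np.array([local_grad(t) ** 2 * prior_var for t in points])
S1 /= S1.sum(axis=1, keepdims=True)        # normalize at each sample point
print("median first-order DELSA indices:", np.median(S1, axis=0))
```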
Systems Analysis Of Advanced Coal-Based Power Plants
NASA Technical Reports Server (NTRS)
Ferrall, Joseph F.; Jennings, Charles N.; Pappano, Alfred W.
1988-01-01
Report presents appraisal of integrated coal-gasification/fuel-cell power plants. Based on study comparing fuel-cell technologies with each other and with coal-based alternatives; recommends most promising ones for research and development. Evaluates capital cost, cost of electricity, fuel consumption, and conformance with environmental standards. Analyzes sensitivity of cost of electricity to changes in fuel cost, to economic assumptions, and to level of technology. Recommends further evaluation of integrated coal-gasification/fuel-cell, integrated coal-gasification/combined-cycle, and pulverized-coal-fired plants. Concludes with appendixes detailing plant-performance models, subsystem-performance parameters, performance goals, cost bases, plant-cost data sheets, and plant sensitivity to fuel-cell performance.
A Quad-Cantilevered Plate micro-sensor for intracranial pressure measurement.
Lalkov, Vasko; Qasaimeh, Mohammad A
2017-07-01
This paper proposes a new design for a pressure-sensing micro-plate platform that brings higher sensitivity to a pressure sensor based on a piezoresistive MEMS sensing mechanism. The proposed design is composed of a suspended plate with four stepped cantilever beams connected to its corners, and is thus termed a Quad-Cantilevered Plate (QCP). Finite element analysis was performed to determine the optimal design for sensitivity and structural stability under a range of applied forces. Furthermore, a piezoresistive analysis was performed to calculate sensor sensitivity. Both the maximum stress and the change in resistance of the piezoresistor associated with the QCP were found to be higher than in previously published designs, and linearly related to the applied pressure, as desired. Therefore, the QCP demonstrates greater sensitivity and could potentially be used as an efficient pressure sensor for intracranial pressure measurement.
Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; ...
2015-12-04
Here, effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.
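As one example of the regression-based measures mentioned, standardized regression coefficients (SRCs) scale ordinary least-squares coefficients by the ratio of input to output standard deviations. A sketch on a hypothetical three-parameter sample, not the CLM data:

```python
# Sketch of standardized regression coefficients (SRC) as a sensitivity
# measure: regress outputs on inputs, then scale by std-dev ratios.
# The sampled "CLM-like" model is a hypothetical stand-in.
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(0.0, 1.0, size=(1000, 3))          # three hydrologic parameters
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(1000)

A = np.column_stack([X, np.ones(len(X))])          # add intercept column
b, *_ = np.linalg.lstsq(A, y, rcond=None)          # ordinary least squares
src = b[:-1] * X.std(axis=0) / y.std()             # standardize coefficients
print("SRCs:", np.round(src, 3))                   # squared SRCs ~ variance share
```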
Singh, Satbir; Raj, Tilak; Singh, Amarpal; Kaur, Navneet
2016-06-01
The present research work describes the comparative analysis and performance characteristics of 4-pyridine-based monomer- and polymer-capped ZnO dye-sensitized solar cells. The N,N-dimethyl-N4-((pyridin-4-yl)methylene)propaneamine (4, monomer) and polyamine-4-pyridyl Schiff base (5, polymer) dyes were synthesized through a one-step condensation reaction between 4-pyridinecarboxaldehyde (1) and N,N-dimethylpropylamine (2)/polyamine (3). The products obtained, 4 and 5, were purified and characterized using 1H and 13C NMR, mass and IR spectroscopy, and CHN elemental analysis. Both dyes 4 and 5 were then coated onto ZnO nanoparticles and characterized using SEM, DLS and XRD analysis. The absorption and emission profiles were monitored using fluorescence and UV-Vis absorption spectroscopy. A thick layer of the dye-linked ZnO nanoparticles of dyes 4 and 5 was pasted on the conductive side of ITO glass, followed by a liquid electrolyte and a counter electrode of the same conductive glass. The polyamine-4-pyridyl Schiff base polymer (5)-decorated dye-sensitized solar cell showed better photovoltaic properties, in the form of short-circuit current density (Jsc = 6.3 mA/cm2), open-circuit photovoltage (Voc = 0.7 V) and fill factor (FF = 0.736), than the monomer-decorated dye-sensitized solar cell. The polymer dye (5)-based ZnO solar cell showed a maximum solar-to-electrical power conversion efficiency of 3.25% under AM 1.5 sun illumination, an enhancement over the 2.16% of the monomer dye-based ZnO solar cell.
NASA Astrophysics Data System (ADS)
Paul, D.; Biswas, R.
2018-05-01
We report a highly sensitive localized surface plasmon resonance (LSPR)-based photonic crystal fiber (PCF) sensor created by embedding an array of gold nanospheres into the first layer of air holes of the PCF. We present a comprehensive analysis based on progressive variation of the refractive indices of the analytes as well as the sizes of the nanospheres. In the proposed sensing scheme, the refractive indices of the analytes are varied from 1 to 1.41 RIU, accompanied by alteration of the nanosphere sizes over the range 40-70 nm. The entire study is carried out for PCFs of different materials (viz., phosphate and crown) and the corresponding results are analyzed and compared. We observe a declining trend in modal loss in each set of PCFs as the analyte RI increases, with lower loss observed for the crown-based PCF. The sensor shows a highest sensitivity of ∼27,000 nm/RIU for the crown-based PCF with 70 nm nanospheres, with an average wavelength-interrogation sensitivity of ∼5333.53 nm/RIU. For the phosphate-based PCF, the highest sensitivity is found to be ∼18,000 nm/RIU, with an average interrogation sensitivity of ∼4555.56 nm/RIU, for 40 nm Au nanospheres. Moreover, additional sensing parameters are examined to support the design of the modelled LSPR-based photonic crystal fiber sensor. As such, the resolution (R), limit of detection (LOD) and sensitivity (S) of the proposed sensor in each case (viz., phosphate and crown PCF) are discussed using the wavelength interrogation technique. The proposed study provides a basis for detailed investigation of the LSPR phenomenon in PCFs utilizing noble-metal nanospheres (AuNPs).
Economic evaluation of DNA ploidy analysis vs liquid-based cytology for cervical screening.
Nghiem, V T; Davies, K R; Beck, J R; Follen, M; MacAulay, C; Guillaud, M; Cantor, S B
2015-06-09
DNA ploidy analysis involves automated quantification of chromosomal aneuploidy, a potential marker of progression toward cervical carcinoma. We evaluated the cost-effectiveness of this method for cervical screening, comparing five ploidy strategies (using different numbers of aneuploid cells as cut points) with liquid-based Papanicolaou smear and no screening. A state-transition Markov model simulated the natural history of HPV infection and possible progression into cervical neoplasia in a cohort of 12-year-old females. The analysis evaluated cost in 2012 US$ and effectiveness in quality-adjusted life-years (QALYs) from a health-system perspective throughout a lifetime horizon in the US setting. We calculated incremental cost-effectiveness ratios (ICERs) to determine the best strategy. The robustness of optimal choices was examined in deterministic and probabilistic sensitivity analyses. In the base-case analysis, the ploidy 4 cell strategy was cost-effective, yielding an increase of 0.032 QALY and an ICER of $18,264/QALY compared to no screening. For most scenarios in the deterministic sensitivity analysis, the ploidy 4 cell strategy was the only cost-effective strategy. Cost-effectiveness acceptability curves showed that this strategy was more likely to be cost-effective than the Papanicolaou smear. Compared to the liquid-based Papanicolaou smear, screening with a DNA ploidy strategy appeared less costly and comparably effective.
Lee, Ga-Young; Kim, Jeonghun; Kim, Ju Han; Kim, Kiwoong; Seong, Joon-Kyung
2014-01-01
Mobile healthcare applications are a growing trend, and the prevalence of dementia in modern society is steadily increasing. Among the degenerative brain diseases that cause dementia, Alzheimer disease (AD) is the most common. The purpose of this study was to identify AD patients using magnetic resonance imaging in the mobile environment. We propose an incremental classification method for mobile healthcare systems. Our classification method is based on incremental learning for AD diagnosis and AD prediction using cortical thickness data and hippocampus shape. We constructed a classifier based on principal component analysis and linear discriminant analysis. We performed initial learning and mobile subject classification. Initial learning is the group-learning part carried out on our server. Our smartphone agent implements the mobile classification and shows various results. With cortical thickness data analysis alone, the discrimination accuracy was 87.33% (sensitivity 96.49% and specificity 64.33%). When cortical thickness data and hippocampal shape were analyzed together, the achieved accuracy was 87.52% (sensitivity 96.79% and specificity 63.24%). In this paper, we presented a classification method based on online learning for AD diagnosis employing both cortical thickness data and hippocampal shape analysis data. Our method was implemented on smartphone devices and discriminated AD patients from the normal group.
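A PCA-plus-LDA classifier of the kind described can be assembled with scikit-learn; the feature matrix below is random stand-in data, not cortical thickness measurements.

```python
# Sketch of a PCA + LDA classification pipeline using scikit-learn.
# The "cortical thickness" features and labels are simulated placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
X = rng.standard_normal((120, 300))          # 120 subjects x 300 thickness features
y = rng.integers(0, 2, 120)                  # 0 = control, 1 = AD (hypothetical)
X[y == 1, :50] -= 0.8                        # inject a group difference (thinning)

clf = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```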
Peiris, Ramila H; Ignagni, Nicholas; Budman, Hector; Moresoli, Christine; Legge, Raymond L
2012-09-15
Characterization of the interactions between natural colloidal/particulate- and protein-like matter is important for understanding their contribution to different physicochemical phenomena such as membrane fouling, adsorption of bacteria onto surfaces and various applications of nanoparticles in nanomedicine and nanotoxicology. Precise interpretation of the extent of such interactions is, however, hindered by the limitations of most characterization methods in allowing rapid, sensitive and accurate measurements. Here we report on a fluorescence-based excitation-emission matrix (EEM) approach in combination with principal component analysis (PCA) to extract information related to the interaction between natural colloidal/particulate- and protein-like matter. Surface plasmon resonance (SPR) analysis and fiber-optic probe based surface fluorescence measurements were used to confirm that the proposed approach can be used to characterize colloidal/particulate-protein interactions at the physical level. This method has the potential to provide a fundamental measurement of these interactions, with the advantage that it can be performed rapidly and with high sensitivity. Copyright © 2012 Elsevier B.V. All rights reserved.
Global sensitivity analysis of multiscale properties of porous materials
NASA Astrophysics Data System (ADS)
Um, Kimoon; Zhang, Xuan; Katsoulakis, Markos; Plechac, Petr; Tartakovsky, Daniel M.
2018-02-01
Ubiquitous uncertainty about pore geometry inevitably undermines the veracity of pore- and multi-scale simulations of transport phenomena in porous media. It raises two fundamental issues: sensitivity of effective material properties to pore-scale parameters and statistical parameterization of Darcy-scale models that accounts for pore-scale uncertainty. Homogenization-based maps of pore-scale parameters onto their Darcy-scale counterparts facilitate both sensitivity analysis (SA) and uncertainty quantification. We treat uncertain geometric characteristics of a hierarchical porous medium as random variables to conduct global SA and to derive probabilistic descriptors of effective diffusion coefficients and effective sorption rate. Our analysis is formulated in terms of solute transport diffusing through a fluid-filled pore space, while sorbing to the solid matrix. Yet it is sufficiently general to be applied to other multiscale porous media phenomena that are amenable to homogenization.
[Study on the automatic parameters identification of water pipe network model].
Jia, Hai-Feng; Zhao, Qi-Feng
2010-01-01
Based on an analysis of problems in the development and application of water pipe network models, automatic identification of model parameters is regarded as a key bottleneck for model application in water supply enterprises. A methodology for automatic identification of water pipe network model parameters based on GIS and SCADA databases is proposed. The core algorithms for automatic parameter identification are then studied: RSA (Regionalized Sensitivity Analysis) is used for automatic recognition of sensitive parameters, and MCS (Monte Carlo Sampling) is used for automatic identification of parameters; the detailed technical route based on RSA and MCS is presented. A module for automatic identification of water pipe network model parameters was developed. Finally, taking a typical water pipe network as a case, a case study on automatic identification of water pipe network model parameters was conducted and satisfactory results were achieved.
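RSA splits Monte Carlo runs into behavioral and non-behavioral sets and flags a parameter as sensitive when the two sets have clearly different parameter distributions (e.g., by a Kolmogorov-Smirnov statistic). A sketch with a hypothetical toy network response, not the authors' module:

```python
# Sketch of Regionalized Sensitivity Analysis (RSA): Monte Carlo sample the
# parameters, split runs into behavioral/non-behavioral by an error threshold,
# and compare the two parameter distributions with a KS statistic.
# The pipe-network "model" here is a hypothetical placeholder.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(4)
n = 2000
roughness = rng.uniform(80, 150, n)                # Hazen-Williams C (sensitive)
demand_factor = rng.uniform(0.8, 1.2, n)           # assumed insensitive here

observed = 50.0
simulated = 0.5 * roughness + rng.normal(0, 2, n)  # toy pressure prediction
error = np.abs(simulated - observed)

behavioral = error < np.quantile(error, 0.2)       # best 20% of runs
for name, par in [("roughness", roughness), ("demand_factor", demand_factor)]:
    stat, _ = ks_2samp(par[behavioral], par[~behavioral])
    print(f"{name}: KS = {stat:.2f}")              # larger KS -> more sensitive
```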
NASA Astrophysics Data System (ADS)
Aravindran, S.; Sahilah, A. M.; Aminah, A.
2014-09-01
Halal surveillance of halal ingredients incorporated in surimi-based products was studied using polymerase chain reaction (PCR)-Southern hybridization on-chip analysis. The primers used in this technique target the mitochondrial DNA (mtDNA) cytochrome b (cyt b) gene sequence, which is able to differentiate seven species (beef, chicken, duck, goat, buffalo, lamb and pork) on a single chip. Seventeen (n = 17*3) different brands of surimi-based product were purchased randomly from Selangor local markets in January 2013. Of the 17 brands, 3 (n = 3*3) were positive for chicken DNA, 1 (n = 1*3) was positive for goat DNA, and in the remaining 13 brands (n = 13*3) no DNA species was detected. The sensitivity of the PCR-Southern hybridization primers for detecting each meat species was 0.1 ng. In the present study, it is evident that PCR-Southern hybridization analysis offers reliable results owing to its highly specific and sensitive detection of non-halal additives, such as incorporated plasma proteins, in surimi-based products.
Virtual reality measures in neuropsychological assessment: a meta-analytic review.
Neguț, Alexandra; Matu, Silviu-Andrei; Sava, Florin Alin; David, Daniel
2016-02-01
Virtual reality-based assessment is a new paradigm for neuropsychological evaluation that might provide a more ecological assessment compared to paper-and-pencil or computerized neuropsychological assessment. Previous research has focused on the use of virtual reality in neuropsychological assessment, but no meta-analysis has focused on the sensitivity of virtual reality-based measures of cognitive processes in various populations. We found eighteen studies that compared cognitive performance between clinical patients and healthy controls on virtual reality measures. Based on a random-effects model, the results indicated a large effect size in favor of healthy controls (g = .95). For executive functions, memory and visuospatial analysis, subgroup analysis revealed moderate to large effect sizes, with superior performance in the case of healthy controls. Participants' mean age, type of clinical condition, type of exploration within the virtual reality environments, and the presence of distractors were significant moderators. Our findings support the sensitivity of virtual reality-based measures in detecting cognitive impairment. They highlight the possibility of using virtual reality measures for neuropsychological assessment in research applications, as well as in clinical practice.
Sweetapple, Christine; Fu, Guangtao; Butler, David
2013-09-01
This study investigates sources of uncertainty in the modelling of greenhouse gas emissions from wastewater treatment, through the use of local and global sensitivity analysis tools, and contributes to an in-depth understanding of wastewater treatment modelling by revealing critical parameters and parameter interactions. One-factor-at-a-time sensitivity analysis is used to screen model parameters and identify those with significant individual effects on three performance indicators: total greenhouse gas emissions, effluent quality and operational cost. Sobol's method enables identification of parameters with significant higher order effects and of particular parameter pairs to which model outputs are sensitive. Use of a variance-based global sensitivity analysis tool to investigate parameter interactions enables identification of important parameters not revealed in one-factor-at-a-time sensitivity analysis. These interaction effects have not been considered in previous studies and thus provide a better understanding of wastewater treatment plant model characterisation. It was found that uncertainty in modelled nitrous oxide emissions is the primary contributor to uncertainty in total greenhouse gas emissions, due largely to the interaction effects of three nitrogen conversion modelling parameters. The higher order effects of these parameters are also shown to be a key source of uncertainty in effluent quality. Copyright © 2013 Elsevier Ltd. All rights reserved.
Zhu, Shan; Pang, Fufei; Huang, Sujuan; Zou, Fang; Dong, Yanhua; Wang, Tingyun
2015-06-01
Atomic layer deposition (ALD) technology is introduced to fabricate a high-sensitivity refractive index sensor based on an adiabatic tapered optical fiber. Al2O3 nanofilms of different thicknesses are coated around the fiber taper precisely and uniformly using different numbers of deposition cycles. Owing to the high refractive index of the Al2O3 nanofilm, an asymmetric Fabry-Perot-like interferometer is constructed along the fiber taper. Based on a ray-optic analysis, total internal reflection occurs at the nanofilm-surrounding interface. As the ambient refractive index changes, the phase delay induced by the Goos-Hänchen shift changes and the transmission resonant spectrum shifts correspondingly, which can be utilized to realize a high-sensitivity sensor. A sensitivity of 6008 nm/RIU is demonstrated by depositing 3000 layers of Al2O3 nanofilm, for ambient refractive indices close to 1.33. This high-sensitivity refractive index sensor is expected to have wide applications in biochemical sensing.
Fragment-based prediction of skin sensitization using recursive partitioning
NASA Astrophysics Data System (ADS)
Lu, Jing; Zheng, Mingyue; Wang, Yong; Shen, Qiancheng; Luo, Xiaomin; Jiang, Hualiang; Chen, Kaixian
2011-09-01
Skin sensitization is an important toxic endpoint in the risk assessment of chemicals. In this paper, structure-activity relationship analysis was performed on the skin sensitization potential of 357 compounds with local lymph node assay data. Structural fragments were extracted by GASTON (GrAph/Sequence/Tree extractiON) from the training set. Eight fragments with accuracy significantly higher than 0.73 (p < 0.1) were retained to construct an indicator fragment descriptor. This fragment descriptor and eight other physicochemical descriptors closely related to the endpoint were used to construct a recursive partitioning tree (RP tree) for classification. The balanced accuracies for the training set, test set I, and test set II in the leave-one-out model were 0.846, 0.800, and 0.809, respectively. The results highlight that the fragment-based RP tree is a preferable method for identifying skin sensitizers. Moreover, the selected fragments provide useful structural information for exploring sensitization mechanisms, and the RP tree creates a graphic tree that identifies the most important properties associated with skin sensitization. These can provide guidance for the design of drugs with a lower sensitization level.
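Recursive partitioning on fragment indicators plus physicochemical descriptors can be sketched with a standard decision-tree learner; the fingerprints, descriptor, and labels below are simulated, not the LLNA data set.

```python
# Sketch of a recursive-partitioning (decision tree) classifier on binary
# structural-fragment indicators; all data are simulated placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
frags = rng.integers(0, 2, size=(357, 8)).astype(float)   # 8 fragment indicators
logp = rng.normal(2.0, 1.0, size=357)                     # a physicochemical descriptor
X = np.column_stack([frags, logp])
y = (frags[:, 0] + frags[:, 3] + 0.3 * logp > 2.0).astype(int)  # toy sensitizer label

tree = DecisionTreeClassifier(max_depth=4, class_weight="balanced", random_state=0)
print("CV balanced accuracy:",
      cross_val_score(tree, X, y, cv=5, scoring="balanced_accuracy").mean())
```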
Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives
NASA Technical Reports Server (NTRS)
Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.
2016-01-01
A new engine cycle analysis tool, called Pycycle, was recently built using the OpenMDAO framework. This tool uses equilibrium chemistry based thermodynamics, and provides analytic derivatives. This allows for stable and efficient use of gradient-based optimization and sensitivity analysis methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a multi-point turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
Chapinal, Núria; Schumaker, Brant A; Joly, Damien O; Elkin, Brett T; Stephen, Craig
2015-07-01
We estimated the sensitivity and specificity of the caudal-fold skin test (CFT), the fluorescent polarization assay (FPA), and the rapid lateral-flow test (RT) for the detection of Mycobacterium bovis in free-ranging wild wood bison (Bison bison athabascae), in the absence of a gold standard, by using Bayesian analysis, and then used those estimates to forecast the performance of a pairwise combination of tests in parallel. In 1998-99, 212 wood bison from Wood Buffalo National Park (Canada) were tested for M. bovis infection using CFT and two serologic tests (FPA and RT). The sensitivity and specificity of each test were estimated using a three-test, one-population, Bayesian model allowing for conditional dependence between FPA and RT. The sensitivity and specificity of the combination of CFT and each serologic test in parallel were calculated assuming conditional independence. The test performance estimates were influenced by the prior values chosen. However, the rank of tests and combinations of tests based on those estimates remained constant. The CFT was the most sensitive test and the FPA was the least sensitive, whereas RT was the most specific test and CFT was the least specific. In conclusion, given the fact that gold standards for the detection of M. bovis are imperfect and difficult to obtain in the field, Bayesian analysis holds promise as a tool to rank tests and combinations of tests based on their performance. Combining a skin test with an animal-side serologic test, such as RT, increases sensitivity in the detection of M. bovis and is a good approach to enhance disease eradication or control in wild bison.
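Under the conditional-independence assumption stated, the parallel combination of two tests (positive if either test is positive) follows the standard formulas Se = 1 - (1 - Se1)(1 - Se2) and Sp = Sp1*Sp2. A small sketch with illustrative values, not the study's posterior estimates:

```python
# Sketch of combining two diagnostic tests in parallel (positive if either is
# positive), assuming conditional independence between the tests:
#   Se_par = 1 - (1 - Se1)*(1 - Se2),  Sp_par = Sp1 * Sp2
def parallel(se1, sp1, se2, sp2):
    se = 1 - (1 - se1) * (1 - se2)   # sensitivity gains from either detection
    sp = sp1 * sp2                   # specificity requires both tests negative
    return se, sp

# Illustrative values for a skin test plus a serologic test (hypothetical)
se, sp = parallel(se1=0.80, sp1=0.75, se2=0.65, sp2=0.98)
print(f"parallel sensitivity {se:.2f}, parallel specificity {sp:.2f}")
```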
Cifuentes, Ricardo A; Murillo-Rojas, Juan; Avella-Vargas, Esperanza
2016-03-03
In the search to prevent hemorrhages associated with anticoagulant therapy, a major goal is to validate predictors of sensitivity to warfarin. However, previous studies in Colombia that included polymorphisms in the VKORC1 and CYP2C9 genes as predictors reported different algorithm performances to explain dose variations, and did not evaluate the prediction of sensitivity to warfarin. To determine the accuracy of the pharmacogenetic analysis, which includes the CYP2C9 *2 and *3 and VKORC1 1639G>A polymorphisms in predicting patients' sensitivity to warfarin at the Hospital Militar Central, a reference center for patients born in different parts of Colombia. Demographic and clinical data were obtained from 130 patients with stable doses of warfarin for more than two months. Next, their genotypes were obtained through a melting curve analysis. After verifying the Hardy-Weinberg equilibrium of the genotypes from the polymorphisms, a statistical analysis was done, which included multivariate and predictive approaches. A pharmacogenetic model that explained 52.8% of dose variation (p<0.001) was built, which was only 4% above the performance resulting from the same data using the International Warfarin Pharmacogenetics Consortium algorithm. The model predicting the sensitivity achieved an accuracy of 77.8% and included age (p=0.003), polymorphisms *2 and *3 (p=0.002) and polymorphism 1639G>A (p<0.001) as predictors. These results in a mixed population support the prediction of sensitivity to warfarin based on polymorphisms in VKORC1 and CYP2C9 as a valid approach in Colombian patients.
Sensitivity Analysis of the Integrated Medical Model for ISS Programs
NASA Technical Reports Server (NTRS)
Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.
2016-01-01
Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values to each generated output value. The method is termed partial because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral part of the overall verification, validation, and credibility review of IMM v4.0.
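PRCC can be computed by rank-transforming inputs and outputs, regressing out the other inputs from both the input of interest and the output, and correlating the residuals. A sketch on hypothetical data, not IMM simulations:

```python
# Sketch of Partial Rank Correlation Coefficients (PRCC): rank-transform,
# remove the linear effect of the other inputs by regression, then correlate
# the residuals. Model and inputs are hypothetical stand-ins.
import numpy as np
from scipy.stats import rankdata

def prcc(X, y):
    R = np.column_stack([rankdata(c) for c in X.T])   # ranked inputs
    ry = rankdata(y)                                  # ranked output
    out = []
    for j in range(R.shape[1]):
        others = np.column_stack([np.delete(R, j, axis=1), np.ones(len(ry))])
        res_x = R[:, j] - others @ np.linalg.lstsq(others, R[:, j], rcond=None)[0]
        res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
        out.append(np.corrcoef(res_x, res_y)[0, 1])
    return np.array(out)

rng = np.random.default_rng(6)
X = rng.uniform(size=(500, 3))                        # condition-incidence inputs
y = 5 * X[:, 0] + np.exp(X[:, 1]) + rng.normal(0, 0.1, 500)
print(np.round(prcc(X, y), 2))
```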
Zou, Ling; Guo, Qian; Xu, Yi; Yang, Biao; Jiao, Zhuqing; Xiang, Jianbo
2016-04-29
Functional magnetic resonance imaging (fMRI) is an important tool in neuroscience for assessing connectivity and interactions between distant areas of the brain. To find and characterize the coherent patterns of brain activity as a means of identifying brain systems for the cognitive reappraisal of emotion task, both density-based k-means clustering and independent component analysis (ICA) methods can be applied to characterize the interactions between brain regions involved in cognitive reappraisal of emotion. Our results reveal that, compared with the ICA method, the density-based k-means clustering method provides higher clustering sensitivity. In addition, it is more sensitive to regions with relatively weak functional connections. The study thus concludes that, during the reception of emotional stimuli, the most evident activation areas are mainly distributed in the frontal lobe, the cingulum and near the hypothalamus. Furthermore, the density-based k-means clustering method provides a more reliable approach for follow-up studies of brain functional connectivity.
Small-volume cavity cell using hollow optical fiber for Raman scattering-based gas detection
NASA Astrophysics Data System (ADS)
Okita, Y.; Katagiri, T.; Matsuura, Y.
2011-03-01
A highly sensitive Raman cell based on a hollow optical fiber, suitable for real-time breath analysis, is reported. A hollow optical fiber with an inner silver coating is used as both a gas cell and a Stokes-light collector. The very small cell volume of only 0.4 ml or less enables fast response and real-time measurement of trace gases. To increase the sensitivity, the cell is arranged in a cavity consisting of a long-pass filter and a highly reflective mirror. The sensitivity of the cavity cell is more than two times higher than that of the cell without the cavity.
Bashir, Mustafa R; Merkle, Elmar M; Smith, Alastair D; Boll, Daniel T
2012-02-01
To assess whether in vivo dual-ratio Dixon discrimination can improve detection of diffuse liver disease, specifically steatosis, iron deposition and combined disease over traditional single-ratio in/opposed phase analysis. Seventy-one patients with biopsy-proven (17.7 ± 17.0 days) hepatic steatosis (n = 16), iron deposition (n = 11), combined deposition (n = 3) and neither disease (n = 41) underwent MR examinations. Dual-echo in/opposed-phase MR with Dixon water/fat reconstructions were acquired. Analysis consisted of: (a) single-ratio hepatic region-of-interest (ROI)-based assessment of in/opposed ratios; (b) dual-ratio hepatic ROI assessment of in/opposed and fat/water ratios; (c) computer-aided dual-ratio assessment evaluating all hepatic voxels. Disease-specific thresholds were determined; statistical analyses assessed disease-dependent voxel ratios, based on single-ratio (a) and dual-ratio (b and c) techniques. Single-ratio discrimination succeeded in identifying iron deposition (I/O(Ironthreshold)<0.88) and steatosis (I/O(Fatthreshold>1.15)) from normal parenchyma, sensitivity 70.0%; it failed to detect combined disease. Dual-ratio discrimination succeeded in identifying abnormal hepatic parenchyma (F/W(Normalthreshold)>0.05), sensitivity 96.7%; logarithmic functions for iron deposition (I/O(Irondiscriminator)
NASA Technical Reports Server (NTRS)
1971-01-01
The findings, conclusions, and recommendations relative to the investigations conducted to evaluate tests for classifying pyrotechnic materials and end items as to their hazard potential are presented. Information required to establish an applicable means of determining the potential hazards of pyrotechnics is described. Hazard evaluations are based on the peak overpressure or impulse resulting from the explosion as a function of distance from the source. Other hazard classification tests include dust ignition sensitivity, impact ignition sensitivity, spark ignition sensitivity, and differential thermal analysis.
Ion-Sensitive Field-Effect Transistor for Biological Sensing
Lee, Chang-Soo; Kim, Sang Kyu; Kim, Moonil
2009-01-01
In recent years there has been great progress in applying FET-type biosensors for highly sensitive biological detection. Among them, the ISFET (ion-sensitive field-effect transistor) is one of the most intriguing approaches in electrical biosensing technology. Here, we review some of the main advances in this field over the past few years, explore its application prospects, and discuss the main issues, approaches, and challenges, with the aim of stimulating a broader interest in developing ISFET-based biosensors and extending their applications for reliable and sensitive analysis of various biomolecules such as DNA, proteins, enzymes, and cells. PMID:22423205
On the sensitivity of complex, internally coupled systems
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1988-01-01
A method is presented for computing sensitivity derivatives with respect to independent (input) variables for complex, internally coupled systems, while avoiding the cost and inaccuracy of finite differencing performed on the entire system analysis. The method entails two alternative algorithms: the first is based on the classical implicit function theorem formulated on residuals of governing equations, and the second develops the system sensitivity equations in a new form using the partial (local) sensitivity derivatives of the output with respect to the input of each part of the system. A few application examples are presented to illustrate the discussion.
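The second algorithm is the idea that later became known as the Global Sensitivity Equations: each discipline supplies only local partial derivatives, and the total derivatives of the coupled system follow from a single linear solve. The sketch below, with two hypothetical scalar subsystems, illustrates that idea under stated assumptions rather than reproducing the paper's implementation.

```python
import numpy as np

# Coupled system: y1 = f1(x, y2), y2 = f2(x, y1). Both functions are
# illustrative stand-ins for disciplinary analyses.
f1 = lambda x, y2: x + 0.5 * y2
f2 = lambda x, y1: x ** 2 - 0.3 * y1

x0 = 2.0
# Converge the coupled analysis by fixed-point iteration
y1, y2 = 0.0, 0.0
for _ in range(100):
    y1, y2 = f1(x0, y2), f2(x0, y1)

# Local (partial) sensitivities of each subsystem output
df1_dy2, df1_dx = 0.5, 1.0
df2_dy1, df2_dx = -0.3, 2.0 * x0

# System sensitivity equations: (I - J_coupling) dY/dx = dF/dx
A = np.array([[1.0, -df1_dy2],
              [-df2_dy1, 1.0]])
b = np.array([df1_dx, df2_dx])
dy_dx = np.linalg.solve(A, b)   # total derivatives dy1/dx, dy2/dx
print(dy_dx)
```

Only local partials enter the linear system, so no finite differencing of the full coupled analysis is needed, which is precisely the cost and accuracy advantage the abstract claims.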
Field Evaluation of the Pedostructure-Based Model (Kamel®)
USDA-ARS?s Scientific Manuscript database
This study involves a field evaluation of the pedostructure-based model Kamel and comparisons between Kamel and the Hydrus-1D model for predicting profile soil moisture. This paper also presents a sensitivity analysis of Kamel with an evaluation field site used as the base scenario. The field site u...
NASA Astrophysics Data System (ADS)
Elshafei, Y.; Tonts, M.; Sivapalan, M.; Hipsey, M. R.
2016-06-01
It is increasingly acknowledged that effective management of water resources requires a holistic understanding of the coevolving dynamics inherent in the coupled human-hydrology system. One of the fundamental information gaps concerns the sensitivity of coupled system feedbacks to various endogenous system properties and exogenous societal contexts. This paper takes a previously calibrated sociohydrology model and applies an idealized implementation, in order to: (i) explore the sensitivity of emergent dynamics resulting from bidirectional feedbacks to assumptions regarding (a) internal system properties that control the internal dynamics of the coupled system and (b) the external sociopolitical context; and (ii) interpret the results within the context of water resource management decision making. The analysis investigates feedback behavior in three ways, (a) via a global sensitivity analysis on key parameters and assessment of relevant model outputs, (b) through a comparative analysis based on hypothetical placement of the catchment along various points on the international sociopolitical gradient, and (c) by assessing the effects of various direct management intervention scenarios. Results indicate the presence of optimum windows that might offer the greatest positive impact per unit of management effort. Results further advocate management tools that encourage an adaptive learning, community-based approach with respect to water management, which are found to enhance centralized policy measures. This paper demonstrates that it is possible to use a place-based sociohydrology model to make abstractions as to the dynamics of bidirectional feedback behavior, and provide insights as to the efficacy of water management tools under different circumstances.
Water Vapor Sensitivity Analysis for AVIRIS Radiative-Transfer-Model-Based Reflectance Inversion
NASA Technical Reports Server (NTRS)
Green, R.
2000-01-01
As with other imaging spectrometers, AVIRIS measures the upwelling spectral radiance incident at the sensor. Most research and application objectives for AVIRIS are based on the molecular absorption and scattering features expressed in the surface reflectance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chinthavali, Madhu Sudhan; Wang, Zhiqiang
This paper presents a detailed parametric sensitivity analysis for a wireless power transfer (WPT) system in an electric vehicle application. Specifically, several key parameters for sensitivity analysis of a series-parallel (SP) WPT system are first derived from an analytical modeling approach, including the equivalent input impedance, active/reactive power, and DC voltage gain. Based on the derivation, the impact of the primary-side compensation capacitance, coupling coefficient, transformer leakage inductance, and different load conditions on the DC voltage gain curve and power curve is studied and analyzed. It is shown that the desired power can be achieved by changing only frequency or voltage, depending on the design value of the coupling coefficient. However, in some cases both have to be modified in order to achieve the required power transfer.
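As a hedged illustration of the kind of analytical model the paper derives, the sketch below computes the input impedance and load-voltage gain of a generic series-parallel compensated link over a frequency sweep, using a simple mutual-inductance circuit model. All component values and the 85 kHz design frequency are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

Lp = Ls = 100e-6          # coil inductances [H], assumed
k = 0.2                   # coupling coefficient, assumed
M = k * np.sqrt(Lp * Ls)  # mutual inductance
C = 1.0 / (Lp * (2 * np.pi * 85e3) ** 2)   # compensate both sides near 85 kHz
Rp, Rs, RL = 0.1, 0.1, 10.0                # winding and load resistances [ohm]

f = np.linspace(60e3, 110e3, 500)
w = 2 * np.pi * f
Z_cs_par_RL = 1.0 / (1j * w * C + 1.0 / RL)       # parallel Cs || RL
Zs = Rs + 1j * w * Ls + Z_cs_par_RL               # secondary loop impedance
Zr = (w * M) ** 2 / Zs                            # impedance reflected to primary
Zin = Rp + 1j * w * Lp + 1.0 / (1j * w * C) + Zr  # series-compensated primary

Ip = 1.0 / Zin                    # primary current for a 1 V input
Is = 1j * w * M * Ip / Zs         # induced secondary current
gain = np.abs(Is * Z_cs_par_RL)   # |V_load / V_in|
print(f[np.argmax(gain)], gain.max())   # frequency and value of peak gain
```

Sweeping k in such a model shows the abstract's point: for some coupling values the required power is reachable by frequency tuning alone, while for others the input voltage must change too.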
Spectral negentropy based sidebands and demodulation analysis for planet bearing fault diagnosis
NASA Astrophysics Data System (ADS)
Feng, Zhipeng; Ma, Haoqun; Zuo, Ming J.
2017-12-01
Planet bearing vibration signals are highly complex due to intricate kinematics (involving both revolution and spinning) and strong multiple modulations (including not only the fault-induced amplitude modulation and frequency modulation, but also additional amplitude modulations due to load zone passing, the time-varying vibration transfer path, and the time-varying angle between the gear pair mesh lines of action and the fault impact force vector), leading to difficulty in fault feature extraction. Rolling element bearing fault diagnosis essentially relies on detection of fault-induced repetitive impulses carried by resonance vibration, but these are usually contaminated by noise and therefore hard to detect. This further adds complexity to planet bearing diagnostics. Spectral negentropy is able to reveal the frequency distribution of repetitive transients, thus providing an approach to identify the optimal frequency band of a filter for separating repetitive impulses. In this paper, we find the informative frequency band (including the center frequency and bandwidth) of bearing-fault-induced repetitive impulses using the spectral negentropy based infogram. In the Fourier spectrum, we identify planet bearing faults according to sideband characteristics around the center frequency. For demodulation analysis, we extract the sensitive component based on the informative frequency band revealed by the infogram. In the amplitude demodulated spectrum (squared envelope spectrum) of the sensitive component, we diagnose planet bearing faults by matching the peaks present with the theoretical fault characteristic frequencies. We further decompose the sensitive component into mono-component intrinsic mode functions (IMFs) to estimate their instantaneous frequencies, and select a sensitive IMF with an instantaneous frequency fluctuating around the center frequency for frequency demodulation analysis. In the frequency demodulated spectrum (Fourier spectrum of the instantaneous frequency) of the selected IMF, we discern planet bearing fault causes according to the peaks present. The proposed spectral negentropy infogram based spectrum and demodulation analysis method is illustrated via analysis of a numerically simulated signal. Considering the unique load bearing feature of planet bearings, experimental validations under both no-load and loading conditions are performed to verify the derived fault symptoms and the proposed method. Localized faults on the outer race, rolling element and inner race are successfully diagnosed.
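A minimal sketch of the amplitude demodulation step, assuming the informative band has already been located (here a synthetic 3 kHz resonance excited by impacts repeating at 87 Hz): band-pass filter, take the squared envelope via the Hilbert transform, and look for the repetition rate in its spectrum. The sampling rate, band edges, and fault frequency are illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 20000.0                     # sampling rate [Hz], illustrative
t = np.arange(0, 1.0, 1 / fs)
# Synthetic signal: a 3 kHz resonance excited by impacts at 87 Hz, plus noise
impacts = (np.sin(2 * np.pi * 87 * t) > 0.999).astype(float)
ringdown = np.exp(-t[:200] * 2000) * np.sin(2 * np.pi * 3000 * t[:200])
x = np.convolve(impacts, ringdown, mode="same") + 0.1 * np.random.randn(t.size)

# Band-pass around the assumed informative band (center 3 kHz, bandwidth 1 kHz)
b, a = butter(4, [2500 / (fs / 2), 3500 / (fs / 2)], btype="band")
xf = filtfilt(b, a, x)

env2 = np.abs(hilbert(xf)) ** 2               # squared envelope
spec = np.abs(np.fft.rfft(env2 - env2.mean()))
freqs = np.fft.rfftfreq(env2.size, 1 / fs)
print(freqs[np.argmax(spec[freqs < 500])])    # expect a peak near 87 Hz
```

In practice the band would come from the infogram rather than being fixed, and the envelope-spectrum peaks would be matched against the theoretical planet bearing characteristic frequencies.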
Dai, Heng; Ye, Ming; Walker, Anthony P.; ...
2017-03-28
A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
NASA Astrophysics Data System (ADS)
Wang, Daosheng; Cao, Anzhou; Zhang, Jicai; Fan, Daidu; Liu, Yongzhi; Zhang, Yue
2018-06-01
Based on the theory of inverse problems, a three-dimensional sigma-coordinate cohesive sediment transport model with the adjoint data assimilation is developed. In this model, the physical processes of cohesive sediment transport, including deposition, erosion and advection-diffusion, are parameterized by corresponding model parameters. These parameters are usually poorly known and have traditionally been assigned empirically. By assimilating observations into the model, the model parameters can be estimated using the adjoint method; meanwhile, the data misfit between model results and observations can be decreased. The model developed in this work contains numerous parameters; therefore, it is necessary to investigate the parameter sensitivity of the model, which is assessed by calculating a relative sensitivity function and the gradient of the cost function with respect to each parameter. The results of parameter sensitivity analysis indicate that the model is sensitive to the initial conditions, inflow open boundary conditions, suspended sediment settling velocity and resuspension rate, while the model is insensitive to horizontal and vertical diffusivity coefficients. A detailed explanation of the pattern of sensitivity analysis is also given. In ideal twin experiments, constant parameters are estimated by assimilating 'pseudo' observations. The results show that the sensitive parameters are estimated more easily than the insensitive parameters. The conclusions of this work can provide guidance for the practical applications of this model to simulate sediment transport in the study area.
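A minimal sketch of the kind of relative sensitivity function used for such screening: the dimensionless quantity S_p = (p/Q) ∂Q/∂p, estimated here by central finite differences around reference parameter values. The model function and parameter values are hypothetical stand-ins for the sediment transport model.

```python
import numpy as np

def model(params):
    ws, M0, Kh = params        # settling velocity, resuspension rate, diffusivity
    return 10.0 * ws ** 0.8 + 5.0 * M0 - 0.01 * Kh   # illustrative response

def relative_sensitivity(model, params, j, h=1e-4):
    """Dimensionless relative sensitivity (p/Q) * dQ/dp for parameter j."""
    p = np.asarray(params, dtype=float)
    dp = h * abs(p[j]) if p[j] != 0 else h
    up, dn = p.copy(), p.copy()
    up[j] += dp
    dn[j] -= dp
    dQ_dp = (model(up) - model(dn)) / (2 * dp)   # central difference
    return p[j] / model(p) * dQ_dp

params = [1e-3, 5e-4, 100.0]
for j, name in enumerate(["ws", "M0", "Kh"]):
    print(name, relative_sensitivity(model, params, j))
```

A parameter with a relative sensitivity near zero, like the diffusivity here, is exactly the kind the twin experiments found hard to recover by assimilation.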
Posada-Quintero, Hugo F; Florian, John P; Orjuela-Cañón, Álvaro D; Chon, Ki H
2016-09-01
Time-domain indices of electrodermal activity (EDA) have been used as a marker of sympathetic tone. However, they often show high variation between subjects and low consistency, which has precluded their general use as a marker of sympathetic tone. To examine whether power spectral density analysis of EDA can provide more consistent results, we recently performed a variety of sympathetic tone-evoking experiments (43). We found a significant increase in the spectral power in the frequency range of 0.045 to 0.25 Hz when sympathetic tone-evoking stimuli were induced. The sympathetic tone assessed by the power spectral density of EDA was found to have lower variation and more sensitivity for certain, but not all, stimuli compared with the time-domain analysis of EDA. We surmise that this lack of sensitivity under certain sympathetic tone-inducing conditions with time-invariant spectral analysis of EDA may lie in its inability to characterize the time-varying dynamics of the sympathetic tone. To overcome the disadvantages of time-domain and time-invariant power spectral indices of EDA, we developed a highly sensitive index of sympathetic tone based on time-frequency analysis of EDA signals. Its efficacy was tested using experiments designed to elicit sympathetic dynamics. Twelve subjects underwent four tests known to elicit sympathetic tone arousal: cold pressor, tilt table, stand test, and the Stroop task. We hypothesized that a more sensitive measure of sympathetic control could be developed using time-varying spectral analysis. Variable frequency complex demodulation, a recently developed technique for time-frequency analysis, was used to obtain spectral amplitudes associated with EDA. We found that the time-varying spectral frequency band 0.08-0.24 Hz was most responsive to stimulation. Spectral power at frequencies higher than 0.24 Hz was determined not to be related to the sympathetic dynamics, as it comprised less than 5% of the total power. The mean value of the time-varying spectral amplitudes in the frequency band 0.08-0.24 Hz was used as the index of sympathetic tone, termed TVSymp. TVSymp was found to be overall the most sensitive to the stimuli, as evidenced by a low coefficient of variation (0.54), and higher consistency (intra-class correlation, 0.96), sensitivity (Youden's index > 0.75), and area under the receiver operating characteristic (ROC) curve (>0.8, accuracy > 0.88) compared with time-domain and time-invariant spectral indices, including heart rate variability. Copyright © 2016 the American Physiological Society.
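The index itself is built on variable frequency complex demodulation; as a rough, hedged approximation of the same idea, the sketch below averages time-varying spectral amplitudes of a synthetic EDA trace in the 0.08-0.24 Hz band using a short-time Fourier transform. The sampling rate, window length, and signal are illustrative, and the STFT is a stand-in for the paper's VFCDM technique.

```python
import numpy as np
from scipy.signal import stft

fs = 4.0                                  # EDA sampling rate [Hz], illustrative
t = np.arange(0, 300, 1 / fs)
# Synthetic EDA: slow tonic level plus a 0.15 Hz sympathetic-band oscillation
eda = 2.0 + 0.2 * np.sin(2 * np.pi * 0.15 * t) + 0.05 * np.random.randn(t.size)

f, tt, Z = stft(eda - eda.mean(), fs=fs, nperseg=int(60 * fs))
band = (f >= 0.08) & (f <= 0.24)
tvsymp_like = np.abs(Z[band, :]).mean()   # mean time-varying amplitude in band
print(tvsymp_like)
```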
Barron, Daniel S; Fox, Peter T; Pardoe, Heath; Lancaster, Jack; Price, Larry R; Blackmon, Karen; Berry, Kristen; Cavazos, Jose E; Kuzniecky, Ruben; Devinsky, Orrin; Thesen, Thomas
2015-01-01
Noninvasive markers of brain function could yield biomarkers in many neurological disorders. Disease models constrained by coordinate-based meta-analysis are likely to increase this yield. Here, we evaluate a thalamic model of temporal lobe epilepsy that we proposed in a coordinate-based meta-analysis and extended in a diffusion tractography study of an independent patient population. Specifically, we evaluated whether thalamic functional connectivity (resting-state fMRI-BOLD) with temporal lobe areas can predict seizure onset laterality, as established with intracranial EEG. Twenty-four lesional and non-lesional temporal lobe epilepsy patients were studied. No significant differences in functional connection strength in patient and control groups were observed with Mann-Whitney Tests (corrected for multiple comparisons). Notwithstanding the lack of group differences, individual patient difference scores (from control mean connection strength) successfully predicted seizure onset zone as shown in ROC curves: discriminant analysis (two-dimensional) predicted seizure onset zone with 85% sensitivity and 91% specificity; logistic regression (four-dimensional) achieved 86% sensitivity and 100% specificity. The strongest markers in both analyses were left thalamo-hippocampal and right thalamo-entorhinal cortex functional connection strength. Thus, this study shows that thalamic functional connections are sensitive and specific markers of seizure onset laterality in individual temporal lobe epilepsy patients. This study also advances an overall strategy for the programmatic development of neuroimaging biomarkers in clinical and genetic populations: a disease model informed by coordinate-based meta-analysis was used to anatomically constrain individual patient analyses.
Adkins, Daniel E.; McClay, Joseph L.; Vunck, Sarah A.; Batman, Angela M.; Vann, Robert E.; Clark, Shaunna L.; Souza, Renan P.; Crowley, James J.; Sullivan, Patrick F.; van den Oord, Edwin J.C.G.; Beardsley, Patrick M.
2014-01-01
Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In the present study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphetamine-induced sensitization measures were generated by statistically modeling longitudinal activity data for eight inbred strains of mice. Subsequent to behavioral testing, nontargeted liquid and gas chromatography-mass spectrometry profiling was performed on 48 brain samples, yielding 301 metabolite levels per sample after quality control. Association testing between metabolite levels and three primary dimensions of behavioral sensitization (total distance, stereotypy and margin time) showed four robust, significant associations at a stringent metabolome-wide significance threshold (false discovery rate < 0.05). Results implicated homocarnosine, a dipeptide of GABA and histidine, in total distance sensitization, GABA metabolite 4-guanidinobutanoate and pantothenate in stereotypy sensitization, and myo-inositol in margin time sensitization. Secondary analyses indicated that these associations were independent of concurrent methamphetamine levels and, with the exception of the myo-inositol association, suggest a mechanism whereby strain-based genetic variation produces specific baseline neurochemical differences that substantially influence the magnitude of MA-induced sensitization. These findings demonstrate the utility of mouse metabolomics for identifying novel biomarkers, and developing more comprehensive neurochemical models, of psychostimulant sensitization. PMID:24034544
The Role of Attention in Somatosensory Processing: A Multi-trait, Multi-method Analysis
Puts, Nicolaas A. J.; Mahone, E. Mark; Edden, Richard A. E.; Tommerdahl, Mark; Mostofsky, Stewart H.
2016-01-01
Sensory processing abnormalities in autism have largely been described by parent report. This study used a multi-method (parent-report and measurement), multi-trait (tactile sensitivity and attention) design to evaluate somatosensory processing in ASD. Results showed multiple significant within-method (e.g., parent report of different traits)/cross-trait (e.g., attention and tactile sensitivity) correlations, suggesting that parent-reported tactile sensory dysfunction and performance-based tactile sensitivity describe different behavioral phenomena. Additionally, both parent-reported tactile functioning and performance-based tactile sensitivity measures were significantly associated with measures of attention. Findings suggest that sensory (tactile) processing abnormalities in ASD are multifaceted, and may partially reflect a more global deficit in behavioral regulation (including attention). Challenges of relying solely on parent-report to describe sensory difficulties faced by children/families with ASD are also highlighted. PMID:27448580
Fines classification based on sensitivity to pore-fluid chemistry
Jang, Junbong; Santamarina, J. Carlos
2016-01-01
The 75-μm particle size is used to discriminate between fine and coarse grains. Further analysis of fine grains is typically based on the plasticity chart. Whereas pore-fluid-chemistry-dependent soil response is a salient and distinguishing characteristic of fine grains, pore-fluid chemistry is not addressed in current classification systems. Liquid limits obtained with electrically contrasting pore fluids (deionized water, 2-M NaCl brine, and kerosene) are combined to define the soil “electrical sensitivity.” Liquid limit and electrical sensitivity can be effectively used to classify fine grains according to their fluid-soil response into no-, low-, intermediate-, or high-plasticity fine grains of low, intermediate, or high electrical sensitivity. The proposed methodology benefits from the accumulated experience with liquid limit in the field and addresses the needs of a broader range of geotechnical engineering problems.
Functionalized Gold Nanoparticles for the Detection of C-Reactive Protein
António, Maria
2018-01-01
C-reactive protein (CRP) is a very important biomarker of infection and inflammation for a number of diseases. Routine CRP measurements with high sensitivity and reliability are highly relevant to the assessment of states of inflammation and the efficacy of treatment intervention, and require the development of very sensitive, selective, fast, robust and reproducible assays. Gold nanoparticles (Au NPs) are distinguished for their unique electrical and optical properties and the ability to conjugate with biomolecules. Au NP-based probes have attracted considerable attention in the last decade in the analysis of biological samples due to their simplicity, high sensitivity and selectivity. Thus, this article aims to be a critical and constructive analysis of the literature of the last three years regarding the advances made in the development of bioanalytical assays based on gold nanoparticles for the in vitro detection and quantification of C-reactive protein from biological samples. Current methods for Au NP synthesis and the strategies for surface modification aiming at selectivity towards CRP are highlighted. PMID:29597295
Automated diagnosis of Alzheimer's disease with multi-atlas based whole brain segmentations
NASA Astrophysics Data System (ADS)
Luo, Yuan; Tang, Xiaoying
2017-03-01
Voxel-based analysis is widely used in quantitative analysis of structural brain magnetic resonance imaging (MRI) and automated disease detection, such as for Alzheimer's disease (AD). However, noise at the voxel level may cause low sensitivity to AD-induced structural abnormalities. This can be addressed with a whole brain structural segmentation approach, which greatly reduces the dimension of the features (the number of voxels). In this paper, we propose an automatic AD diagnosis system that combines such whole brain segmentations with advanced machine learning methods. We used a multi-atlas segmentation technique to parcellate T1-weighted images into 54 distinct brain regions and extract their structural volumes to serve as the features for principal-component-analysis-based dimension reduction and support-vector-machine-based classification. The relationship between the number of retained principal components (PCs) and the diagnosis accuracy was systematically evaluated, in a leave-one-out fashion, based on 28 AD subjects and 23 age-matched healthy subjects. Our approach yielded strong classification results, with 96.08% overall accuracy achieved using the three foremost PCs. In addition, our approach yielded 96.43% specificity, 100% sensitivity, and 0.9891 area under the receiver operating characteristic curve.
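A minimal sketch of this classification pipeline in Python with scikit-learn: regional volumes are standardized, reduced to three principal components, and classified by a linear SVM under leave-one-out cross-validation. The synthetic volumes merely stand in for the 54 multi-atlas regional measurements.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_ad, n_hc, n_regions = 28, 23, 54
X_ad = rng.normal(0.95, 0.05, size=(n_ad, n_regions))   # mild simulated atrophy
X_hc = rng.normal(1.00, 0.05, size=(n_hc, n_regions))
X = np.vstack([X_ad, X_hc])
y = np.array([1] * n_ad + [0] * n_hc)

# Standardize -> PCA (3 components, as in the paper) -> linear SVM
clf = make_pipeline(StandardScaler(), PCA(n_components=3), SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.3f}")
```

Wrapping PCA inside the cross-validated pipeline matters: fitting the components on the full dataset before splitting would leak test information into the reduction step.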
Robust motion tracking based on adaptive speckle decorrelation analysis of OCT signal.
Wang, Yuewen; Wang, Yahui; Akansu, Ali; Belfield, Kevin D; Hubbi, Basil; Liu, Xuan
2015-11-01
Speckle decorrelation analysis of optical coherence tomography (OCT) signals has been used in motion tracking. In our previous study, we demonstrated that the cross-correlation coefficient (XCC) between A-scans has an explicit functional dependency on the magnitude of lateral displacement (δx). In this study, we evaluated the sensitivity of speckle motion tracking using the derivative of the function XCC(δx) with respect to the variable δx. We demonstrated that the magnitude of the derivative can be maximized; in other words, the sensitivity of OCT speckle tracking can be optimized by using signals with an appropriate amount of decorrelation for the XCC calculation. Based on this finding, we developed an adaptive speckle decorrelation analysis strategy to achieve motion tracking with optimized sensitivity. Briefly, we used subsequently acquired A-scans and A-scans obtained at larger time intervals to obtain multiple values of XCC, and chose the XCC value that maximized motion tracking sensitivity for the displacement calculation. Instantaneous motion speed can be calculated by dividing the obtained displacement by the time interval between the A-scans involved in the XCC calculation. We implemented the above-described algorithm in real time using a graphics processing unit (GPU) and demonstrated its effectiveness in reconstructing distortion-free OCT images using data obtained from a manually scanned OCT probe. The adaptive speckle tracking method was validated in manually scanned OCT imaging, on a phantom as well as on in vivo skin tissue.
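A hedged sketch of the adaptive selection step. A Gaussian decorrelation model, XCC(δx) = exp(-(δx/w)²), is assumed purely for illustration; the paper derives its own explicit XCC(δx) relation. Pairs of A-scans at growing time intervals are tested, and the first pair whose correlation falls in a mid-range, high-sensitivity band is used to compute displacement and speed.

```python
import numpy as np

W = 10e-6   # speckle decorrelation width [m], assumed

def xcc(a, b):
    """Normalized cross-correlation coefficient between two A-scans."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def displacement_from_xcc(c):
    # Invert the assumed Gaussian model XCC(dx) = exp(-(dx/W)^2)
    return W * np.sqrt(-np.log(max(c, 1e-6)))

def track_speed(ascans, dt, lo=0.3, hi=0.8):
    """Pick the smallest A-scan interval whose XCC lies in the sensitive band."""
    for k in range(1, len(ascans)):
        c = xcc(ascans[0], ascans[k])
        if lo <= c <= hi:                 # enough decorrelation to be sensitive
            return displacement_from_xcc(c) / (k * dt)
    return None

# Demo with synthetic, progressively decorrelating A-scans
rng = np.random.default_rng(1)
base = rng.normal(size=512)
ascans = [0.97 ** k * base + np.sqrt(1 - 0.97 ** (2 * k)) * rng.normal(size=512)
          for k in range(10)]
print(track_speed(ascans, dt=1e-4))   # estimated speed [m/s]
```

The mid-range band stands in for the paper's derivative-based criterion: where XCC is near 1 or near 0 its slope with respect to displacement is small, so tracking there is insensitive.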
A methodology to estimate uncertainty for emission projections through sensitivity analysis.
Lumbreras, Julio; de Andrés, Juan Manuel; Pérez, Javier; Borge, Rafael; de la Paz, David; Rodríguez, María Encarnación
2015-04-01
Air pollution abatement policies must be based on quantitative information on current and future emissions of pollutants. As uncertainties in emission projections are inevitable and traditional statistical treatments of uncertainty are highly time- and resource-consuming, a simplified methodology for nonstatistical uncertainty estimation based on sensitivity analysis is presented in this work. The methodology was applied to the "with measures" scenario for Spain, specifically to the 12 highest-emitting sectors for greenhouse gases and air pollutants. Examples of the methodology's application to two important sectors (power plants, and agriculture and livestock) are shown and explained in depth. Uncertainty bands were obtained up to 2020 by modifying the driving factors of the 12 selected sectors, and the methodology was tested against a recomputed emission trend under a low economic-growth perspective and against official figures for 2010, showing very good performance. A solid understanding and quantification of the uncertainties related to atmospheric emission inventories and projections provides useful information for policy negotiations. However, as many of those uncertainties are irreducible, there is interest in how they could be managed in order to derive robust policy conclusions. Taking this into account, a method developed to use sensitivity analysis as a source of information for deriving nonstatistical uncertainty bands for emission projections is presented and applied to Spain. This method simplifies uncertainty assessment and allows other countries to take advantage of their sensitivity analyses.
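A minimal sketch of the general idea under stated assumptions: perturb a sector's driving factor around the central scenario and take the envelope of the resulting projections as a nonstatistical uncertainty band. The sector, base-year emissions, and growth rates are invented for illustration and are not the paper's figures.

```python
import numpy as np

years = np.arange(2010, 2021)
E0 = 100.0                 # base-year emissions of one sector [kt], assumed
central_growth = 0.02      # central driving-factor assumption

def projection(growth):
    """Project emissions forward with a constant annual driving-factor growth."""
    return E0 * (1 + growth) ** (years - years[0])

# Vary the driving factor over a plausible range (e.g., low/high economic growth)
scenarios = [projection(g) for g in (0.00, 0.02, 0.04)]
low = np.min(scenarios, axis=0)
high = np.max(scenarios, axis=0)
print(f"2020 central: {projection(central_growth)[-1]:.1f} kt, "
      f"band: [{low[-1]:.1f}, {high[-1]:.1f}] kt")
```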
Recent approaches in sensitive enantioseparations by CE.
Sánchez-Hernández, Laura; Castro-Puyana, María; Marina, María Luisa; Crego, Antonio L
2012-01-01
The latest strategies and instrumental improvements for enhancing detection sensitivity in chiral analysis by CE are reviewed in this work. Following the previous reviews by García-Ruiz et al. (Electrophoresis 2006, 27, 195-212) and Sánchez-Hernández et al. (Electrophoresis 2008, 29, 237-251; Electrophoresis 2010, 31, 28-43), this review includes papers published from June 2009 to May 2011. These works describe the use of offline and online sample treatment techniques, online sample preconcentration techniques based on electrophoretic principles, and alternative detection systems to UV-Vis to increase detection sensitivity. The application of the above-mentioned strategies, either alone or combined, to improve sensitivity in the enantiomeric analysis of a broad range of samples, such as pharmaceutical, biological, food and environmental samples, enables detection limits as low as 10⁻¹² M. The use of microchips to achieve sensitive chiral separations is also discussed. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Exhaled molecular profiles in the assessment of cystic fibrosis and primary ciliary dyskinesia.
Paff, T; van der Schee, M P; Daniels, J M A; Pals, G; Postmus, P E; Sterk, P J; Haarman, E G
2013-09-01
Early diagnosis and monitoring of disease activity are essential in cystic fibrosis (CF) and primary ciliary dyskinesia (PCD). We aimed to establish exhaled molecular profiles as a first step in assessing the potential of breath analysis. Exhaled breath was analyzed by electronic nose in 25 children with CF, 25 with PCD and 23 controls. Principal component reduction and canonical discriminant analysis were used to construct internally cross-validated ROC curves. CF and PCD patients had significantly different breath profiles when compared to healthy controls (CF: sensitivity 84%, specificity 65%; PCD: sensitivity 88%, specificity 52%) and when compared with each other (sensitivity 84%, specificity 60%). Patients with and without exacerbations had significantly different breath profiles (CF: sensitivity 89%, specificity 56%; PCD: sensitivity 100%, specificity 90%). Exhaled molecular profiles thus differ significantly between patients with CF, patients with PCD and controls. The eNose may have potential in disease monitoring based on the influence of exacerbations on the VOC profile. Copyright © 2012 European Cystic Fibrosis Society. Published by Elsevier B.V. All rights reserved.
Cha, Eunju; Kim, Sohee; Kim, Ho Jun; Lee, Kang Mi; Kim, Ki Hun; Kwon, Oh-Seung; Lee, Jaeick
2015-01-01
This study compared the sensitivity of various separation and ionization methods, including gas chromatography with an electron ionization source (GC-EI), liquid chromatography with an electrospray ionization source (LC-ESI), and liquid chromatography with a silver ion coordination ion spray source (LC-Ag(+) CIS), coupled to a mass spectrometer (MS) for steroid analysis. Chromatographic conditions, mass spectrometric transitions, and ion source parameters were optimized. The majority of steroids in GC-EI/MS/MS and LC-Ag(+) CIS/MS/MS analysis showed higher sensitivities than those obtained with the other analytical methods. The limits of detection (LODs) of 65 steroids by GC-EI/MS/MS, 68 steroids by LC-Ag(+) CIS/MS/MS, 56 steroids by GC-EI/MS, 54 steroids by LC-ESI/MS/MS, and 27 steroids by GC-ESI/MS/MS were below the cut-off value of 2.0 ng/mL. LODs of steroids that formed protonated ions in LC-ESI/MS/MS analysis were all lower than the cut-off value. Several steroids, such as those with an unconjugated C3-hydroxyl with C17-hydroxyl structure, showed higher sensitivities in GC-EI/MS/MS analysis relative to the LC-based methods. The steroids containing 4,9,11-triene structures showed relatively poor sensitivities in GC-EI/MS and GC-ESI/MS/MS analysis. The results of this study provide information that may be useful for selecting suitable analytical methods for confirmatory analysis of steroids. Copyright © 2015 John Wiley & Sons, Ltd.
Quantitative relationship between the local lymph node assay and human skin sensitization assays.
Schneider, K; Akkan, Z
2004-06-01
The local lymph node assay (LLNA) is a new test method which allows for the quantitative assessment of sensitizing potency in the mouse. Here, we investigate the quantitative correlation between results from the LLNA and two human sensitization tests--specifically, human repeat insult patch tests (HRIPTs) and human maximization tests (HMTs). Data for 57 substances were evaluated, of which 46 showed skin sensitizing properties in human tests, whereas 11 yielded negative results in humans. For better comparability data from mouse and human tests were transformed to applied doses per skin area, which ranged over four orders of magnitude for the substances considered. Regression analysis for the 46 human sensitizing substances revealed a significant positive correlation between the LLNA and human tests. The correlation was better between LLNA and HRIPT data (n=23; r=0.77) than between LLNA and HMT data (n=38; r=0.65). The observed scattering of data points is related to various uncertainties, in part associated with insufficiencies of data from older HMT studies. Predominantly negative results in the LLNA for another 11 substances which showed no skin sensitizing activity in human maximization tests further corroborate the correspondence between LLNA and human tests. Based on this analysis, the LLNA can be considered a reliable basis for relative potency assessments for skin sensitizers. Proposals are made for the regulatory exploitation of the LLNA: four potency groups can be established, and assignment of substances to these groups according to the outcome of the LLNA can be used to characterize skin sensitizing potency in substance-specific assessments. Moreover, based on these potency groups, a more adequate consideration of sensitizing substances in preparations becomes possible. It is proposed to replace the current single concentration limit for skin sensitizers in preparations, which leads to an all or nothing classification of a preparation as sensitizing to skin ("R43") in the European Union, by differentiated concentration limits derived from the limits for the four potency groups.
Milton, Alyssa C; Ellis, Louise A; Davenport, Tracey A; Burns, Jane M; Hickie, Ian B
2017-09-26
Web-based self-report surveying has increased in popularity, as it can rapidly yield large samples at a low cost. Despite this increase in popularity, in the area of youth mental health there is a distinct lack of research comparing the results of Web-based self-report surveys with the more traditional and widely accepted computer-assisted telephone interviewing (CATI). The Second Australian Young and Well National Survey 2014 sought to compare differences in respondent response patterns using matched items on CATI versus a Web-based self-report survey. The aim of this study was to examine whether responses varied as a result of item sensitivity, that is, an item's susceptibility to exaggeration or underreporting, and to assess whether certain subgroups demonstrated this effect to a greater extent. A subsample of young people aged 16 to 25 years (N=101), recruited through the Second Australian Young and Well National Survey 2014, completed identical items on two occasions: via CATI and via Web-based self-report survey. Respondents also rated perceived item sensitivity. When comparing CATI with the Web-based self-report survey, a Wilcoxon signed-rank analysis showed that respondents answered 14 of the 42 matched items in a significantly different way. Significant variation in responses (CATI vs Web-based) was more frequent if the item was also rated by respondents as highly sensitive in nature. Specifically, 63% (5/8) of the high-sensitivity items, 43% (3/7) of the neutral-sensitivity items, and 0% (0/4) of the low-sensitivity items were answered in a significantly different manner by respondents when comparing their matched CATI and Web-based question responses. The items that were perceived as highly sensitive by respondents and demonstrated response variability included the following: sexting activities, body image concerns, experience of diagnosis, and suicidal ideation. For high-sensitivity items, a regression analysis showed that respondents who were male (beta=-.19, P=.048) or who were not in employment, education, or training (NEET; beta=-.32, P=.001) were significantly more likely to provide different responses on matched items when responding via CATI as compared with the Web-based self-report survey. The Web-based self-report survey, however, demonstrated some evidence of avidity and attrition bias. Compared with CATI, Web-based self-report surveys are highly cost-effective and had higher rates of self-disclosure on sensitive items, particularly for respondents who identify as male and NEET. A drawback of Web-based surveying methodologies, however, is the limited control over avidity bias and the greater incidence of attrition bias. These findings have important implications for further development of survey methods in the area of health and well-being, especially when considering research topics (in this case diagnosis, suicidal ideation, sexting, and body image) and the groups being recruited (young people, males, and NEET). ©Alyssa C Milton, Louise A Ellis, Tracey A Davenport, Jane M Burns, Ian B Hickie. Originally published in JMIR Mental Health (http://mental.jmir.org), 26.09.2017.
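A minimal sketch of the matched-item comparison: a Wilcoxon signed-rank test on paired CATI versus Web-based responses to a single item. The 5-point responses below are synthetic, with the Web mode nudged upward to mimic higher self-disclosure.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
cati = rng.integers(1, 6, 101)                       # 5-point responses by phone
web = np.clip(cati + rng.integers(0, 2, 101), 1, 5)  # slightly higher disclosure online
stat, p = wilcoxon(cati, web)                        # paired, nonparametric test
print(f"Wilcoxon W={stat:.1f}, p={p:.4f}")
```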
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Haitao, E-mail: liaoht@cae.ac.cn
The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time-averaged quantities for chaotic dynamical systems. The key idea is to recast the time-averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, which forms the augmented equations of motion, is proposed to calculate the time-averaged integration variable, and the sensitivity coefficients are obtained by solving the augmented differential equations. Applying the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient that depends on the final state of the Lagrange multipliers. An LU factorization technique for calculating the Lagrange multipliers improves both convergence behavior and computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed when using the direct differentiation sensitivity analysis method.
Lu, Hongzhi; Quan, Shuai; Xu, Shoufang
2017-11-08
In this work, we developed a simple and sensitive ratiometric fluorescent assay for sensing trinitrotoluene (TNT) based on the inner filter effect (IFE) between gold nanoparticles (AuNPs) and ratiometric fluorescent nanoparticles (RFNs), designed by hybridizing green-emissive carbon dots (CDs) and red-emissive quantum dots (QDs) into a silica sphere as a fluorophore pair. AuNPs in their dispersed state act as a powerful absorber that quenches the CDs, while aggregated AuNPs quench the QDs in IFE-based fluorescent assays, as a result of the complementary overlap between the absorption spectrum of the AuNPs and the emission spectrum of the RFNs. Because TNT induces the aggregation of AuNPs, the addition of TNT quenches the fluorescence of the QDs while recovering the fluorescence of the CDs, making ratiometric fluorescent detection of TNT feasible. The present IFE-based ratiometric fluorescent sensor can detect TNT ranging from 0.1 to 270 nM, with a detection limit of 0.029 nM. In addition, the developed method was successfully applied to investigate TNT in water and soil samples, with satisfactory recoveries ranging from 95 to 103% and precision below 4.5%. The simple sensing approach proposed here could improve the sensitivity of colorimetric analysis by changing the ultraviolet analysis to ratiometric fluorescent analysis, and could promote the development of a dual-mode detection system.
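A minimal sketch of how a ratiometric readout of this kind is typically calibrated: regress the CD/QD intensity ratio against TNT concentration and estimate the detection limit as 3σ_blank divided by the slope. All numbers below are synthetic, not the paper's calibration data.

```python
import numpy as np

conc = np.array([0.0, 0.1, 1.0, 10.0, 50.0, 100.0, 270.0])   # TNT [nM]
# Synthetic CD/QD intensity ratios rising with concentration, plus readout noise
ratio = 0.50 + 0.004 * conc + np.random.default_rng(0).normal(0, 0.01, conc.size)

slope, intercept = np.polyfit(conc, ratio, 1)   # linear calibration
sigma_blank = 0.01                              # std of repeated blanks, assumed
lod = 3 * sigma_blank / slope                   # common 3-sigma LOD estimate
print(f"slope={slope:.4f} per nM, LOD ~ {lod:.2f} nM")
```

Taking a ratio of two emission channels is what makes the readout robust: fluctuations in excitation intensity or sample turbidity scale both channels and largely cancel.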
Comparison of Fixed-Item and Response-Sensitive Versions of an Online Tutorial
ERIC Educational Resources Information Center
Grant, Lyle K.; Courtoreille, Marni
2007-01-01
This study is a comparison of 2 versions of an Internet-based tutorial that teaches the behavior-analysis concept of positive reinforcement. A fixed-item group of students studied a version of the tutorial that included 14 interactive examples and nonexamples of the concept. A response-sensitive group of students studied a different version of the…
A Survey of Shape Parameterization Techniques
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
1999-01-01
This paper provides a survey of shape parameterization techniques for multidisciplinary optimization and highlights some emerging ideas. The survey focuses on the suitability of available techniques for complex configurations, with suitability criteria based on the efficiency, effectiveness, ease of implementation, and availability of analytical sensitivities for geometry and grids. The paper also contains a section on field grid regeneration, grid deformation, and sensitivity analysis techniques.
The enhanced cyan fluorescent protein: a sensitive pH sensor for fluorescence lifetime imaging.
Poëa-Guyon, Sandrine; Pasquier, Hélène; Mérola, Fabienne; Morel, Nicolas; Erard, Marie
2013-05-01
pH is an important parameter that affects many functions of live cells, from protein structure and function to several crucial steps of their metabolism. Genetically encoded pH sensors based on pH-sensitive fluorescent proteins have been developed and used to monitor the pH of intracellular compartments. The quantitative analysis of pH variations can be performed either by ratiometric or by fluorescence lifetime detection. However, most available genetically encoded pH sensors are based on green and yellow fluorescent proteins and are not compatible with multicolor approaches. Taking advantage of the strong pH sensitivity of the enhanced cyan fluorescent protein (ECFP), we demonstrate here its suitability as a sensitive pH sensor using fluorescence lifetime imaging. The intracellular ECFP lifetime undergoes large changes (32%) over the pH 5 to pH 7 range, which allows pH to be measured to better than 0.2 pH units. By fusing ECFP with the granular protein chromogranin A, we successfully measured the pH in secretory granules of PC12 cells and performed a kinetic analysis of intragranular pH variations in living cells exposed to ammonium chloride.
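A hedged sketch of a lifetime-versus-pH calibration of the kind such FLIM sensing relies on: fit a Henderson-Hasselbalch-type sigmoid to measured lifetimes, then invert it to read pH from a new lifetime. The lifetimes, pKa starting guess, and data points are illustrative, not ECFP measurements from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def tau_model(ph, tau_acid, tau_base, pka):
    """Sigmoidal lifetime transition between acidic and basic limiting values."""
    return tau_acid + (tau_base - tau_acid) / (1 + 10 ** (pka - ph))

ph = np.array([4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5])
tau = np.array([1.9, 2.0, 2.3, 2.7, 3.1, 3.3, 3.4])   # synthetic lifetimes [ns]

popt, _ = curve_fit(tau_model, ph, tau, p0=[1.9, 3.4, 6.0])
tau_acid, tau_base, pka = popt

def ph_from_tau(t):
    # Algebraic inversion of the fitted sigmoid
    return pka - np.log10((tau_base - tau_acid) / (t - tau_acid) - 1)

print(ph_from_tau(2.5))   # read back a pH from a measured lifetime
```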
Space shuttle orbiter digital data processing system timing sensitivity analysis OFT ascent phase
NASA Technical Reports Server (NTRS)
Lagas, J. J.; Peterka, J. J.; Becker, D. A.
1977-01-01
Dynamic loads were investigated through simulation and analysis of the space shuttle orbiter digital data processing system (DDPS). Segments of the OFT ascent configuration were modeled utilizing the information management system interpretive model (IMSIM), a computerized simulation of the OFT hardware and software workload. System requirements for simulation of the OFT configuration were defined, and sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and these sensitivity analyses, a test design was developed for adapting, parameterizing, and executing IMSIM, using varying load and stress conditions for model execution. Analyses of the computer simulation runs are documented, including results, conclusions, and recommendations for DDPS improvements.
Li, Liang; Wang, Yiying; Xu, Jiting; Flora, Joseph R V; Hoque, Shamia; Berge, Nicole D
2018-08-01
Hydrothermal carbonization (HTC) is a wet, low temperature thermal conversion process that continues to gain attention for the generation of hydrochar. The importance of specific process conditions and feedstock properties on hydrochar characteristics is not well understood. To evaluate this, linear and non-linear models were developed to describe hydrochar characteristics based on data collected from HTC-related literature. A Sobol analysis was subsequently conducted to identify parameters that most influence hydrochar characteristics. Results from this analysis indicate that for each investigated hydrochar property, the model fit and predictive capability associated with the random forest models is superior to both the linear and regression tree models. Based on results from the Sobol analysis, the feedstock properties and process conditions most influential on hydrochar yield, carbon content, and energy content were identified. In addition, a variational process parameter sensitivity analysis was conducted to determine how feedstock property importance changes with process conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
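A hedged sketch of this workflow in Python: train a random forest surrogate on (synthetic) HTC data, then run a variance-based Sobol analysis over the surrogate with SALib. The feature names, ranges, and response are illustrative assumptions, not the literature-derived dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from SALib.sample import saltelli
from SALib.analyze import sobol

rng = np.random.default_rng(0)
# Synthetic training data: temperature [C], time [h], feedstock carbon [%]
X = np.column_stack([rng.uniform(180, 300, 400),
                     rng.uniform(0.5, 24, 400),
                     rng.uniform(30, 60, 400)])
y = 80 - 0.1 * X[:, 0] + 0.2 * X[:, 2] + rng.normal(0, 1, 400)  # yield [%]
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

problem = {"num_vars": 3,
           "names": ["temperature", "time", "carbon_content"],
           "bounds": [[180, 300], [0.5, 24], [30, 60]]}
samples = saltelli.sample(problem, 1024)          # Saltelli sampling design
Si = sobol.analyze(problem, rf.predict(samples))  # first/total-order indices
print(dict(zip(problem["names"], Si["S1"].round(3))))
```

Running the Sobol design through the fitted surrogate rather than new experiments is what makes the variance decomposition affordable for literature-compiled datasets like this one.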
Balsam, Joshua; Bruck, Hugh Alan; Kostov, Yordan; Rasooly, Avraham
2012-01-01
Optical technologies are important for biological analysis. Current biomedical optical analyses rely on high-cost, high-sensitivity optical detectors such as photomultipliers, avalanche photodiodes or cooled CCD cameras. In contrast, Webcams, mobile phones and other popular consumer electronics use lower-sensitivity, lower-cost optical components such as photodiodes or CMOS sensors. In order for consumer electronics devices, such as Webcams, to be useful for biomedical analysis, they must have increased sensitivity. We combined two strategies to increase the sensitivity of a CMOS-based fluorescence detector. We captured hundreds of low-sensitivity images using a Webcam in video mode, instead of the single image typically used in cooled CCD devices. We then used a computational approach consisting of an image stacking algorithm to remove the noise by combining all of the images into a single image. While video mode is widely used for dynamic scene imaging (e.g. movies or time-lapse photography), it is not normally used to capture a single static image; here it removes noise and increases sensitivity more than thirtyfold. The portable, battery-operated Webcam-based fluorometer system developed here consists of five modules: (1) a low-cost CMOS Webcam to monitor light emission, (2) a plate to perform assays, (3) filters and a multi-wavelength LED illuminator for fluorophore excitation, (4) a portable computer to acquire and analyze images, and (5) image stacking software for image enhancement. The samples consisted of various concentrations of fluorescein, ranging from 30 μM to 1000 μM, in a 36-well miniature plate. In the single-frame mode, the fluorometer's limit of detection (LOD) for fluorescein is ∼1000 μM, which is relatively insensitive. However, when used in video mode combined with image stacking enhancement, the LOD is dramatically reduced to 30 μM, a sensitivity similar to that of state-of-the-art ELISA plate photomultiplier-based readers. Numerous medical diagnostics assays rely on optical and fluorescence readers. Our novel combination of detection technologies, which is new to biodetection, may enable the development of new low-cost optical detectors based on an inexpensive Webcam (<$10). It has the potential to form the basis for high-sensitivity, low-cost medical diagnostics in resource-poor settings.
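A minimal sketch of why stacking works: averaging N frames of uncorrelated noise reduces its standard deviation by roughly √N, so a weak, constant fluorescence signal emerges from the noise floor. The frame count, noise level, and signal below are synthetic stand-ins for Webcam video of a weakly fluorescing well.

```python
import numpy as np

rng = np.random.default_rng(0)
true_signal = 2.0            # weak constant fluorescence level (arbitrary units)
n_frames = 400
# Noisy video frames: constant signal buried in sensor noise (sigma = 10)
frames = true_signal + rng.normal(0, 10.0, size=(n_frames, 64, 64))

single = frames[0]
stacked = frames.mean(axis=0)   # image stacking by frame averaging

print(f"single-frame SNR ~ {true_signal / single.std():.2f}")
print(f"stacked SNR ~ {true_signal / stacked.std():.2f}  (~sqrt({n_frames}) gain)")
```

Simple averaging assumes perfectly registered, identically exposed frames; real stacking pipelines typically add frame alignment and outlier rejection on top of this.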
Low maternal sensitivity at 6 months of age predicts higher BMI in 48-month-old girls but not boys.
Wendland, Barbara E; Atkinson, Leslie; Steiner, Meir; Fleming, Alison S; Pencharz, Paul; Moss, Ellen; Gaudreau, Hélène; Silveira, Patricia P; Arenovich, Tamara; Matthews, Stephen G; Meaney, Michael J; Levitan, Robert D
2014-11-01
Large population-based studies suggest that systematic measures of maternal sensitivity predict later risk for overweight and obesity. More work is needed to establish the developmental timing and potential moderators of this association. The current study examined the association between maternal sensitivity at 6 months of age and BMI z score measures at 48 months of age, and whether sex moderated this association. The sample was a longitudinal Canadian cohort of children followed from birth (the MAVAN project). This analysis was based on a dataset of 223 children (115 boys, 108 girls) who had structured assessments of maternal sensitivity at 6 months of age and 48-month BMI data available. Mother-child interactions were videotaped and systematically scored using the Maternal Behaviour Q-Sort (MBQS)-25 items, a standardized measure of maternal sensitivity. Linear mixed-effects models and logistic regression examined whether MBQS scores at 6 months predicted BMI at 48 months, controlling for other covariates. After controlling for weight-relevant covariates, there was a significant sex by MBQS interaction (P=0.015) in predicting 48-month BMI z scores. Further analysis revealed a strong negative association between MBQS scores and BMI in girls (P=0.01) but not boys (P=0.72). Logistic regression confirmed that, in girls only, low maternal sensitivity was associated with the higher BMI categories as defined by the WHO (i.e. "at risk for overweight" or above). A significant association between low maternal sensitivity at 6 months of age and high body mass indices was found in girls but not boys at 48 months of age. These data suggest for the first time that the link between low maternal sensitivity and early BMI z scores may differ between boys and girls. Copyright © 2014 Elsevier Ltd. All rights reserved.
Jiang, Hu; Li, Xiangmin; Xiong, Ying; Pei, Ke; Nie, Lijuan; Xiong, Yonghua
2017-02-28
A silver nanoparticle (AgNP)-based fluorescence-quenching lateral flow immunoassay with a competitive format (cLFIA) was developed for sensitive detection of ochratoxin A (OTA) in grape juice and wine samples. Ru(phen)₃²⁺-doped silica nanoparticles (RuNPs) were sprayed on the test and control line zones to provide background fluorescence signals. The AgNPs were designed as fluorescence quenchers of the RuNPs because they block the exciting light from reaching the RuNP molecules. The proposed method exhibited high sensitivity for OTA detection, with a detection limit of 0.06 µg/L under optimized conditions. The method also exhibited a good linear range for OTA quantitative analysis, from 0.08 µg/L to 5.0 µg/L. The reliability of the fluorescence-quenching cLFIA method was evaluated through analysis of OTA-spiked red grape wine and juice samples. The average recoveries ranged from 88.0% to 110.0% in red grape wine and from 92.0% to 110.0% in grape juice, and coefficients of variation below 10% indicated acceptable precision. In summary, the new AgNP-based fluorescence-quenching cLFIA is a simple, rapid, sensitive, and accurate method for quantitative detection of OTA in grape juice and wine or other foodstuffs.
Resilience through adaptation.
Ten Broeke, Guus A; van Voorn, George A K; Ligtenberg, Arend; Molenaar, Jaap
2017-01-01
Adaptation of agents through learning or evolution is an important component of the resilience of Complex Adaptive Systems (CAS). Without adaptation, the flexibility of such systems to cope with outside pressures would be much lower. To study the capabilities of CAS to adapt, social simulations with agent-based models (ABMs) provide a helpful tool. However, the value of ABMs for studying adaptation depends on the availability of methodologies for sensitivity analysis that can quantify resilience and adaptation in ABMs. In this paper we propose a sensitivity analysis methodology that is based on comparing time-dependent probability density functions of output of ABMs with and without agent adaptation. The differences between the probability density functions are quantified by the so-called earth-mover's distance. We use this sensitivity analysis methodology to quantify the probability of occurrence of critical transitions and other long-term effects of agent adaptation. To test the potential of this new approach, it is used to analyse the resilience of an ABM of adaptive agents competing for a common-pool resource. Adaptation is shown to contribute positively to the resilience of this ABM. If adaptation proceeds sufficiently fast, it may delay or avert the collapse of this system.
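To make the earth-mover's comparison concrete, here is a minimal Python sketch: two synthetic samples stand in for an ABM output distribution with and without agent adaptation, and SciPy's 1-Wasserstein distance quantifies how far apart they are. The distributions and sample sizes are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Hypothetical stand-ins for an ABM output (e.g., common-pool resource level)
# at one time point, with and without agent adaptation.
output_without_adaptation = rng.normal(loc=40.0, scale=8.0, size=5000)
output_with_adaptation = rng.normal(loc=55.0, scale=5.0, size=5000)

# Earth-mover's (1-Wasserstein) distance between the two empirical output
# distributions; repeating this per time step yields the time-dependent
# sensitivity measure the paper describes.
emd = wasserstein_distance(output_without_adaptation, output_with_adaptation)
print(f"earth-mover's distance: {emd:.2f}")
```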
Regression-based adaptive sparse polynomial dimensional decomposition for sensitivity analysis
NASA Astrophysics Data System (ADS)
Tang, Kunkun; Congedo, Pietro; Abgrall, Remi
2014-11-01
Polynomial dimensional decomposition (PDD) is employed in this work for global sensitivity analysis and uncertainty quantification of stochastic systems subject to a large number of random input variables. Due to the intimate connection between PDD and analysis of variance, PDD provides a simpler and more direct evaluation of the Sobol' sensitivity indices than polynomial chaos (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of the standard method unaffordable for real engineering applications. To address this curse of dimensionality, this work proposes a variance-based adaptive strategy that builds a cheap meta-model by sparse PDD with the PDD coefficients computed by regression. During this adaptive procedure, the PDD representation of the model contains only a few terms, so that the cost of repeatedly solving the linear system of the least-squares regression problem is negligible. The size of the final sparse-PDD representation is much smaller than the full PDD, since only significant terms are eventually retained. Consequently, far fewer calls to the deterministic model are required to compute the final PDD coefficients.
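As an illustration of the regression route to variance-based indices, the sketch below fits a small orthonormal Legendre basis to samples of a hypothetical model and reads Sobol'-type indices off the squared coefficients; the test function, sample size, and basis truncation are assumptions for illustration, not the paper's adaptive sparse setup.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(1)

def model(x1, x2):
    # Hypothetical test function standing in for the deterministic model.
    return x1 + 0.5 * x1**2 + 0.25 * x2 + 0.1 * x1 * x2

def phi(n, x):
    # Legendre polynomial of degree n, orthonormal w.r.t. the uniform
    # density on [-1, 1]: E[phi_n^2] = 1.
    c = np.zeros(n + 1)
    c[n] = 1.0
    return np.sqrt(2 * n + 1) * legendre.legval(x, c)

# Sample the inputs and evaluate the model.
N = 2000
x1, x2 = rng.uniform(-1, 1, N), rng.uniform(-1, 1, N)
y = model(x1, x2)

# PDD-style basis: univariate terms up to degree 2 plus one bivariate term.
terms = {"x1-deg1": phi(1, x1), "x1-deg2": phi(2, x1),
         "x2-deg1": phi(1, x2), "x2-deg2": phi(2, x2),
         "x1x2": phi(1, x1) * phi(1, x2)}
A = np.column_stack([np.ones(N)] + list(terms.values()))
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Orthonormality => the output variance decomposes into squared coefficients.
var_terms = dict(zip(terms, coef[1:] ** 2))
total_var = sum(var_terms.values())
S1 = (var_terms["x1-deg1"] + var_terms["x1-deg2"]) / total_var
S2 = (var_terms["x2-deg1"] + var_terms["x2-deg2"]) / total_var
S12 = var_terms["x1x2"] / total_var
print(f"S1={S1:.3f}  S2={S2:.3f}  S12={S12:.3f}")
```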
Performance of the dipstick screening test as a predictor of negative urine culture
Marques, Alexandre Gimenes; Doi, André Mario; Pasternak, Jacyr; Damascena, Márcio dos Santos; França, Carolina Nunes; Martino, Marinês Dalla Valle
2017-01-01
ABSTRACT Objective: To investigate whether the urine dipstick screening test can be used to predict urine culture results. Methods: A retrospective study conducted between January and December 2014 based on data from 8,587 patients with a medical order for urine dipstick test, urine sediment analysis and urine culture. Sensitivity, specificity, positive and negative predictive values were determined and ROC curve analysis was performed. Results: The percentage of positive cultures was 17.5%. Nitrite had 28% sensitivity and 99% specificity, with positive and negative predictive values of 89% and 87%, respectively. Leukocyte esterase had 79% sensitivity and 84% specificity, with positive and negative predictive values of 51% and 95%, respectively. The combination of positive nitrite or positive leukocyte esterase tests had 85% sensitivity and 84% specificity, with positive and negative predictive values of 53% and 96%, respectively. Positive urinary sediment (more than ten leukocytes per microliter) had 92% sensitivity and 71% specificity, with positive and negative predictive values of 40% and 98%, respectively. The combination of positive nitrite test and positive urinary sediment had 82% sensitivity and 99% specificity, with positive and negative predictive values of 91% and 98%, respectively. The combination of positive nitrite or leukocyte esterase tests and positive urinary sediment had the highest sensitivity (94%) and specificity (84%), with positive and negative predictive values of 58% and 99%, respectively. Based on ROC curve analysis, the best indicator of positive urine culture was the combination of positive leukocyte esterase or nitrite tests and positive urinary sediment, followed by positive leukocyte esterase and nitrite tests, positive urinary sediment alone, positive leukocyte esterase test alone, positive nitrite test alone and, finally, the combination of positive nitrite test and positive urinary sediment (AUC: 0.845, 0.844, 0.817, 0.814, 0.635 and 0.626, respectively). Conclusion: A negative urine culture can be predicted by negative dipstick test results. Therefore, this test may be a reliable predictor of negative urine culture. PMID:28444086
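The screening metrics quoted above all derive from a 2x2 table of test results against culture. A small helper makes the arithmetic explicit; the counts below are hypothetical, chosen only to illustrate the calculation, and are not the study's data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 contingency table."""
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical counts for a dipstick component evaluated against culture.
sens, spec, ppv, npv = diagnostic_metrics(tp=420, fp=410, fn=110, tn=7647)
print(f"sensitivity={sens:.0%} specificity={spec:.0%} PPV={ppv:.0%} NPV={npv:.0%}")
```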
Robles, A; Ruano, M V; Ribes, J; Seco, A; Ferrer, J
2014-04-01
The results of a global sensitivity analysis of a filtration model for submerged anaerobic MBRs (AnMBRs) are assessed in this paper. This study aimed to (1) identify the less- (or non-) influential factors of the model in order to facilitate model calibration and (2) validate the modelling approach (i.e. to determine the need for each of the proposed factors to be included in the model). The sensitivity analysis was conducted using a revised version of the Morris screening method. The dynamic simulations were conducted using long-term data obtained from an AnMBR plant fitted with industrial-scale hollow-fibre membranes. Of the 14 factors in the model, six were identified as influential, i.e. those calibrated using off-line protocols. A dynamic calibration (based on optimisation algorithms) of these influential factors was conducted. The resulting estimated model factors accurately predicted membrane performance.
Sensitivity Analysis of OECD Benchmark Tests in BISON
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swiler, Laura Painton; Gamble, Kyle; Schmidt, Rodney C.
2015-09-01
This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
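The correlation-based part of such a study is straightforward to reproduce in miniature. The sketch below samples a toy input matrix, evaluates a stand-in response, and ranks inputs by Pearson and Spearman coefficients; the model, sample count, and inputs are assumptions for illustration, not the BISON/Dakota setup.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(2)

# Hypothetical sampled inputs (300 samples x 3 parameters) and one scalar
# response standing in for, e.g., a fuel centerline temperature.
X = rng.uniform(0.0, 1.0, size=(300, 3))
y = 3.0 * X[:, 0] + np.exp(X[:, 1]) + 0.1 * rng.normal(size=300)

for j in range(X.shape[1]):
    r, _ = pearsonr(X[:, j], y)      # linear association
    rho, _ = spearmanr(X[:, j], y)   # rank (monotonic) association
    print(f"input {j}: Pearson r={r:+.2f}, Spearman rho={rho:+.2f}")
```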
Optimization Issues with Complex Rotorcraft Comprehensive Analysis
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Young, Katherine C.; Tarzanin, Frank J.; Hirsh, Joel E.; Young, Darrell K.
1998-01-01
This paper investigates the use of the general purpose automatic differentiation (AD) tool called Automatic Differentiation of FORTRAN (ADIFOR) as a means of generating sensitivity derivatives for use in Boeing Helicopter's proprietary comprehensive rotor analysis code (VII). ADIFOR transforms an existing computer program into a new program that performs a sensitivity analysis in addition to the original analysis. In this study both the pros (exact derivatives, no step-size problems) and cons (more CPU, more memory) of ADIFOR are discussed. The size (based on the number of lines) of the VII code after ADIFOR processing increased by 70 percent and resulted in substantial computer memory requirements at execution. The ADIFOR derivatives took about 75 percent longer to compute than the finite-difference derivatives. However, the ADIFOR derivatives are exact and are not functions of step-size. The VII sensitivity derivatives generated by ADIFOR are compared with finite-difference derivatives. The ADIFOR and finite-difference derivatives are used in three optimization schemes to solve a low vibration rotor design problem.
Lee, Sangyeop; Choi, Junghyun; Chen, Lingxin; Park, Byungchoon; Kyong, Jin Burm; Seong, Gi Hun; Choo, Jaebum; Lee, Yeonjung; Shin, Kyung-Hoon; Lee, Eun Kyu; Joo, Sang-Woo; Lee, Kyeong-Hee
2007-05-08
A rapid and highly sensitive trace analysis technique for determining malachite green (MG) in a polydimethylsiloxane (PDMS) microfluidic sensor was investigated using surface-enhanced Raman spectroscopy (SERS). A zigzag-shaped PDMS microfluidic channel was fabricated for efficient mixing between MG analytes and aggregated silver colloids. Under optimal flow velocity conditions, MG molecules were effectively adsorbed onto silver nanoparticles while flowing along the upper and lower zigzag-shaped PDMS channel. A quantitative analysis of MG was performed based on the measured peak height at 1615 cm(-1) in its SERS spectrum. The limit of detection, using the SERS microfluidic sensor, was found to be below the 1-2 ppb level; this low detection limit is comparable to the result of the LC-MS detection method. In the present study, we introduce a new conceptual detection technology, using a SERS microfluidic sensor, for highly sensitive trace analysis of MG in water.
Natsch, Andreas; Gfeller, Hans
2008-12-01
A key step in the skin sensitization process is the formation of a covalent adduct between skin sensitizers and endogenous proteins and/or peptides in the skin. Based on this mechanistic understanding, there is a renewed interest in in vitro assays to determine the reactivity of chemicals toward peptides in order to predict their sensitization potential. A standardized peptide reactivity assay yielded a promising predictivity. This published assay is based on high-performance liquid chromatography with ultraviolet detection to quantify peptide depletion after incubation with test chemicals. We had observed that peptide depletion may be due to either adduct formation or peptide oxidation. Here we report a modified assay based on both liquid chromatography-mass spectrometry (LC-MS) analysis and detection of free thiol groups. This approach allows simultaneous determination of (1) peptide depletion, (2) peptide oxidation (dimerization), (3) adduct formation, and (4) thiol reactivity and thus generates a more detailed characterization of the reactivity of a molecule. Highly reactive molecules are further discriminated with a kinetic measure. The assay was validated on 80 chemicals. Peptide depletion could accurately be quantified both with LC-MS detection and depletion of thiol groups. The majority of the moderate/strong/extreme sensitizers formed detectable peptide adducts, but many sensitizers were also able to catalyze peptide oxidation. Whereas adduct formation was only observed for sensitizers, this oxidation reaction was also observed for two nonsensitizing fragrance aldehydes, indicating that peptide depletion might not always be regarded as sufficient evidence for rating a chemical as a sensitizer. Thus, this modified assay gives a more informed view of the peptide reactivity of chemicals to better predict their sensitization potential.
Lee, Yeonok; Wu, Hulin
2012-01-01
Differential equation models are widely used for the study of natural phenomena in many fields. The study usually involves unknown factors such as initial conditions and/or parameters. It is important to investigate the impact of unknown factors (parameters and initial conditions) on model outputs in order to better understand the system the model represents. Apportioning the uncertainty (variation) of the output variables of a model according to the input factors is referred to as sensitivity analysis. In this paper, we focus on the global sensitivity analysis of ordinary differential equation (ODE) models over a time period using the multivariate adaptive regression spline (MARS) as a meta-model, based on the concept of the variance of conditional expectation (VCE). We propose evaluating the VCE analytically using the MARS model structure of univariate tensor-product functions, which is more computationally efficient. Our simulation studies show that the MARS model approach performs very well and helps to significantly reduce the computational cost. We present an application example of sensitivity analysis of ODE models for influenza infection to further illustrate the usefulness of the proposed method.
Rastogi, S C; Lepoittevin, J P; Johansen, J D; Frosch, P J; Menné, T; Bruze, M; Dreier, B; Andersen, K E; White, I R
1998-12-01
Deodorants are one of the most frequently used types of cosmetics and are a source of allergic contact dermatitis. Therefore, a gas chromatography-mass spectrometry (GC-MS) analysis of 71 deodorants was performed for identification of the fragrance and non-fragrance materials present in marketed deodorants. Furthermore, the sensitizing potential of these molecules was evaluated using structure-activity relationship (SAR) analysis. This was based on the presence of 1 or more chemically reactive site(s) in the chemical structure associated with sensitizing potential. Among the many different substances used to formulate cosmetic products (over 3500), 226 chemicals were identified in a sample of 71 deodorants. 84 molecules were found to contain at least 1 structural alert, and 70 to belong to, or be susceptible to being metabolized into, the chemical group of aldehydes, ketones and alpha,beta-unsaturated aldehydes, ketones or esters. The combination of GC-MS and SAR analysis could be helpful in the selection of substances for supplementary investigations regarding sensitizing properties. Thus, it may be a valuable tool in the management of contact allergy to deodorants and for producing new deodorants with a decreased propensity to cause contact allergy.
Amorim, Edilberto; Williamson, Craig A; Moura, Lidia M V R; Shafi, Mouhsin M; Gaspard, Nicolas; Rosenthal, Eric S; Guanci, Mary M; Rajajee, Venkatakrishna; Westover, M Brandon
2017-07-01
Continuous EEG screening using spectrograms or compressed spectral arrays (CSAs) by neurophysiologists has shorter review times with minimal loss of sensitivity for seizure detection when compared with visual analysis of raw EEG. Limited data are available on the performance characteristics of CSA-based seizure detection by neurocritical care nurses. This is a prospective cross-sectional study that was conducted in two academic neurocritical care units and involved 33 neurointensive care unit nurses and four neurophysiologists. All nurses underwent a brief training session before testing. Forty two-hour CSA segments of continuous EEG were reviewed and rated for the presence of seizures. Two experienced clinical neurophysiologists masked to the CSA data performed conventional visual analysis of the raw EEG and served as the gold standard. The overall accuracy was 55.7% among nurses and 67.5% among neurophysiologists. Nurse seizure detection sensitivity was 73.8%, and the false-positive rate was 1-per-3.2 hours. Sensitivity and false-alarm rate for the neurophysiologists was 66.3% and 1-per-6.4 hours, respectively. Interrater agreement for seizure screening was fair for nurses (Gwet AC1 statistic: 43.4%) and neurophysiologists (AC1: 46.3%). Training nurses to perform seizure screening utilizing continuous EEG CSA displays is feasible and associated with moderate sensitivity. Nurses and neurophysiologists had comparable sensitivities, but nurses had a higher false-positive rate. Further work is needed to improve sensitivity and reduce false-alarm rates.
Kageyama, Shinji; Shinmura, Kazuya; Yamamoto, Hiroko; Goto, Masanori; Suzuki, Koichi; Tanioka, Fumihiko; Tsuneyoshi, Toshihiro; Sugimura, Haruhiko
2008-04-01
The PCR-based DNA fingerprinting method called the methylation-sensitive amplified fragment length polymorphism (MS-AFLP) analysis is used for genome-wide scanning of methylation status. In this study, we developed a method of fluorescence-labeled MS-AFLP (FL-MS-AFLP) analysis by applying a fluorescence-labeled primer and fluorescence-detecting electrophoresis apparatus to the existing method of MS-AFLP analysis. The FL-MS-AFLP analysis enables quantitative evaluation of more than 350 random CpG loci per run. It was shown to allow evaluation of the differences in methylation level of blood DNA of gastric cancer patients and evaluation of hypermethylation and hypomethylation in DNA from gastric cancer tissue in comparison with adjacent non-cancerous tissue.
Burmeister, T; Maurer, J; Aivado, M; Elmaagacli, A H; Grünebach, F; Held, K R; Hess, G; Hochhaus, A; Höppner, W; Lentes, K U; Lübbert, M; Schäfer, K L; Schafhausen, P; Schmidt, C A; Schüler, F; Seeger, K; Seelig, R; Thiede, C; Viehmann, S; Weber, C; Wilhelm, S; Christmann, A; Clement, J H; Ebener, U; Enczmann, J; Leo, R; Schleuning, M; Schoch, R; Thiel, E
2000-10-01
Here we describe the results of an interlaboratory test for RT-PCR-based BCR/ABL analysis. The test was organized in two parts. The number of participating laboratories in the first and second parts was 27 and 20, respectively. In the first part, samples containing various concentrations of plasmids with the e1a2, b2a2 or b3a2 BCR/ABL transcripts were analyzed by PCR. In the second part of the test, cell samples containing various concentrations of BCR/ABL-positive cells were analyzed by RT-PCR. Overall PCR sensitivity was sufficient in approximately 90% of the tests, but a significant number of false positive results were obtained. There were significant differences in sensitivity in the cell-based analysis between the various participants. The results are discussed, and proposals are made regarding the choice of primers, controls, and conditions for RNA extraction and reverse transcription.
NASA Astrophysics Data System (ADS)
Kala, Zdeněk; Kala, Jiří
2011-09-01
The main focus of the paper is the analysis of the influence of residual stress on the ultimate limit state of a hot-rolled member in compression. The member was modelled using thin-walled elements of type SHELL 181 and meshed in the programme ANSYS. Geometrical and material non-linear analysis was used. The influence of residual stress was studied using variance-based sensitivity analysis. In order to obtain more general results, the non-dimensional slenderness was selected as a study parameter. Comparison of the influence of the residual stress with the influence of other dominant imperfections is illustrated in the conclusion of the paper. All input random variables were considered according to results of experimental research.
Diagnostic staging laparoscopy in gastric cancer treatment: A cost-effectiveness analysis.
Li, Kevin; Cannon, John G D; Jiang, Sam Y; Sambare, Tanmaya D; Owens, Douglas K; Bendavid, Eran; Poultsides, George A
2018-05-01
Accurate preoperative staging helps avert morbidity, mortality, and cost associated with non-therapeutic laparotomy in gastric cancer (GC) patients. Diagnostic staging laparoscopy (DSL) can detect metastases with high sensitivity, but its cost-effectiveness has not been previously studied. We developed a decision analysis model to assess the cost-effectiveness of preoperative DSL in GC workup. Analysis was based on a hypothetical cohort of GC patients in the U.S. for whom initial imaging shows no metastases. The cost-effectiveness of DSL was measured as cost per quality-adjusted life-year (QALY) gained. Drivers of cost-effectiveness were assessed in sensitivity analysis. Preoperative DSL required an investment of $107 012 per QALY. In sensitivity analysis, DSL became cost-effective at a threshold of $100 000/QALY when the probability of occult metastases exceeded 31.5% or when test sensitivity for metastases exceeded 86.3%. The likelihood of cost-effectiveness increased from 46% to 93% when both parameters were set at maximum reported values. The cost-effectiveness of DSL for GC patients is highly dependent on patient and test characteristics, and is more likely when DSL is used selectively where procedure yield is high, such as for locally advanced disease or in detecting peritoneal and superficial versus deep liver lesions.
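The headline figure above is an incremental cost-effectiveness ratio (ICER): the extra cost of a strategy divided by the extra QALYs it buys, compared against a willingness-to-pay threshold. A minimal sketch, with hypothetical costs and QALYs rather than the paper's model outputs:

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio in $ per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical costs and QALYs for workup with and without staging laparoscopy.
ratio = icer(cost_new=14500.0, qaly_new=2.10, cost_old=12000.0, qaly_old=2.05)
wtp = 100_000.0  # willingness-to-pay threshold, $/QALY
verdict = "adopt" if ratio <= wtp else "reject"
print(f"ICER = ${ratio:,.0f}/QALY -> {verdict} at ${wtp:,.0f}/QALY")
```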
Sensitivity Analysis in Engineering
NASA Technical Reports Server (NTRS)
Adelman, Howard M. (Compiler); Haftka, Raphael T. (Compiler)
1987-01-01
The symposium proceedings presented focused primarily on sensitivity analysis of structural response. However, the first session, entitled, General and Multidisciplinary Sensitivity, focused on areas such as physics, chemistry, controls, and aerodynamics. The other four sessions were concerned with the sensitivity of structural systems modeled by finite elements. Session 2 dealt with Static Sensitivity Analysis and Applications; Session 3 with Eigenproblem Sensitivity Methods; Session 4 with Transient Sensitivity Analysis; and Session 5 with Shape Sensitivity Analysis.
Fully automated screening of immunocytochemically stained specimens for early cancer detection
NASA Astrophysics Data System (ADS)
Bell, André A.; Schneider, Timna E.; Müller-Frank, Dirk A. C.; Meyer-Ebrecht, Dietrich; Böcking, Alfred; Aach, Til
2007-03-01
Cytopathological cancer diagnoses can be obtained less invasively than histopathological investigations. Specimens containing cells can be obtained without pain or discomfort, bloody biopsies are avoided, and the diagnosis can, in some cases, even be made earlier. Since no tissue biopsies are necessary, these methods can also be used in screening applications, e.g., for cervical cancer. Among the cytopathological methods, a diagnosis based on the analysis of the amount of DNA in individual cells achieves high sensitivity and specificity. Yet this analysis is time consuming, which is prohibitive for a screening application. Hence, it is advantageous to retain, by a preceding selection step, only a subset of suspicious specimens. This can be achieved using highly sensitive immunocytochemical markers like p16INK4a for preselection of suspicious cells and specimens. We present a method to fully automatically acquire images at distinct positions on cytological specimens using a conventional computer-controlled microscope and an autofocus algorithm. Based on the images thus obtained, we automatically detect p16INK4a-positive objects. This detection in turn is based on an analysis of the color distribution of the p16INK4a marker in the Lab colorspace. A Gaussian mixture model is used to describe this distribution, and the method described in this paper so far achieves a sensitivity of up to 90%.
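As a rough illustration of the color-model step, the sketch below fits a two-component Gaussian mixture to synthetic (a*, b*) pixel features and flags pixels belonging to the component nearest an assumed stain color. The feature distributions, component count, and stain-color prototype are all assumptions; the original work models the p16INK4a marker distribution in the Lab colorspace.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Hypothetical (a*, b*) colour features: background pixels near grey,
# stained pixels shifted toward an assumed marker colour.
background = rng.normal([0.0, 0.0], [3.0, 3.0], size=(1800, 2))
stained = rng.normal([25.0, 30.0], [5.0, 5.0], size=(200, 2))
pixels = np.vstack([background, stained])

# Two-component Gaussian mixture over the colour distribution.
gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)

# Flag pixels assigned to the component whose mean is closest to the
# assumed stain colour prototype.
prototype = np.array([25.0, 30.0])
stain_comp = np.argmin(np.linalg.norm(gmm.means_ - prototype, axis=1))
positive = gmm.predict(pixels) == stain_comp
print(f"{positive.sum()} candidate marker-positive pixels of {len(pixels)}")
```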
Uncertainty and sensitivity analysis of fission gas behavior in engineering-scale fuel modeling
Pastore, Giovanni; Swiler, L. P.; Hales, Jason D.; ...
2014-10-12
The role of uncertainties in fission gas behavior calculations as part of engineering-scale nuclear fuel modeling is investigated using the BISON fuel performance code and a recently implemented physics-based model for coupled fission gas release and swelling. Through the integration of BISON with the DAKOTA software, a sensitivity analysis of the results to selected model parameters is carried out based on UO2 single-pellet simulations covering different power regimes. The parameters are varied within ranges representative of the relative uncertainties and consistent with the information from the open literature. The study leads to an initial quantitative assessment of the uncertainty in fission gas behavior modeling with the parameter characterization presently available. Also, the relative importance of the individual parameters is evaluated. Moreover, a sensitivity analysis is carried out based on simulations of a fuel rod irradiation experiment, pointing out a significant impact of the considered uncertainties on the calculated fission gas release and cladding diametral strain. The results of the study indicate that the commonly accepted deviation between calculated and measured fission gas release by a factor of 2 approximately corresponds to the inherent modeling uncertainty at high fission gas release. Nevertheless, higher deviations may be expected for values around 10% and lower. Implications are discussed in terms of directions of research for the improved modeling of fission gas behavior for engineering purposes.
Stacked graphene nanofibers for electrochemical oxidation of DNA bases.
Ambrosi, Adriano; Pumera, Martin
2010-08-21
In this article, we show that stacked graphene nanofibers (SGNFs) demonstrate superior electrochemical performance for oxidation of DNA bases over carbon nanotubes (CNTs). This is due to an exceptionally high number of accessible graphene sheet edges on the surface of the nanofibers when compared to carbon nanotubes, as shown by transmission electron microscopy and Raman spectroscopy. The oxidation signals of adenine, guanine, cytosine, and thymine exhibit two to four times higher currents than on CNT-based electrodes. SGNFs also exhibit higher sensitivity than do edge-plane pyrolytic graphite, glassy carbon, or graphite microparticle-based electrodes. We also demonstrate that influenza A(H1N1)-related strands can be sensitively oxidized on SGNF-based electrodes, which could therefore be applied to label-free DNA analysis.
NASA Astrophysics Data System (ADS)
Feng, Shangyuan; Lin, Juqiang; Huang, Zufang; Chen, Guannan; Chen, Weisheng; Wang, Yue; Chen, Rong; Zeng, Haishan
2013-01-01
The capability of using silver nanoparticle-based near-infrared surface-enhanced Raman scattering (SERS) spectroscopy combined with principal component analysis (PCA) and linear discriminant analysis (LDA) to differentiate esophageal cancer tissue from normal tissue was presented. Significant differences in Raman intensities of prominent SERS bands were observed between normal and cancer tissues. PCA-LDA multivariate analysis of the measured tissue SERS spectra achieved a diagnostic sensitivity of 90.9% and specificity of 97.8%. This exploratory study demonstrated great potential for developing label-free tissue SERS analysis into a clinical tool for esophageal cancer detection.
Metabolomic analysis of insulin resistance across different mouse strains and diets.
Stöckli, Jacqueline; Fisher-Wellman, Kelsey H; Chaudhuri, Rima; Zeng, Xiao-Yi; Fazakerley, Daniel J; Meoli, Christopher C; Thomas, Kristen C; Hoffman, Nolan J; Mangiafico, Salvatore P; Xirouchaki, Chrysovalantou E; Yang, Chieh-Hsin; Ilkayeva, Olga; Wong, Kari; Cooney, Gregory J; Andrikopoulos, Sofianos; Muoio, Deborah M; James, David E
2017-11-24
Insulin resistance is a major risk factor for many diseases. However, its underlying mechanism remains unclear in part because it is triggered by a complex relationship between multiple factors, including genes and the environment. Here, we used metabolomics combined with computational methods to identify factors that classified insulin resistance across individual mice derived from three different mouse strains fed two different diets. Three inbred ILSXISS strains were fed high-fat or chow diets and subjected to metabolic phenotyping and metabolomics analysis of skeletal muscle. There was significant metabolic heterogeneity between strains, diets, and individual animals. Distinct metabolites were changed with insulin resistance, diet, and between strains. Computational analysis revealed 113 metabolites that were correlated with metabolic phenotypes. Using these 113 metabolites, combined with machine learning to segregate mice based on insulin sensitivity, we identified C22:1-CoA, C2-carnitine, and C16-ceramide as the best classifiers. Strikingly, when these three metabolites were combined into one signature, they classified mice based on insulin sensitivity more accurately than each metabolite on its own or other published metabolic signatures. Furthermore, C22:1-CoA was 2.3-fold higher in insulin-resistant mice and correlated significantly with insulin resistance. We have identified a metabolomic signature composed of three functionally unrelated metabolites that accurately predicts whole-body insulin sensitivity across three mouse strains. These data indicate the power of simultaneous analysis of individual, genetic, and environmental variance in mice for identifying novel factors that accurately predict metabolic phenotypes like whole-body insulin sensitivity.
NASA Astrophysics Data System (ADS)
Xu, Zhida; Jiang, Jing; Wang, Xinhao; Han, Kevin; Ameen, Abid; Khan, Ibrahim; Chang, Te-Wei; Liu, Gang Logan
2016-03-01
We demonstrated a highly sensitive, wafer-scale, highly uniform plasmonic nano-mushroom substrate based on plastic for naked-eye plasmonic colorimetry and surface-enhanced Raman spectroscopy (SERS). We gave it the name FlexBrite. The dual-mode functionality of FlexBrite allows for label-free qualitative analysis by SERS with an enhancement factor (EF) of 10(8) and label-free quantitative analysis by naked-eye colorimetry with a sensitivity of 611 nm RIU(-1). The SERS EF of FlexBrite in the wet state was found to be 4.81 × 10(8), 7 times stronger than in the dry state, making FlexBrite suitable for aqueous environments such as microfluidic systems. The label-free detection of biotin-streptavidin interaction by both SERS and colorimetry was demonstrated with FlexBrite. The detection of trace amounts of the narcotic drug methamphetamine in drinking water by SERS was implemented with a handheld Raman spectrometer and FlexBrite. This plastic-based dual-mode nano-mushroom substrate has the potential to be used as a sensing platform for easy and fast analysis in chemical and biological assays. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr08357e
McCarthy, David; Pulverer, Walter; Weinhaeusel, Andreas; Diago, Oscar R; Hogan, Daniel J; Ostertag, Derek; Hanna, Michelle M
2016-06-01
Development of a sensitive method for DNA methylation profiling and associated mutation detection in clinical samples. Formalin-fixed and paraffin-embedded tumors received by clinical laboratories often contain insufficient DNA for analysis with bisulfite- or methylation-sensitive restriction enzyme-based methods. To increase sensitivity, methyl-CpG DNA capture and Coupled Abscription PCR Signaling detection were combined in a new assay, MethylMeter(®). Gliomas were analyzed for MGMT methylation, glioma CpG island methylator phenotype and IDH1 R132H. MethylMeter had a 100% assay success rate measuring all five biomarkers in formalin-fixed and paraffin-embedded tissue. MGMT methylation results were supported by survival and mRNA expression data. MethylMeter is a sensitive and quantitative method for multitarget DNA methylation profiling and associated mutation detection. The MethylMeter-based GliomaSTRAT assay measures methylation of four targets and one mutation to simultaneously grade gliomas and predict their response to temozolomide. This information is clinically valuable in the management of gliomas.
Flexible nanopillar-based electrochemical sensors for genetic detection of foodborne pathogens
NASA Astrophysics Data System (ADS)
Park, Yoo Min; Lim, Sun Young; Jeong, Soon Woo; Song, Younseong; Bae, Nam Ho; Hong, Seok Bok; Choi, Bong Gill; Lee, Seok Jae; Lee, Kyoung G.
2018-06-01
Flexible and highly ordered nanopillar-arrayed electrodes have attracted great interest for many electrochemical applications, especially biosensors, because of their unique mechanical and topological properties. Herein, we report an advanced method to fabricate highly ordered nanopillar electrodes produced by soft-/photo-lithography and metal evaporation. The highly ordered nanopillar array exhibited superior electrochemical and mechanical properties, with a wide surface area available to respond to electrolytes, enabling sensitive analysis. As-prepared gold and silver electrodes on nanopillar arrays exhibit stable electrochemical performance in detecting the amplified gene from the foodborne pathogen Escherichia coli O157:H7. Additionally, the lightweight, flexible, and USB-connectable nanopillar-based electrochemical sensor platform improves connectivity, portability, and sensitivity. Moreover, we successfully confirm the performance of genetic analysis using real food samples, a specially designed intercalator, and amplified genes from foodborne pathogens, with high reproducibility (6% standard deviation) and sensitivity (10 × 1.01 CFU) within 25 s, based on the square-wave voltammetry principle. This study confirmed that the excellent mechanical and chemical characteristics of nanopillar electrodes provide considerable electrochemical activity for application as a genetic biosensor platform in point-of-care testing (POCT).
Takeyoshi, Masahiro; Sawaki, Masakuni; Yamasaki, Kanji; Kimber, Ian
2003-09-30
The murine local lymph node assay (LLNA) is used for the identification of chemicals that have the potential to cause skin sensitization. However, it requires specific facility and handling procedures to accommodate a radioisotopic (RI) endpoint. We have developed a non-radioisotopic (non-RI) endpoint for the LLNA based on BrdU incorporation to avoid the use of RI. Although this alternative method appears viable in principle, it is somewhat less sensitive than the standard assay. In this study, we report investigations into the use of statistical analysis to improve the sensitivity of a non-RI LLNA procedure with alpha-hexylcinnamic aldehyde (HCA) in two separate experiments. Consequently, the alternative non-RI method required HCA concentrations of greater than 25% to elicit a positive response based on the criterion for classification as a skin sensitizer in the standard LLNA. Nevertheless, dose responses to HCA in the alternative method were consistent in both experiments, and we examined whether an endpoint based upon the statistical significance of induced changes in LNC turnover, rather than an SI of 3 or greater, might provide additional sensitivity. The results reported here demonstrate that, with HCA at least, significant responses were recorded in each of two experiments following exposure of mice to 25% HCA. These data suggest that this approach may be more satisfactory, at least when BrdU incorporation is measured. However, this modification of the LLNA is rather less sensitive than the standard method when employing a statistical endpoint. Taken together, the data reported here suggest that a modified LLNA in which BrdU is used in place of radioisotope incorporation shows some promise, but that in its present form, even with the use of a statistical endpoint, it lacks some of the sensitivity of the standard method. The challenge is to develop strategies for further refinement of this approach.
Observability Analysis of a MEMS INS/GPS Integration System with Gyroscope G-Sensitivity Errors
Fan, Chen; Hu, Xiaoping; He, Xiaofeng; Tang, Kanghua; Luo, Bing
2014-01-01
Gyroscopes based on micro-electromechanical system (MEMS) technology suffer in high-dynamic applications due to pronounced g-sensitivity errors. These errors can induce large biases in the gyroscope, which can directly affect the accuracy of attitude estimation in the integration of the inertial navigation system (INS) and the Global Positioning System (GPS). Observability determines the existence of solutions for compensating them. In this paper, we investigate the observability of the INS/GPS system with consideration of the g-sensitivity errors. For two types of g-sensitivity coefficient matrices, we add the coefficients as estimated states to the Kalman filter and analyze the observability of three or nine elements of the coefficient matrix, respectively. A global observability condition for the system is presented and validated. Experimental results indicate that all the estimated states, which include position, velocity, attitude, gyro and accelerometer biases, and g-sensitivity coefficients, can be made observable by maneuvering based on these conditions. Compared with the integration system without compensation for the g-sensitivity errors, the attitude accuracy is improved markedly. PMID:25171122
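The observability question itself reduces to a rank test on the augmented linear system. Below is a minimal sketch for a toy four-state integrator chain (not the full INS/GPS error model): it forms the discrete-time observability matrix and checks whether its rank equals the state dimension. The matrices are illustrative assumptions.

```python
import numpy as np

def observability_rank(A, C):
    """Rank of the observability matrix [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    blocks, M = [], C
    for _ in range(n):
        blocks.append(M)
        M = M @ A
    return np.linalg.matrix_rank(np.vstack(blocks))

# Toy discrete-time chain: measured state, its rate, a bias, and one
# augmented (hypothetical) g-sensitivity coefficient feeding through.
dt = 0.01
A = np.array([[1.0, dt, 0.0, 0.0],
              [0.0, 1.0, dt, 0.0],
              [0.0, 0.0, 1.0, dt],
              [0.0, 0.0, 0.0, 1.0]])
C = np.array([[1.0, 0.0, 0.0, 0.0]])  # only the first state is measured

rank = observability_rank(A, C)
print("fully observable" if rank == A.shape[0] else f"rank deficient: {rank}")
```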
Geng, Haiqing; Chen, Fan; Wang, Zhiyuan; Liu, Jie; Xu, Weihua
2017-05-01
The purpose of this research is to establish environmental management zoning for the coal mining industry, to serve as a basis for making environmental management policies. Based on the specific impacts of coal mining and the regional characteristics of environment and resources, ecological impact, water resources impact, and arable land impact were chosen as the zoning indexes to construct the index system. Ecological sensitivity is graded into three levels of low, medium, and high according to the analytic hierarchy process and gray fixed-weight clustering analysis, and water resources sensitivity is divided into five levels of lower, low, medium, high, and higher according to the weighted sum of sub-indexes, while the arable-land-sensitive zone is extracted on the basis of the ratio of arable land in the county or city. By combining the ecological sensitivity zoning and the water resources sensitivity zoning and then overlaying the arable-sensitive areas, mainland China is classified into six types of environmental management zones for coal mining, excluding the forbidden exploitation areas.
Wavelet-based analysis of gastric microcirculation in rats with ulcer bleedings
NASA Astrophysics Data System (ADS)
Pavlov, A. N.; Rodionov, M. A.; Pavlova, O. N.; Semyachkina-Glushkovskaya, O. V.; Berdnikova, V. A.; Kuznetsova, Ya. V.; Semyachkin-Glushkovskij, I. A.
2012-03-01
The study of nitric oxide (NO)-dependent mechanisms of microcirculation regulation in the stomach can provide important diagnostic markers of the development of stress-induced ulcer bleedings. In this work we use a multiscale analysis based on the discrete wavelet transform to characterize a latent stage of illness formation in rats. A higher sensitivity of stomach vessels to the NO level in ill rats is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shrivastava, Manish; Zhao, Chun; Easter, Richard C.
We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to 7 selected tunable model parameters: 4 involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semi-volatile and intermediate volatility organics (SIVOCs), and NOx, 2 involving dry deposition of SOA precursor gases, and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250-member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether particle-phase transformation of SOA from semi-volatile to non-volatile is on or off, is the dominant contributor to the variance of simulated surface-level daytime SOA (65% domain-average contribution). We also split the simulations into 2 subsets of 125 each, depending on whether the volatility transformation is turned on or off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to the dominance of intermediate to high NOx conditions throughout the simulated domain. The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance. This study highlights the large sensitivity of SOA loadings to the particle-phase transformation of SOA volatility, which is neglected in most previous models.
LSENS, The NASA Lewis Kinetics and Sensitivity Analysis Code
NASA Technical Reports Server (NTRS)
Radhakrishnan, K.
2000-01-01
A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS (the NASA Lewis kinetics and sensitivity analysis code), are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include: static system; steady, one-dimensional, inviscid flow; incident-shock initiated reaction in a shock tube; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method (LSODE, the Livermore Solver for Ordinary Differential Equations), which works efficiently for the extremes of very fast and very slow reactions, is used to solve the "stiff" ordinary differential equation systems that arise in chemical kinetics. For static reactions, the code uses the decoupled direct method to calculate sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters. Solution methods for the equilibrium and post-shock conditions and for perfectly stirred reactor problems are either adapted from or based on the procedures built into the NASA code CEA (Chemical Equilibrium and Applications).
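The core computation can be sketched in miniature: augment a kinetic ODE with its forward sensitivity equation and integrate both with an implicit, stiff-capable solver. The sketch below uses SciPy's BDF integrator on first-order decay, checking the sensitivity s = dy/dk against the analytic answer; it is a simplified forward-sensitivity analogue for illustration, not the decoupled direct method as implemented in LSENS.

```python
import numpy as np
from scipy.integrate import solve_ivp

# First-order decay dy/dt = -k*y with sensitivity s = dy/dk obeying
# the forward sensitivity equation ds/dt = -k*s - y.
k, y0, t_end = 2.0, 1.0, 3.0

def rhs(t, z):
    y, s = z
    return [-k * y, -k * s - y]

sol = solve_ivp(rhs, (0.0, t_end), [y0, 0.0],
                method="BDF",              # implicit method for stiff systems
                rtol=1e-8, atol=1e-10, t_eval=[t_end])
y_num, s_num = sol.y[:, -1]

s_exact = -t_end * y0 * np.exp(-k * t_end)  # analytic dy/dk at t_end
print(f"numerical s={s_num:.6f}, exact s={s_exact:.6f}")
```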
Assessment of energy and economic performance of office building models: a case study
NASA Astrophysics Data System (ADS)
Song, X. Y.; Ye, C. T.; Li, H. S.; Wang, X. L.; Ma, W. B.
2016-08-01
Building energy consumption accounts for more than 37.3% of total energy consumption in China, while the proportion of energy-saving buildings is just 5%. In this paper, to identify potential energy savings, an office building in Southern China was selected as a test case for energy consumption characteristics. The base building model was developed with the TRNSYS software and validated against data recorded during six days of field work in August-September 2013. Sensitivity analysis was conducted for the energy performance of building envelope retrofitting; five envelope parameters were analyzed to assess the thermal responses. Results indicated that the key sensitivity factors were the heat-transfer coefficient of exterior walls (U-wall), the infiltration rate, and the shading coefficient (SC), whose combined sensitivity factor was about 89.32%. In addition, the results were evaluated in terms of energy and economic analysis. The sensitivity analysis was validated against important results of previous studies. The cost-effectiveness analysis, in turn, improved the efficiency of investment management in building energy.
Mixed kernel function support vector regression for global sensitivity analysis
NASA Astrophysics Data System (ADS)
Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng
2017-11-01
Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the many sensitivity analyses in the literature, the Sobol' indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF)-based support vector regression (SVR) model is employed to evaluate the Sobol' indices at low computational cost. By the proposed derivation, estimates of the Sobol' indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by an orthogonal-polynomial kernel function and a Gaussian radial basis kernel function; thus the MKF possesses both the global approximation ability of the polynomial kernel and the local approximation ability of the Gaussian radial basis kernel. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated on various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
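scikit-learn's SVR accepts a callable kernel, which makes a mixed kernel easy to prototype. The sketch below takes a convex combination of a polynomial kernel and a Gaussian RBF kernel; the mixing weight, kernel hyperparameters, and test function are assumptions, and the paper's orthogonal-polynomial kernel is replaced here by the standard polynomial kernel for simplicity.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel

rng = np.random.default_rng(4)

w = 0.6  # mixing weight between the two kernels (assumed, to be tuned)

def mixed_kernel(X, Y):
    # Convex combination of a polynomial kernel (global behaviour)
    # and a Gaussian RBF kernel (local behaviour).
    return w * polynomial_kernel(X, Y, degree=3) + (1 - w) * rbf_kernel(X, Y, gamma=0.5)

# Hypothetical training data for the meta-model.
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1] ** 2

svr = SVR(kernel=mixed_kernel, C=10.0).fit(X, y)
print(f"training R^2 = {svr.score(X, y):.3f}")
```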
Hestekin, Christa N.; Lin, Jennifer S.; Senderowicz, Lionel; Jakupciak, John P.; O’Connell, Catherine; Rademaker, Alfred; Barron, Annelise E.
2012-01-01
Knowledge of the genetic changes that lead to disease has grown and continues to grow at a rapid pace. However, there is a need for clinical devices that can be used routinely to translate this knowledge into the treatment of patients. Use in a clinical setting requires high sensitivity and specificity (>97%) in order to prevent misdiagnoses. Single-strand conformational polymorphism (SSCP) and heteroduplex analysis (HA) are two DNA-based, complementary methods for mutation detection that are inexpensive and relatively easy to implement. However, both methods are most commonly detected by slab gel electrophoresis, which can be labor-intensive and time-consuming, and often the methods are unable to produce high sensitivity and specificity without the use of multiple analysis conditions. Here we demonstrate the first blinded study using microchip electrophoresis-SSCP/HA, showing its ability to detect mutations in >100 samples from p53 gene exons 5-9 with 98% sensitivity and specificity in an analysis time of less than 10 minutes. PMID:22002021
Cost of Equity Estimation in Fuel and Energy Sector Companies Based on CAPM
NASA Astrophysics Data System (ADS)
Kozieł, Diana; Pawłowski, Stanisław; Kustra, Arkadiusz
2018-03-01
The article presents cost of equity estimation for capital groups from the fuel and energy sector, listed on the Warsaw Stock Exchange, based on the Capital Asset Pricing Model (CAPM). The objective of the article was to perform a valuation of equity with the application of CAPM, based on actual financial data and stock exchange data, and to carry out a sensitivity analysis of such cost depending on the financing structure of the entity. The objective of the article, formulated in this manner, has determined its structure. It focuses on presentation of substantive analyses related to the core of equity and methods of estimating its costs, with special attention given to the CAPM. In the practical section, estimation of cost was performed according to the CAPM methodology, based on the example of leading fuel and energy companies, such as Tauron GE and PGE. Simultaneously, a sensitivity analysis of such cost was performed depending on the structure of financing the company's operations.
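A minimal numerical sketch of the CAPM calculation: beta is estimated as cov(stock, market)/var(market) from historical returns, then plugged into r_e = r_f + beta * (E[r_m] - r_f). The risk-free rate, market return, and (synthetic) return series below are illustrative assumptions, not the article's data for Tauron GE or PGE.

```python
import numpy as np

def capm_cost_of_equity(risk_free, beta, market_return):
    """CAPM: r_e = r_f + beta * (E[r_m] - r_f)."""
    return risk_free + beta * (market_return - risk_free)

rng = np.random.default_rng(5)
r_m = rng.normal(0.007, 0.04, 60)            # hypothetical monthly market returns
r_s = 0.9 * r_m + rng.normal(0.0, 0.02, 60)  # hypothetical stock returns

# Beta from historical returns: cov(stock, market) / var(market).
beta = np.cov(r_s, r_m)[0, 1] / np.var(r_m, ddof=1)

r_e = capm_cost_of_equity(risk_free=0.035, beta=beta, market_return=0.09)
print(f"beta={beta:.2f}, cost of equity={r_e:.2%}")
```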
NASA Astrophysics Data System (ADS)
Yun, Wanying; Lu, Zhenzhou; Jiang, Xian
2018-06-01
To efficiently execute variance-based global sensitivity analysis, the law of total variance over successive non-overlapping intervals is first proved, and an efficient space-partition sampling-based approach is subsequently proposed in this paper. By partitioning the sample points of the output into different subsets according to different inputs, the proposed approach can efficiently evaluate all the main effects concurrently from one group of sample points. In addition, there is no need to optimize the partition scheme in the proposed approach. The maximum length of the subintervals decreases as the number of sample points of the model input variables increases, which guarantees the convergence condition of the space-partition approach. Furthermore, a new interpretation of the partitioning idea is given from the perspective of the variance ratio function. Finally, three test examples and one engineering application are employed to demonstrate the accuracy, efficiency and robustness of the proposed approach.
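The essence of such a space-partition estimator can be shown in a few lines: sort the output samples by one input, split them into successive non-overlapping subsets, and take the variance of the subset means over the total variance as the main-effect estimate, reusing the same sample set for every input. The test function, sample size, and bin count below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

def main_effect_index(x, y, n_bins=20):
    """First-order index Var(E[Y|X_i]) / Var(Y), estimated by partitioning
    the samples of one input into successive equal-count subintervals."""
    order = np.argsort(x)
    y_sorted = y[order]
    bins = np.array_split(y_sorted, n_bins)  # successive, non-overlapping
    bin_means = np.array([b.mean() for b in bins])
    bin_sizes = np.array([b.size for b in bins])
    var_cond_mean = np.average((bin_means - y.mean()) ** 2, weights=bin_sizes)
    return var_cond_mean / y.var()

# One group of sample points serves for all main effects concurrently.
N = 20000
X = rng.uniform(0, 1, size=(N, 3))
Y = X[:, 0] + 2.0 * X[:, 1] ** 2 + 0.1 * rng.normal(size=N)

for i in range(3):
    print(f"S_{i+1} ~ {main_effect_index(X[:, i], Y):.3f}")
```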
Recent developments in Förster resonance energy transfer (FRET) diagnostics using quantum dots.
Geißler, Daniel; Hildebrandt, Niko
2016-07-01
The exceptional photophysical properties and the nanometric dimensions of colloidal semiconductor quantum dots (QDs) have strongly attracted the bioanalytical community over roughly the last 20 years. In particular, the integration of QDs in the analysis of biological components and interactions, and the related diagnostics using Förster resonance energy transfer (FRET), have allowed researchers to significantly improve and diversify fluorescence-based biosensing. In this TRENDS article, we review some recent developments in QD-FRET biosensing that have implemented this technology in electronic consumer products, multiplexed analysis, and detection without light excitation for diagnostic applications. Through selected examples of smartphone-based imaging, single- and multistep FRET, steady-state and time-resolved spectroscopy, and bio/chemiluminescence detection of QDs used as both FRET donors and acceptors, we highlight the advantages of QD-based FRET biosensing for multiplexed and sensitive diagnostics. Graphical Abstract: Quantum dots (QDs) can be applied as donors and/or acceptors for Förster resonance energy transfer (FRET)-based biosensing in various assay formats for multiplexed and sensitive diagnostics.
Xu, Weijia; Ozer, Stuart; Gutell, Robin R.
2010-01-01
With an increasingly large amount of sequences properly aligned, comparative sequence analysis can accurately identify not only common structures formed by standard base pairing but also new types of structural elements and constraints. However, traditional methods are too computationally expensive to perform well on large scale alignment and less effective with the sequences from diversified phylogenetic classifications. We propose a new approach that utilizes coevolutional rates among pairs of nucleotide positions using phylogenetic and evolutionary relationships of the organisms of aligned sequences. With a novel data schema to manage relevant information within a relational database, our method, implemented with a Microsoft SQL Server 2005, showed 90% sensitivity in identifying base pair interactions among 16S ribosomal RNA sequences from Bacteria, at a scale 40 times bigger and 50% better sensitivity than a previous study. The results also indicated covariation signals for a few sets of cross-strand base stacking pairs in secondary structure helices, and other subtle constraints in the RNA structure. PMID:20502534
Information Leakage Analysis by Abstract Interpretation
NASA Astrophysics Data System (ADS)
Zanioli, Matteo; Cortesi, Agostino
Protecting the confidentiality of information stored in a computer system or transmitted over a public network is a relevant problem in computer security. The approach of information flow analysis involves performing a static analysis of the program with the aim of proving that there will be no leaks of sensitive information. In this paper we propose a new domain that combines variable dependency analysis, based on propositional formulas, and variable value analysis, based on polyhedra. The resulting analysis is strictly more accurate than state-of-the-art abstract-interpretation-based analyses for information leakage detection. Its modular construction allows dealing with the trade-off between efficiency and accuracy by tuning the granularity of the abstraction and the complexity of the abstract operators.
Systems and methods for analyzing building operations sensor data
Mezic, Igor; Eisenhower, Bryan A.
2015-05-26
Systems and methods are disclosed for analyzing building sensor information and decomposing it into a more manageable and more useful form. Certain embodiments integrate energy-based and spectral-based analysis methods with parameter sampling and uncertainty/sensitivity analysis to achieve a more comprehensive perspective of building behavior. The results of this analysis may be presented to a user via a plurality of visualizations and/or used to automatically adjust certain building operations. In certain embodiments, advanced spectral techniques, including Koopman-based operations, are employed to discern features from the collected building sensor data.
Jethwa, Pinakin R; Punia, Vineet; Patel, Tapan D; Duffis, E Jesus; Gandhi, Chirag D; Prestigiacomo, Charles J
2013-04-01
Recent studies have documented the high sensitivity of computed tomography angiography (CTA) in detecting a ruptured aneurysm in the presence of acute subarachnoid hemorrhage (SAH). The practice of digital subtraction angiography (DSA) when CTA does not reveal an aneurysm has thus been called into question. We examined this dilemma from a cost-effectiveness perspective by using current decision analysis techniques. A decision tree was created with the use of TreeAge Pro Suite 2012; in one arm, a CTA-negative SAH was followed up with DSA; in the other arm, patients were observed without further imaging. Based on a literature review, costs and utilities were assigned to each potential outcome. Base-case and sensitivity analyses were performed to determine the cost-effectiveness of each strategy. A Monte Carlo simulation was then conducted by sampling each variable over a plausible distribution to evaluate the robustness of the model. With the use of a negative predictive value of 95.7% for CTA, observation was found to be the most cost-effective strategy ($6737 per quality-adjusted life year [QALY] vs $8460/QALY) in the base-case analysis. One-way sensitivity analysis demonstrated that DSA became the more cost-effective option if the negative predictive value of CTA fell below 93.72%. The Monte Carlo simulation produced an incremental cost-effectiveness ratio of $83,083/QALY. At the conventional willingness-to-pay threshold of $50,000/QALY, observation was the more cost-effective strategy in 83.6% of simulations. The decision to perform a DSA in CTA-negative SAH depends strongly on the sensitivity of CTA and therefore must be evaluated at each center treating these types of patients. Given the high sensitivity of CTA reported in the current literature, performing DSA on all patients with CTA-negative SAH may not be cost-effective at every institution.
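A compact sketch of the expected-value arithmetic behind such a decision tree and ICER calculation; the negative predictive value matches the abstract, but all costs, downstream-event probabilities, and utilities below are invented placeholders:

```python
# Expected cost and QALYs for two strategies after CTA-negative SAH,
# then the incremental cost-effectiveness ratio (ICER).
# All numbers are illustrative placeholders.

def expected(branches):
    """branches: list of (probability, cost, qaly)."""
    cost = sum(p * c for p, c, q in branches)
    qaly = sum(p * q for p, c, q in branches)
    return cost, qaly

npv = 0.957          # negative predictive value of CTA (from the abstract)
miss = 1 - npv

observe = expected([(npv, 1000.0, 20.0),        # truly no aneurysm
                    (miss, 60000.0, 14.0)])     # missed aneurysm, later rupture
dsa = expected([(npv, 1000.0 + 3000.0, 20.0),   # DSA cost, no aneurysm found
                (miss, 3000.0 + 25000.0, 18.5)])  # aneurysm found and treated

(c_obs, q_obs), (c_dsa, q_dsa) = observe, dsa
icer = (c_dsa - c_obs) / (q_dsa - q_obs)
print(f"observation: ${c_obs:.0f}, {q_obs:.3f} QALYs")
print(f"DSA:         ${c_dsa:.0f}, {q_dsa:.3f} QALYs")
print(f"ICER: ${icer:.0f}/QALY")  # compare against the willingness-to-pay threshold
```

The probabilistic sensitivity analysis in the paper repeats this calculation thousands of times, drawing each input from a plausible distribution, and reports the fraction of draws in which each strategy wins at the chosen threshold.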
Development of Multiobjective Optimization Techniques for Sonic Boom Minimization
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.
1996-01-01
A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier-Stokes equations solver. Aerodynamic design sensitivities for high-speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications, namely gas turbine blades and high-speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulations such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions. The multidisciplinary design optimization procedures for high-speed wing-body configurations simultaneously improve the aerodynamic, sonic boom and structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier-Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load-carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin-wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.
Kanchanaketu, T; Sangduen, N; Toojinda, T; Hongtrakul, V
2012-04-13
Genetic analysis of 56 samples of Jatropha curcas L. collected from Thailand and other countries was performed using the methylation-sensitive amplification polymorphism (MSAP) technique. Nine primer combinations were used to generate MSAP fingerprints. When the data were interpreted as amplified fragment length polymorphism (AFLP) markers, 471 markers were scored. All 56 samples were classified into three major groups: γ-irradiated, non-toxic and toxic accessions. Genetic similarity among the samples was extremely high, ranging from 0.95 to 1.00, which indicates very low genetic diversity in this species. The MSAP fingerprints were further analyzed for DNA methylation polymorphisms. The results revealed differences in the DNA methylation level among the samples. However, the samples collected from saline areas and some species hybrids showed specific DNA methylation patterns. AFLP data were used, together with methylation-sensitive AFLP (MS-AFLP) data, to construct a phylogenetic tree, resulting in higher efficiency in distinguishing the samples. This combined analysis separated samples previously grouped together in the AFLP analysis. It also distinguished some hybrids. Principal component analysis was also performed; the results confirmed the separation seen in the phylogenetic tree. Some polymorphic bands, involving both nucleotide and DNA methylation polymorphisms, that differed between toxic and non-toxic samples were identified, cloned and sequenced. BLAST analysis of these fragments revealed differences in DNA methylation in some known genes and nucleotide polymorphism in chloroplast DNA. We conclude that MSAP is a powerful technique for the study of genetic diversity in organisms that have a narrow genetic base.
Sensitivity Challenge of Steep Transistors
NASA Astrophysics Data System (ADS)
Ilatikhameneh, Hesameddin; Ameen, Tarek A.; Chen, ChinYi; Klimeck, Gerhard; Rahman, Rajib
2018-04-01
Steep transistors are crucial for lowering the power consumption of integrated circuits. However, the difficulty of experimentally achieving steepness beyond the Boltzmann limit points to fundamental challenges in applying these devices in integrated circuits. From a sensitivity perspective, an ideal switch should have a high sensitivity to the gate voltage and a low sensitivity to device design parameters such as oxide and body thickness. In this work, the conventional tunnel FET (TFET) and the negative-capacitance FET are shown to suffer from high sensitivity to device design parameters, based on full-band atomistic quantum transport simulations and analytical analysis. Although dielectric-engineered (DE-) TFETs based on 2D materials show smaller sensitivity than conventional TFETs, they suffer from leakage. To mitigate this challenge, a novel DE-TFET design is proposed and studied.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.
Sun, Jiahong; Zhao, Min; Miao, Song; Xi, Bo
2016-01-01
Many studies have suggested that polymorphisms of three key genes (ACE, AGT and CYP11B2) in the renin-angiotensin-aldosterone system (RAAS) play important roles in the development of blood pressure (BP) salt sensitivity, but they have revealed inconsistent results. Thus, we performed a meta-analysis to clarify the association. PubMed and Embase databases were searched for eligible published articles. Fixed- or random-effect models were used to pool odds ratios and 95% confidence intervals, depending on whether there was significant heterogeneity between studies. In total, seven studies [237 salt-sensitive (SS) cases and 251 salt-resistant (SR) controls] for the ACE gene I/D polymorphism, three studies (130 SS cases and 221 SR controls) for the AGT gene M235T polymorphism and three studies (113 SS cases and 218 SR controls) for the CYP11B2 gene C344T polymorphism were included in this meta-analysis. The results showed no significant association between these three polymorphisms in the RAAS and BP salt sensitivity under three genetic models (all p > 0.05). The meta-analysis suggested that the three polymorphisms (ACE I/D, AGT M235T, CYP11B2 C344T) in the RAAS have no significant effect on BP salt sensitivity.
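A minimal sketch of the inverse-variance fixed-effect pooling that underlies such a meta-analysis (random-effects pooling and heterogeneity testing omitted); the three studies below are invented:

```python
import math

def pooled_or_fixed(studies):
    """Inverse-variance fixed-effect pooling of odds ratios.
    studies: list of (odds_ratio, lower_95ci, upper_95ci)."""
    num = den = 0.0
    for or_, lo, hi in studies:
        log_or = math.log(or_)
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from the 95% CI
        w = 1.0 / se**2                                   # weight = inverse variance
        num += w * log_or
        den += w
    pooled = num / den
    se_pooled = math.sqrt(1.0 / den)
    ci = (math.exp(pooled - 1.96 * se_pooled), math.exp(pooled + 1.96 * se_pooled))
    return math.exp(pooled), ci

# Three invented studies of one polymorphism vs. salt sensitivity.
studies = [(1.20, 0.70, 2.05), (0.95, 0.55, 1.64), (1.10, 0.60, 2.02)]
print(pooled_or_fixed(studies))  # pooled OR with a 95% CI straddling 1 -> no association
```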
Stability and sensitivity of ABR flow control protocols
NASA Astrophysics Data System (ADS)
Tsai, Wie K.; Kim, Yuseok; Chiussi, Fabio; Toh, Chai-Keong
1998-10-01
This tutorial paper surveys the important issues in the stability and sensitivity analysis of ABR flow control in ATM networks. The stability and sensitivity issues are formulated in a systematic framework. Four main causes of instability in ABR flow control are identified: unstable control laws, temporal variations of available bandwidth under delayed feedback control, misbehaving components, and interactions between higher-layer protocols and ABR flow control. Popular rate-based ABR flow control protocols are evaluated. Stability and sensitivity are shown to be the fundamental issues when the network has dynamically varying bandwidth. Simulation results confirming the theoretical studies are provided. Open research problems are discussed.
Optimization for minimum sensitivity to uncertain parameters
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.; Sobieszczanski-Sobieski, Jaroslaw
1994-01-01
A procedure to design a structure for minimum sensitivity to uncertainties in problem parameters is described. The approach is to directly minimize the sensitivity derivatives of the optimum design with respect to fixed design parameters using a nested optimization procedure. The procedure is demonstrated for the design of a bimetallic beam for minimum weight with insensitivity to uncertainties in structural properties. The beam is modeled with finite elements based on two-dimensional beam analysis. A sequential quadratic programming procedure used as the optimizer supplies the Lagrange multipliers that are used to calculate the optimum sensitivity derivatives. The method was judged successful based on comparisons of the optimization results with parametric studies.
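For reference, optimum sensitivity derivatives computed from Lagrange multipliers take the classical post-optimality (envelope) form; a sketch in standard notation (the paper's exact formulation may differ), for an optimum $x^*(p)$ of $\min_x f(x,p)$ subject to $g_j(x,p) \le 0$:

```latex
\frac{\mathrm{d}f^{*}}{\mathrm{d}p}
  = \left.\frac{\partial f}{\partial p}\right|_{x^{*}}
  + \sum_{j \in \mathcal{A}} \lambda_{j}^{*}
    \left.\frac{\partial g_{j}}{\partial p}\right|_{x^{*}}
```

where $\mathcal{A}$ is the set of active constraints and $\lambda_j^*$ are the multipliers supplied by the sequential quadratic programming optimizer; this is the quantity the nested procedure drives toward zero.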
NASA Astrophysics Data System (ADS)
Sun, Xiao-Yan; Chu, Dong-Kai; Dong, Xin-Ran; Zhou, Chu; Li, Hai-Tao; Luo-Zhi; Hu, You-Wang; Zhou, Jian-Ying; Cong-Wang; Duan, Ji-An
2016-03-01
A highly sensitive refractive index (RI) sensor based on a Mach-Zehnder interferometer (MZI) in a conventional single-mode optical fiber is proposed, fabricated by a femtosecond-laser transversal-scanning inscription method and chemical etching. A rectangular cavity structure is formed at part of the fiber core and cladding interface. The MZI sensor shows excellent refractive index sensitivity and linearity, exhibiting an extremely high RI sensitivity of -17197 nm/RIU (refractive index unit) with a linearity of 0.9996 within the refractive index range of 1.3371-1.3407. The experimental results are consistent with theoretical analysis.
A new sensitivity analysis for structural optimization of composite rotor blades
NASA Technical Reports Server (NTRS)
Venkatesan, C.; Friedmann, P. P.; Yuan, Kuo-An
1993-01-01
This paper presents a detailed mathematical derivation of the sensitivity derivatives for the structural dynamic, aeroelastic stability and response characteristics of a rotor blade in hover and forward flight. The formulation is denoted by the term semi-analytical approach, because certain derivatives have to be evaluated by a finite difference scheme. Using the present formulation, sensitivity derivatives for the structural dynamic and aeroelastic stability characteristics were evaluated for both isotropic and composite rotor blades. Based on the results, useful conclusions are obtained regarding the relative merits of the semi-analytical approach for calculating sensitivity derivatives, when compared to a pure finite difference approach.
Sensitivity analyses for sparse-data problems-using weakly informative bayesian priors.
Hamra, Ghassan B; MacLehose, Richard F; Cole, Stephen R
2013-03-01
Sparse-data problems are common, and approaches are needed to evaluate the sensitivity of parameter estimates based on sparse data. We propose a Bayesian approach that uses weakly informative priors to quantify sensitivity of parameters to sparse data. The weakly informative prior is based on accumulated evidence regarding the expected magnitude of relationships using relative measures of disease association. We illustrate the use of weakly informative priors with an example of the association of lifetime alcohol consumption and head and neck cancer. When data are sparse and the observed information is weak, a weakly informative prior will shrink parameter estimates toward the prior mean. Additionally, the example shows that when data are not sparse and the observed information is not weak, a weakly informative prior is not influential. Advancements in implementation of Markov Chain Monte Carlo simulation make this sensitivity analysis easily accessible to the practicing epidemiologist.
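A minimal sketch of the shrinkage behavior described here, using a conjugate normal approximation rather than full MCMC; the estimates and the prior scale below are invented:

```python
import math

def posterior_log_or(beta_hat, se_hat, prior_mean=0.0, prior_sd=1.5):
    """Normal-approximation posterior for a log odds ratio under a
    weakly informative normal prior: a precision-weighted average of
    the data estimate and the prior mean."""
    w_data = 1.0 / se_hat**2
    w_prior = 1.0 / prior_sd**2
    post_mean = (w_data * beta_hat + w_prior * prior_mean) / (w_data + w_prior)
    post_sd = math.sqrt(1.0 / (w_data + w_prior))
    return post_mean, post_sd

# Sparse data: OR = 8 with a huge standard error -> strong shrinkage toward the prior.
print(posterior_log_or(math.log(8.0), se_hat=1.2))
# Rich data: same OR, small standard error -> the prior is barely influential.
print(posterior_log_or(math.log(8.0), se_hat=0.1))
```

The two calls reproduce the abstract's qualitative point: the weakly informative prior matters only when the observed information is weak.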
Highly Sensitive Hot-Wire Anemometry Based on Macro-Sized Double-Walled Carbon Nanotube Strands.
Wang, Dingqu; Xiong, Wei; Zhou, Zhaoying; Zhu, Rong; Yang, Xing; Li, Weihua; Jiang, Yueyuan; Zhang, Yajun
2017-08-01
This paper presents a highly sensitive flow-rate sensor with carbon nanotubes (CNTs) as sensing elements. The sensor uses centimeters-long double-walled CNT (DWCNT) strands as hot wires to sense fluid velocity. In the theoretical analysis, the sensitivity of the sensor is shown to be positively related to its surface-to-volume ratio. We assemble the flow sensor by suspending the DWCNT strand directly on two tungsten prongs and dripping a small amount of silver glue onto each contact between the DWCNT and the prongs. The DWCNT exhibits a positive temperature coefficient of resistance (TCR) of 1980 ppm/K. The self-heating effect on the DWCNT was observed while a constant current was applied between the two prongs. The sensor responds clearly to flow rate and requires only a few milliwatts to operate. We have, thus far, demonstrated that the CNT-based flow sensor has better sensitivity than a Pt-coated DWCNT sensor.
Selection of optimal sensors for predicting performance of polymer electrolyte membrane fuel cell
NASA Astrophysics Data System (ADS)
Mao, Lei; Jackson, Lisa
2016-10-01
In this paper, sensor selection algorithms based on a sensitivity analysis are investigated, and the capability of the optimal sensors to predict PEM fuel cell performance is studied using test data. A fuel cell model is developed to generate the sensitivity matrix relating sensor measurements to fuel cell health parameters. From the sensitivity matrix, two sensor selection approaches, the largest-gap method and an exhaustive brute-force search, are applied to find the optimal sensors that provide reliable predictions. Based on the results, a sensor selection approach considering both sensor sensitivity and noise resistance is proposed to find the optimal sensor set of minimum size. Furthermore, the performance of the optimal sensor set in predicting fuel cell performance is studied using test data from a PEM fuel cell system. Results demonstrate that with the optimal sensors, the performance of the PEM fuel cell can be predicted well.
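A sketch of the exhaustive brute-force search idea; the sensitivity matrix and the selection criterion (condition number of the chosen submatrix) are illustrative assumptions, not the paper's fuel cell model:

```python
import itertools
import numpy as np

def best_sensor_subset(S, k):
    """Exhaustive search over k-sensor subsets of the rows of the
    sensitivity matrix S (sensors x health parameters); pick the subset
    whose submatrix is best conditioned for estimating the parameters."""
    best, best_score = None, np.inf
    for rows in itertools.combinations(range(S.shape[0]), k):
        sub = S[list(rows), :]
        score = np.linalg.cond(sub)  # lower condition number -> more noise-robust inversion
        if score < best_score:
            best, best_score = rows, score
    return best, best_score

rng = np.random.default_rng(0)
S = rng.normal(size=(8, 3))  # 8 candidate sensors, 3 health parameters (toy)
print(best_sensor_subset(S, k=3))
```

Exhaustive search is only viable for small sensor pools (8 choose 3 is 56 subsets here); the largest-gap method mentioned above trades optimality for scalability.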
NASA Astrophysics Data System (ADS)
Morandage, Shehan; Schnepf, Andrea; Vanderborght, Jan; Javaux, Mathieu; Leitner, Daniel; Laloy, Eric; Vereecken, Harry
2017-04-01
Root traits are increasingly important in the breeding of new crop varieties. For example, longer and fewer lateral roots are suggested to improve the drought resistance of wheat. Thus, detailed root architectural parameters are important. However, classical field sampling of roots only provides more aggregated information, such as root length density (coring), root counts per area (trenches) or root arrival curves at certain depths (rhizotubes). We investigate the possibility of obtaining information about the root system architecture of plants from classical field-based root sampling schemes, using sensitivity analysis and inverse parameter estimation. This methodology was developed based on a virtual experiment in which a root architecture model was used to simulate root system development in a field, parameterized for winter wheat. This information provided the ground truth, which is normally unknown in a real field experiment. The three sampling schemes (coring, trenching, and rhizotubes) were virtually applied and the corresponding aggregated information computed. The Morris OAT global sensitivity analysis method was then performed to determine the most sensitive parameters of the root architecture model for the three different sampling methods. The estimated means and standard deviations of the elementary effects of 37 parameters were evaluated. Upper and lower bounds of the parameters were obtained from the literature and published data on winter wheat root architectural parameters. Root length density profiles from coring, arrival curve characteristics observed in rhizotubes, and root counts in grids of the trench profile method were evaluated statistically to investigate the influence of each parameter using five different error functions. The number of branches, insertion angle, inter-nodal distance, and elongation rates are the most sensitive parameters, and parameter sensitivity varies slightly with depth. Most parameters and their interactions with other parameters have highly nonlinear effects on the model output. The most sensitive parameters will be subject to inverse estimation from the virtual field sampling data using the DREAMzs algorithm. The estimated parameters can then be compared with the ground truth to determine the suitability of the sampling schemes for identifying specific traits or parameters of the root growth model.
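The Morris OAT method used here screens parameters by the mean and standard deviation of elementary effects along randomized one-at-a-time trajectories. A compact sketch, with a toy model and invented parameter bounds standing in for the 37-parameter root architecture model:

```python
import numpy as np

def morris_ee(model, bounds, n_traj=50, delta=0.25, seed=1):
    """Morris OAT screening: mean |EE| and std of elementary effects per parameter.
    A large mean signals an influential parameter; a large std signals
    nonlinearity or interactions."""
    rng = np.random.default_rng(seed)
    k = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    effects = [[] for _ in range(k)]
    for _ in range(n_traj):
        x = rng.uniform(0, 1 - delta, size=k)      # base point in the unit cube
        y = model(lo + (hi - lo) * x)
        for i in rng.permutation(k):               # perturb each parameter once
            x2 = x.copy()
            x2[i] += delta
            y2 = model(lo + (hi - lo) * x2)
            effects[i].append((y2 - y) / delta)
            x, y = x2, y2
    return [(float(np.abs(np.array(e)).mean()), float(np.array(e).std()))
            for e in effects]

# Toy model: strongly nonlinear in p0, mildly additive in p1, inert p2.
model = lambda p: p[0] ** 2 + 0.5 * p[1] + 0.0 * p[2]
print(morris_ee(model, bounds=[(0, 2), (0, 2), (0, 2)]))
```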
Processing companies' preferences for attributes of beef in Switzerland.
Boesch, Irene
2014-01-01
The aim of this work was to assess processing companies' preferences for attributes of Swiss beef. To this end, qualitative interviews were used to derive the product attributes that determine the buying decision. Through an adaptive choice-based conjoint analysis survey and latent class analysis of the choice data, we compute class preferences. Results show that there are two distinct classes. A smaller class emphasizes traceability back to the birth farm and a low producer price; a larger class focuses on environmental effects and origin. Additionally, we see that larger companies are more price-sensitive and smaller companies are more sensitive to the origin of the animals. The results outlined in this paper may be used to target market segments and to derive differentiation strategies based on product characteristics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fernandez, Juan Carlos; Barnes, Cris William; Mocko, Michael Jeffrey
This report examines the use of neutron resonance spectroscopy (NRS) to make time-dependent and spatially resolved temperature measurements of materials in extreme conditions. Specifically, the sensitivity of the temperature estimate to neutron-beam and diagnostic parameters is examined. Based on that examination, requirements are set on a pulsed neutron source and diagnostics to make a meaningful measurement.
NASA Astrophysics Data System (ADS)
Leal-Junior, Arnaldo G.; Frizera, Anselmo; José Pontes, Maria
2018-03-01
Polymer optical fibers (POFs) are suitable for sensing applications such as curvature, strain, temperature, and liquid level, among others. However, to enhance sensitivity, many intensity-variation-based POF curvature sensors require a lateral section. The lateral section length, depth, and surface roughness have great influence on the sensor's sensitivity, hysteresis, and linearity. Moreover, bending the fiber to smaller curvature radii increases the stress on the fiber, which leads to variation in the sensor behavior. This paper presents an analysis relating the curvature radius and the lateral section length, depth and surface roughness to the sensitivity, hysteresis and linearity of a POF curvature sensor. Results show a strong correlation between the behavior of these design parameters and the performance of intensity-variation-based sensors. Furthermore, there is a trade-off among the sensitive zone length, depth, surface roughness, and curvature radius with respect to the desired performance, namely minimum hysteresis, maximum sensitivity, and maximum linearity. Optimization of these parameters yields a sensor with a sensitivity of 20.9 mV/°, a linearity of 0.9992 and a hysteresis below 1%, a better performance than that of the sensor without optimization.
Langthorne, Paul; McGill, Peter; O'Reilly, Mark
2007-07-01
Sensitivity theory attempts to account for the variability often observed in challenging behavior by recourse to the "aberrant motivation" of people with intellectual and developmental disabilities. In this article, we suggest that a functional analysis based on environmental (challenging environments) and biological (challenging needs) motivating operations provides a more parsimonious and empirically grounded account of challenging behavior than that proposed by sensitivity theory. It is argued that the concept of the motivating operation provides a means of integrating diverse strands of research without the undue inference of mentalistic constructs. An integrated model of challenging behavior is proposed, one that remains compatible with the central tenets of functional analysis.
Jiang, Minghuan; You, Joyce H S
2017-10-01
Continuation of dual antiplatelet therapy (DAPT) beyond 1 year reduces late stent thrombosis and ischemic events after drug-eluting stents (DES) but increases the risk of bleeding. We hypothesized that extending DAPT from 12 months to 30 months in patients with acute coronary syndrome (ACS) after DES is cost-effective. A lifelong decision-analytic model was designed to simulate two antiplatelet strategies in event-free ACS patients who had completed 12-month DAPT after DES: aspirin monotherapy (75-162 mg daily) and continuation of DAPT (clopidogrel 75 mg daily plus aspirin 75-162 mg daily) for 18 months. Clinical event rates, direct medical costs, and quality-adjusted life-years (QALYs) gained were the primary outcomes from the US healthcare provider perspective. Base-case results showed that DAPT continuation gained more QALYs (8.1769 vs 8.1582) at lower cost (USD 42,982 vs USD 44,063). One-way sensitivity analysis found that the base-case QALYs were sensitive to the odds ratio (OR) of cardiovascular death with DAPT continuation, and the base-case cost was sensitive to the OR of nonfatal stroke with DAPT continuation. DAPT continuation remained cost-effective when the ORs of nonfatal stroke and cardiovascular death were below 1.241 and 1.188, respectively. In probabilistic sensitivity analysis, DAPT continuation was the preferred strategy in 74.75% of 10,000 Monte Carlo simulations at a willingness-to-pay threshold of USD 50,000/QALY. Continuation of DAPT appears to be cost-effective in ACS patients who were event-free during 12-month DAPT after DES. The cost-effectiveness of 30-month DAPT was highly sensitive to the ORs of nonfatal stroke and cardiovascular death with DAPT continuation.
Global Sensitivity Analysis and Parameter Calibration for an Ecosystem Carbon Model
NASA Astrophysics Data System (ADS)
Safta, C.; Ricciuto, D. M.; Sargsyan, K.; Najm, H. N.; Debusschere, B.; Thornton, P. E.
2013-12-01
We present uncertainty quantification results for a process-based ecosystem carbon model. The model employs 18 parameters and is driven by meteorological data corresponding to years 1992-2006 at the Harvard Forest site. Daily Net Ecosystem Exchange (NEE) observations were available to calibrate the model parameters and test the performance of the model. Posterior distributions show good predictive capabilities for the calibrated model. A global sensitivity analysis was first performed to determine the important model parameters based on their contribution to the variance of NEE. We then calibrated the model parameters in a Bayesian framework. The daily discrepancies between measured and predicted NEE values were modeled as independent and identically distributed Gaussians with the daily variance prescribed according to the recorded instrument error. All model parameters were assumed to have uninformative priors with bounds set according to expert opinion. The global sensitivity results show that the rate of leaf fall (LEAFALL) is responsible for approximately 25% of the total variance in the average NEE for 1992-2005. Four other parameters, nitrogen use efficiency (NUE), the base rate for maintenance respiration (BR_MR), the growth respiration fraction (RG_FRAC), and allocation to the plant stem pool (ASTEM), each contribute between 5% and 12% to the variance in average NEE, while the remaining parameters have smaller contributions. The posterior distributions, sampled with a Markov chain Monte Carlo algorithm, exhibit significant correlations between model parameters. However, LEAFALL, the most important parameter for the average NEE, is not informed by the observational data, while less important parameters show significant updates between their prior and posterior densities. The Fisher information matrix values, indicating which parameters are most informed by the experimental observations, are examined to augment the comparison between the calibration and global sensitivity analysis results.
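As a toy illustration of the calibration step described here, the sketch below runs a random-walk Metropolis sampler on a two-parameter stand-in model with synthetic daily observations and a known instrument error; the model, data, and step sizes are invented, not the paper's 18-parameter carbon model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "NEE-like" data: toy model y = a * drive + b, Gaussian noise sd = 0.2.
drive = rng.uniform(0, 1, 200)
y_obs = 2.0 * drive + 0.5 + rng.normal(0, 0.2, 200)

def log_post(theta):
    a, b = theta
    if not (0 < a < 10 and -5 < b < 5):        # uninformative bounded priors
        return -np.inf
    resid = y_obs - (a * drive + b)
    return -0.5 * np.sum((resid / 0.2) ** 2)   # Gaussian likelihood, known error sd

# Random-walk Metropolis
theta, lp = np.array([1.0, 0.0]), log_post(np.array([1.0, 0.0]))
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.05, size=2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain[5000:])                 # discard burn-in
print(chain.mean(axis=0))                      # posterior means near (2.0, 0.5)
print(np.corrcoef(chain.T)[0, 1])              # correlated posterior, as in the abstract
```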
High-Speed Real-Time Resting-State fMRI Using Multi-Slab Echo-Volumar Imaging
Posse, Stefan; Ackley, Elena; Mutihac, Radu; Zhang, Tongsheng; Hummatov, Ruslan; Akhtari, Massoud; Chohan, Muhammad; Fisch, Bruce; Yonas, Howard
2013-01-01
We recently demonstrated that ultra-high-speed real-time fMRI using multi-slab echo-volumar imaging (MEVI) significantly increases sensitivity for mapping task-related activation and resting-state networks (RSNs) compared to echo-planar imaging (Posse et al., 2012). In the present study we characterize the sensitivity of MEVI for mapping RSN connectivity dynamics, comparing independent component analysis (ICA) and a novel seed-based connectivity analysis (SBCA) that combines sliding-window correlation analysis with meta-statistics. This SBCA approach is shown to minimize the effects of confounds, such as movement and CSF and white matter signal changes, and enables real-time monitoring of RSN dynamics at time scales of tens of seconds. We demonstrate highly sensitive mapping of eloquent cortex in the vicinity of brain tumors and arterio-venous malformations, and detection of abnormal resting-state connectivity in epilepsy. In patients with motor impairment, resting-state fMRI provided focal localization of sensorimotor cortex compared with more diffuse activation in task-based fMRI. The fast acquisition speed of MEVI enabled segregation of cardiac-related signal pulsation using ICA, which revealed distinct regional differences in pulsation amplitude and waveform, elevated signal pulsation in patients with arterio-venous malformations, and a trend toward reduced pulsatility in gray matter of patients compared with healthy controls. Mapping cardiac pulsation in cortical gray matter may carry important functional information that distinguishes healthy from diseased tissue vasculature. This novel fMRI methodology is particularly promising for mapping eloquent cortex in patients with neurological disease, who have a variable degree of cooperation in task-based fMRI. In conclusion, ultra-high-speed real-time fMRI enhances the sensitivity of mapping the dynamics of resting-state connectivity and cerebro-vascular pulsatility for clinical and neuroscience research applications. PMID:23986677
Lambert, Rod
2015-01-01
This article presents an evidence-based line of reasoning, focusing on evidence for an Occupational Therapy input to lifestyle behaviour influences on panic disorder, with potentially broader application across other mental health problems (MHPs). The article begins from the premise that we are all different. It then follows a sequence of questions, examining incrementally how MHPs are experienced and classified. It analyses the impact of individual sensitivity at different levels of analysis, from genetic and epigenetic individuality, through neurotransmitter and body system sensitivity. Examples are given demonstrating the evidence base behind the logical sequence of investigation. The paper considers the evidence for how everyday routine lifestyle behaviour impacts occupational function at all levels, and how these behaviours link to individual sensitivity to influence the level of exposure required to elicit symptomatic responses. Occupational Therapists can help patients by adequately assessing individual sensitivity, and by promoting understanding and a sense of control over their own symptoms. It concludes that present clinical guidelines should be expanded to incorporate knowledge of individual sensitivities to environmental exposures and lifestyle behaviours at an early stage. PMID:26095868
Evaluation of fire weather forecasts using PM2.5 sensitivity analysis
NASA Astrophysics Data System (ADS)
Balachandran, Sivaraman; Baumann, Karsten; Pachon, Jorge E.; Mulholland, James A.; Russell, Armistead G.
2017-01-01
Fire weather forecasts are used by land and wildlife managers to determine when meteorological and fuel conditions are suitable to conduct prescribed burning. In this work, we investigate the sensitivity of ambient PM2.5 to various fire and meteorological variables in a spatial setting that is typical for the southeastern US, where prescribed fires are the single largest source of fine particulate matter. We use principal components regression to estimate the sensitivity of PM2.5, measured at a monitoring site in Jacksonville, NC (JVL), to fire data and to observed and forecast meteorological variables. Fire data were gathered from prescribed fire activity used for ecological management at Marine Corps Base Camp Lejeune, extending 10-50 km south from the PM2.5 monitor. Principal components analysis (PCA) was run on 10 data sets that included acres of prescribed burning activity (PB) along with meteorological forecast data alone or in combination with observations. For each data set, observed PM2.5 (unitless) was regressed against PCA scores from the first seven principal components (explaining at least 80% of total variance). PM2.5 showed significant sensitivity to PB: 3.6 ± 2.2 μg m-3 per 1000 acres burned at the investigated distance scale of ∼10-50 km. Applying this sensitivity to the available activity data revealed a prescribed burning source contribution to measured PM2.5 of up to 25% on a given day. PM2.5 showed a positive sensitivity to relative humidity and temperature, and was also sensitive to wind direction, indicating the capture of more regional aerosol processing and transport effects. As expected, PM2.5 had a negative sensitivity to dispersive variables but only showed a statistically significant negative sensitivity to ventilation rate, highlighting the importance of this parameter to fire managers. A positive sensitivity to forecast precipitation was found, consistent with the practice of conducting prescribed burning on days when rain can naturally extinguish fires. Perhaps most importantly for land managers, our analysis suggests that instead of relying on the forecasts from a day before, prescribed burning decisions should be based on the forecasts released the morning of the burn when possible, since these data were more stable and yielded more statistically robust results.
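A minimal sketch of principal components regression as used here, on invented stand-in variables (burn acreage, relative humidity, ventilation rate); with all components retained it reduces to ordinary least squares on the standardized predictors:

```python
import numpy as np

def pcr_fit(X, y, n_pc):
    """Principal components regression: project standardized predictors
    onto the leading PCs, regress y on the scores, map coefficients back."""
    Xs = (X - X.mean(0)) / X.std(0)
    U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    scores = Xs @ Vt[:n_pc].T                    # PC scores
    coef_pc, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
    return Vt[:n_pc].T @ coef_pc                 # sensitivities per standardized predictor

rng = np.random.default_rng(7)
n = 300
burn_acres = rng.uniform(0, 2000, n)             # prescribed fire activity (toy)
rh = rng.uniform(20, 90, n)                      # relative humidity (toy)
vent = rng.uniform(1, 10, n)                     # ventilation rate (toy)
X = np.column_stack([burn_acres, rh, vent])
pm25 = 0.003 * burn_acres + 0.05 * rh - 1.5 * vent + rng.normal(0, 2, n)

print(pcr_fit(X, pm25, n_pc=3))  # signs: +burn, +RH, -ventilation, as in the abstract
```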
Sinha, Shriprakash
2017-12-04
Ever since the accidental discovery of Wingless [Sharma R.P., Drosophila Information Service, 1973, 50, p. 134], research on the Wnt signaling pathway has taken significant strides in wet-lab experiments and various cancer clinical trials, augmented by recent developments in advanced computational modeling of the pathway. Information-rich gene expression profiles reveal various aspects of the signaling pathway and help in studying different issues simultaneously. Hitherto, few computational studies exist that incorporate the simultaneous study of these issues. This manuscript (1) explores the strength of contributing factors in the signaling pathway, (2) analyzes the existing causal relations among the inter-/extracellular factors affecting the pathway based on prior biological knowledge, and (3) investigates the deviations in fold changes in the recently found prevalence of psychophysical laws working in the pathway. To achieve this goal, local and global sensitivity analysis is conducted on the (non)linear responses between the factors obtained from static and time-series expression profiles, using density-based (Hilbert-Schmidt Independence Criterion) and variance-based (Sobol) sensitivity indices. The results show the advantage of using density-based indices over variance-based indices, mainly due to the former's employment of distance measures and the kernel trick via reproducing kernel Hilbert spaces (RKHS), which capture nonlinear relations among various intra-/extracellular factors of the pathway in a higher-dimensional space. In time-series data, using these indices it is now possible to observe where in time which factors are influenced and contribute to the pathway, as changes in the concentrations of the other factors are made. This synergy of prior biological knowledge, sensitivity analysis, and representations in higher-dimensional spaces can facilitate time-based administration of targeted therapeutic drugs and reveal hidden biological information within colorectal cancer samples.
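As a rough illustration of the density-based index named here, the sketch below computes a biased empirical HSIC (Hilbert-Schmidt Independence Criterion) with Gaussian kernels; the data are invented, and no normalization or significance test is included:

```python
import numpy as np

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC with Gaussian kernels: trace(K H L H) / n^2.
    Near zero for independent samples, larger under (possibly nonlinear)
    dependence, thanks to the kernel embedding in an RKHS."""
    n = len(x)
    def gram(v):
        d2 = (v[:, None] - v[None, :]) ** 2
        return np.exp(-d2 / (2 * sigma ** 2))
    K, L = gram(x), gram(y)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / n ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=300)
print(hsic(x, x ** 2))                 # nonlinear dependence -> clearly positive
print(hsic(x, rng.normal(size=300)))   # independence -> near zero
```

A variance-based (Sobol) index would score the quadratic relation in the first call as weak, since x and x squared are uncorrelated; the kernel-based measure is what lets the density-based analysis pick up such nonlinear links.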
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Jun; Liu, Guodong; Wu, Hong
2008-01-01
In this paper, we demonstrate an electrochemical high-throughput sensing platform for simple, sensitive detection of PSA based on QD labels. This sensing platform uses a microplate for immunoreactions and disposable screen-printed electrodes (SPE) for electrochemical stripping analysis of metal ions released from the QD labels. With the 96-well microplate, capture antibodies are conveniently immobilized on the well surface, and the immunoreaction process is easily controlled. The sandwich complexes formed on the well surface are also easily isolated from the reaction solutions. In particular, a microplate-based electrochemical assay makes it feasible to conduct a parallel analysis of several samples or multiple protein markers. This assay offers a number of advantages, including (1) simplicity and cost-effectiveness, (2) high sensitivity, (3) the capability to sense multiple samples or targets in parallel, and (4) a potentially portable device with an SPE array implanted in the microplate. This PSA assay is sensitive because it uses two amplification processes: (1) QDs as labels for enhancing the electrical signal, since the secondary antibodies are linked to QDs that contain a large number of metal atoms, and (2) the inherent signal amplification of electrochemical stripping analysis, in which preconcentration of metal ions onto the electrode surface amplifies the electrical signal. Therefore, the high sensitivity of this method, stemming from dual signal amplification via QD labels and preconcentration, allows low concentration levels to be detected while using small sample volumes. Thus, this QD-based electrochemical detection approach offers a simple, rapid, cost-effective, and high-throughput assay of PSA.
Structural optimization: Status and promise
NASA Astrophysics Data System (ADS)
Kamat, Manohar P.
Chapters contained in this book include fundamental concepts of optimum design, mathematical programming methods for constrained optimization, function approximations, approximate reanalysis methods, dual mathematical programming methods for constrained optimization, a generalized optimality criteria method, and a tutorial and survey of multicriteria optimization in engineering. Also included are chapters on the compromise decision support problem and the adaptive linear programming algorithm, sensitivity analyses of discrete and distributed systems, the design sensitivity analysis of nonlinear structures, optimization by decomposition, mixed elements in shape sensitivity analysis of structures based on local criteria, and optimization of stiffened cylindrical shells subjected to destabilizing loads. Other chapters are on applications to fixed-wing aircraft and spacecraft, integrated optimum structural and control design, modeling concurrency in the design of composite structures, and tools for structural optimization. (No individual items are abstracted in this volume)
NASA Astrophysics Data System (ADS)
Meng, Fei; Shi, Peng; Karimi, Hamid Reza; Zhang, Hui
2016-02-01
The main objective of this paper is to investigate the sensitivity analysis and optimal design of a proportional solenoid valve (PSV) operated pressure reducing valve (PRV) for heavy-duty automatic transmission clutch actuators. The nonlinear electro-hydraulic valve model is developed based on fluid dynamics. In order to implement the sensitivity analysis and optimization for the PRV, the PSV model is validated by comparing its results with data obtained from a real test bench. The sensitivity of the PSV pressure response with regard to the structural parameters is investigated using Sobol's method. Finally, simulations and experimental investigations are performed on the optimized prototype, and the results reveal that the dynamic characteristics of the valve are improved in comparison with the original valve.
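Sobol's method, named above, estimates how much of the output variance each input is responsible for. A minimal pick-freeze (Saltelli-type) estimator of first-order indices, with an invented toy response standing in for the valve model:

```python
import numpy as np

def sobol_first_order(model, k, n=100_000, seed=0):
    """First-order Sobol indices via the pick-freeze estimator:
    S_i = E[f(B) * (f(AB_i) - f(A))] / Var(f), where AB_i equals A with
    column i swapped in from B."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, k))
    B = rng.uniform(size=(n, k))
    yA, yB = model(A), model(B)
    var = yA.var()
    S = []
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        yABi = model(ABi)
        S.append(float(np.mean(yB * (yABi - yA)) / var))
    return S

# Toy "valve" response: strong in x0, weak in x1, inert x2.
model = lambda X: np.sin(np.pi * X[:, 0]) + 0.3 * X[:, 1] ** 2 + 0 * X[:, 2]
print(sobol_first_order(model, k=3))
```

The gap between a parameter's first-order and total-order index (not computed here) is what flags interaction effects; for design work, parameters with large indices are the ones worth toleranced manufacturing or redesign.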
Jet-A reaction mechanism study for combustion application
NASA Technical Reports Server (NTRS)
Lee, Chi-Ming; Kundu, Krishna; Acosta, Waldo
1991-01-01
Simplified chemical kinetic reaction mechanisms for the combustion of Jet A fuel were studied. Initially, 40 reacting species and 118 elementary chemical reactions were chosen based on a literature review. Through a sensitivity analysis using the LSENS General Kinetics and Sensitivity Analysis Code, the mechanism was reduced to 16 species and 21 elementary chemical reactions. This mechanism is first justified by comparing calculated ignition delay times with the available shock tube data, and then validated by comparing emissions calculated with a plug flow reactor code against in-house flame tube data.
Nagel, Michael; Bolivar, Peter Haring; Brucherseifer, Martin; Kurz, Heinrich; Bosserhoff, Anja; Büttner, Reinhard
2002-04-01
A promising label-free approach for the analysis of genetic material by means of detecting the hybridization of polynucleotides with electromagnetic waves at terahertz (THz) frequencies is presented. Using an integrated waveguide approach, incorporating resonant THz structures as sample carriers and transducers for the analysis of the DNA molecules, we achieve a sensitivity down to femtomolar levels. The approach is demonstrated with time-domain ultrafast techniques based on femtosecond laser pulses for generating and electro-optically detecting broadband THz signals, although the principle can certainly be transferred to other THz technologies.
Monoallelic mutation analysis (MAMA) for identifying germline mutations.
Papadopoulos, N; Leach, F S; Kinzler, K W; Vogelstein, B
1995-09-01
Dissection of germline mutations in a sensitive and specific manner presents a continuing challenge. In dominantly inherited diseases, mutations occur in only one allele and are often masked by the normal allele. Here we report the development of a sensitive and specific diagnostic strategy based on somatic cell hybridization termed MAMA (monoallelic mutation analysis). We have demonstrated the utility of this strategy in two different hereditary colorectal cancer syndromes, one caused by a defective tumour suppressor gene on chromosome 5 (familial adenomatous polyposis, FAP) and the other caused by a defective mismatch repair gene on chromosome 2 (hereditary non-polyposis colorectal cancer, HNPCC).
Trame, MN; Lesko, LJ
2015-01-01
A systems pharmacology model typically integrates pharmacokinetic, biochemical network, and systems biology concepts into a unifying approach. It typically consists of a large number of parameters and reaction species that are interlinked based upon the underlying (patho)physiology and the mechanism of drug action. The more complex these models are, the greater the challenge of reliably identifying and estimating respective model parameters. Global sensitivity analysis provides an innovative tool that can meet this challenge. CPT Pharmacometrics Syst. Pharmacol. (2015) 4, 69–79; doi:10.1002/psp4.6; published online 25 February 2015 PMID:27548289
Almeida, Suzana C; George, Steven Z; Leite, Raquel D V; Oliveira, Anamaria S; Chaves, Thais C
2018-05-17
We aimed to empirically derive psychosocial and pain sensitivity subgroups using cluster analysis within a sample of individuals with chronic musculoskeletal pain (CMP) and to investigate the derived subgroups for differences in pain and disability outcomes. Eighty female participants with CMP completed psychosocial and disability scales and were assessed for pressure pain sensitivity. A cluster analysis was used to derive subgroups, and analysis of variance (ANOVA) was used to investigate differences between subgroups. Psychosocial factors (kinesiophobia, pain catastrophizing, anxiety, and depression) and overall pressure pain threshold (PPT) were entered into the cluster analysis. Three subgroups were empirically derived: cluster 1 (high pain sensitivity and high psychosocial distress; n = 12), characterized by low overall PPT and high psychosocial scores; cluster 2 (high pain sensitivity and intermediate psychosocial distress; n = 39), characterized by low overall PPT and intermediate psychosocial scores; and cluster 3 (low pain sensitivity and low psychosocial distress; n = 29), characterized by high overall PPT and low psychosocial scores compared with the other subgroups. Cluster 1 showed higher values for mean pain intensity (F(2,77) = 10.58, p < 0.001) compared with cluster 3, and cluster 1 showed higher values for disability (F(2,77) = 3.81, p = 0.03) compared with both clusters 2 and 3. Only cluster 1 was distinct from cluster 3 according to both pain and disability outcomes. Pain catastrophizing, depression, and anxiety were the psychosocial variables that best differentiated the subgroups. Overall, these results call attention to the importance of considering pain sensitivity and psychosocial variables to obtain a more comprehensive characterization of CMP patients' subtypes.
Wu, Y.; Liu, S.
2012-01-01
Parameter optimization and uncertainty issues are a great challenge in the application of large environmental models like the Soil and Water Assessment Tool (SWAT), a physically based hydrological model for simulating water and nutrient cycles at the watershed scale. In this study, we present a comprehensive modeling environment for SWAT, including automated calibration and sensitivity and uncertainty analysis capabilities, through integration with the R package Flexible Modeling Environment (FME). To address challenges (e.g., calling the model in R and transferring variables between Fortran and R) in developing such a two-language coupling framework, (1) we converted the Fortran-based SWAT model to an R function (R-SWAT) using the RFortran platform, and alternatively (2) we compiled SWAT as a dynamic link library (DLL). We then wrapped SWAT (via R-SWAT) with FME to perform complex applications, including parameter identifiability, inverse modeling, and sensitivity and uncertainty analysis, in the R environment. The final R-SWAT-FME framework has the following key functionalities: automatic initialization of R, running Fortran-based SWAT and R commands in parallel, transferring parameters and model output between SWAT and R, and inverse modeling with visualization. To examine this framework and demonstrate how it works, a case study simulating streamflow in the Cedar River Basin in Iowa in the United States was used, and we compared it with the built-in auto-calibration tool of SWAT for parameter optimization. Results indicate that both methods performed well and similarly in searching for a set of optimal parameters. Nonetheless, R-SWAT-FME is more attractive due to its instant visualization and its potential to take advantage of other R packages (e.g., for inverse modeling and statistical graphics). The methods presented in this paper are readily adaptable to other model applications that require automated calibration and sensitivity and uncertainty analysis capabilities.
Simulation-based optimization framework for reuse of agricultural drainage water in irrigation.
Allam, A; Tawfik, A; Yoshimura, C; Fleifle, A
2016-05-01
A simulation-based optimization framework for agricultural drainage water (ADW) reuse has been developed through the integration of a water quality model (QUAL2Kw) and a genetic algorithm. This framework was applied to the Gharbia drain in the Nile Delta, Egypt, in summer and winter 2012. First, the water quantity and quality of the drain were simulated using the QUAL2Kw model. Second, uncertainty and sensitivity analyses based on Monte Carlo simulation were performed to assess QUAL2Kw's performance and to identify the most critical variables for the determination of water quality, respectively. Finally, a genetic algorithm was applied to maximize the total reuse quantity from seven reuse locations, subject to the constraint that the standards for using mixed water in irrigation are not violated. The water quality simulations showed that organic matter concentrations are critical management variables in the Gharbia drain. The uncertainty analysis showed the reliability of QUAL2Kw in simulating water quality and quantity along the drain. Furthermore, the sensitivity analysis showed that the 5-day biochemical oxygen demand, chemical oxygen demand, total dissolved solids, total nitrogen and total phosphorus are highly sensitive to point-source flow and quality. Additionally, the optimization results revealed that the reuse quantities of ADW can reach 36.3% and 40.4% of the available ADW in the drain during summer and winter, respectively. These quantities meet 30.8% and 29.1% of the drainage basin's requirements for fresh irrigation water in the respective seasons.
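A toy sketch of the genetic-algorithm step, maximizing total reuse under a penalized water-quality constraint; the site capacities, impact coefficients, and budget are invented, not values from the Gharbia drain study:

```python
import numpy as np

rng = np.random.default_rng(5)
n_sites = 7
cap = np.array([4., 6., 3., 5., 2., 6., 4.])      # reuse capacity per site (toy)
impact = np.array([.8, .5, .9, .4, .7, .6, .5])   # quality impact per unit reused (toy)
budget = 12.0                                     # max total quality impact allowed (toy)

def fitness(x):
    """Total reuse, penalized when the quality constraint is violated."""
    x = np.clip(x, 0, cap)
    violation = max(0.0, float(impact @ x) - budget)
    return float(x.sum()) - 100.0 * violation

pop = rng.uniform(0, 1, (60, n_sites)) * cap      # initial population
for gen in range(200):
    fit = np.array([fitness(x) for x in pop])
    parents = pop[np.argsort(fit)[-30:]]          # truncation selection
    kids = []
    for _ in range(30):
        a, b = parents[rng.integers(30, size=2)]
        mask = rng.uniform(size=n_sites) < 0.5    # uniform crossover
        child = np.where(mask, a, b) + rng.normal(0, 0.1, n_sites)  # mutation
        kids.append(np.clip(child, 0, cap))
    pop = np.vstack([parents, kids])

best = max(pop, key=fitness)
print(best.round(2), round(fitness(best), 2))
```

In the actual framework, evaluating the constraint means running QUAL2Kw for each candidate allocation, which is why the GA's population-based, derivative-free search is a natural fit.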
Eltiti, Stacy; Wallace, Denise; Ridgewell, Anna; Zougkou, Konstantina; Russo, Riccardo; Sepulveda, Francisco; Mirshekar-Syahkal, Dariush; Rasor, Paul; Deeble, Roger; Fox, Elaine
2007-11-01
Individuals with idiopathic environmental illness with attribution to electromagnetic fields (IEI-EMF) believe they suffer negative health effects when exposed to electromagnetic fields from everyday objects such as mobile phone base stations. This study used both open provocation and double-blind tests to determine if sensitive and control individuals experience more negative health effects when exposed to base station-like signals compared with sham. Fifty-six self-reported sensitive and 120 control participants were tested in an open provocation test. Of these, 12 sensitive and 6 controls withdrew after the first session. The remainder completed a series of double-blind tests. Subjective measures of well-being and symptoms as well as physiological measures of blood volume pulse, heart rate, and skin conductance were obtained. During the open provocation, sensitive individuals reported lower levels of well-being in both the global system for mobile communication (GSM) and universal mobile telecommunications system (UMTS) compared with sham exposure, whereas controls reported more symptoms during the UMTS exposure. During double-blind tests the GSM signal did not have any effect on either group. Sensitive participants did report elevated levels of arousal during the UMTS condition, whereas the number or severity of symptoms experienced did not increase. Physiological measures did not differ across the three exposure conditions for either group. Short-term exposure to a typical GSM base station-like signal did not affect well-being or physiological functions in sensitive or control individuals. Sensitive individuals reported elevated levels of arousal when exposed to a UMTS signal. Further analysis, however, indicated that this difference was likely to be due to the effect of order of exposure rather than the exposure itself.
NASA Astrophysics Data System (ADS)
Kumar, Rajeev; Kushwaha, Angad S.; Srivastava, Monika; Mishra, H.; Srivastava, S. K.
2018-03-01
In the present communication, a highly sensitive surface plasmon resonance (SPR) biosensor in the Kretschmann configuration, with the layer sequence prism/zinc oxide/silver/gold/graphene/biomolecules (ss-DNA), is presented. The optimization of the proposed configuration has been accomplished by keeping the thicknesses of the zinc oxide (32 nm), silver (32 nm) and graphene (0.34 nm) layers and the biomolecule layer (100 nm) constant for different values of gold layer thickness (1, 3 and 5 nm). The sensitivity of the proposed SPR biosensor has been demonstrated for a number of design parameters, such as the gold layer thickness, the number of graphene layers, the refractive index of the biomolecules and the thickness of the biomolecule layer. The SPR biosensor with the optimized geometry has greater sensitivity (66 deg/RIU) than conventional (52 deg/RIU) and other graphene-based (53.2 deg/RIU) SPR biosensors. The effect of zinc oxide layer thickness on the sensitivity of the SPR biosensor has also been analysed. From the analysis, it is found that the sensitivity increases significantly with increasing zinc oxide layer thickness, indicating that the zinc oxide intermediate layer plays an important role in improving the sensitivity of the biosensor. The sensitivity also increases with the number of graphene layers (up to nine layers).
Financial analysis of technology acquisition using fractionated lasers as a model.
Jutkowitz, Eric; Carniol, Paul J; Carniol, Alan R
2010-08-01
Ablative fractional lasers are among the most advanced and costly devices on the market. Yet, there is a dearth of published literature on the cost and potential return on investment (ROI) of such devices. The objective of this study was to provide a methodological framework for physicians to evaluate ROI. To facilitate this analysis, we conducted a case study on the potential ROI of eight ablative fractional lasers. In the base case analysis, a 5-year lease and a 3-year lease were assumed as the purchase option with a $0 down payment and 3-month payment deferral. In addition to lease payments, service contracts, labor cost, and disposables were included in the total cost estimate. Revenue was estimated as price per procedure multiplied by total number of procedures in a year. Sensitivity analyses were performed to account for variability in model assumptions. Based on the assumptions of the model, all lasers had higher ROI under the 5-year lease agreement compared with that for the 3-year lease agreement. When comparing results between lasers, those with lower operating and purchase cost delivered a higher ROI. Sensitivity analysis indicates the model is most sensitive to purchase method. If physicians opt to purchase the device rather than lease, they can significantly enhance ROI. ROI analysis is an important tool for physicians who are considering making an expensive device acquisition. However, physicians should not rely solely on ROI and must also consider the clinical benefits of a laser. (c) Thieme Medical Publishers.
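The revenue and cost structure described above lends itself to a compact calculation. A hedged sketch, with all prices, volumes and lease terms hypothetical rather than taken from the study:

    # Sketch of the ROI framework described above: annual revenue is price per
    # procedure x procedures per year; costs are lease payments plus service
    # contract, labor and disposables. All numbers below are hypothetical.
    def annual_roi(price_per_procedure, procedures_per_year,
                   monthly_lease, service_contract, labor_cost,
                   disposables_per_procedure):
        revenue = price_per_procedure * procedures_per_year
        costs = (12 * monthly_lease + service_contract + labor_cost
                 + disposables_per_procedure * procedures_per_year)
        return (revenue - costs) / costs  # return on investment

    # A longer lease lowers the monthly payment, which is why the abstract
    # finds higher ROI under the 5-year lease than the 3-year lease.
    for term, payment in [("3-year lease", 4200.0), ("5-year lease", 2800.0)]:
        roi = annual_roi(1000.0, 250, payment, 12000.0, 30000.0, 50.0)
        print(f"{term}: ROI = {roi:.1%}")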
NASA Astrophysics Data System (ADS)
Prestifilippo, Michele; Scollo, Simona; Tarantola, Stefano
2015-04-01
The uncertainty in volcanic ash forecasts may depend on our knowledge of the model input parameters and on our capability to represent the dynamics of an incoming eruption. Forecasts help governments to reduce the risks associated with volcanic eruptions, and for this reason different kinds of analysis that help to understand the effect that each input parameter has on model outputs are necessary. We present an iterative approach based on the sequential combination of sensitivity analysis, a parameter estimation procedure and Monte Carlo-based uncertainty analysis, applied to the Lagrangian volcanic ash dispersal model PUFF. We vary the main input parameters, such as the total mass, the total grain-size distribution, the plume thickness, the shape of the eruption column, the sedimentation model and the diffusion coefficient, perform thousands of simulations and analyze the results. The study is carried out on two different Etna scenarios: the sub-plinian eruption of 22 July 1998, which formed an eruption column rising 12 km above sea level and lasted a few minutes, and a lava fountain eruption with features similar to the 2011-2013 events, which produced eruption columns up to several kilometers above sea level and lasted several hours. The sensitivity analyses and uncertainty estimation results help us to identify the measurements that volcanologists should perform during a volcanic crisis to reduce the model uncertainty.
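A minimal sketch of the Monte Carlo step of such an analysis, with run_puff a hypothetical stand-in for one dispersal-model run and all input distributions assumed:

    import numpy as np

    rng = np.random.default_rng(0)

    def run_puff(total_mass, column_height_km, diffusion_coeff):
        # Hypothetical stand-in for one PUFF dispersal run; returns a scalar
        # output of interest (e.g., ash load at a target location).
        return total_mass * np.exp(-column_height_km / 10.0) / diffusion_coeff

    # Monte Carlo uncertainty analysis: sample the uncertain inputs, run the
    # model repeatedly, and summarize the spread of the output.
    n = 10_000
    mass   = rng.lognormal(mean=np.log(1e9), sigma=0.5, size=n)  # kg, assumed
    height = rng.uniform(9.0, 15.0, size=n)                      # km a.s.l., assumed
    diff   = rng.uniform(100.0, 10_000.0, size=n)                # m^2/s, assumed

    out = run_puff(mass, height, diff)
    print(f"median = {np.median(out):.3g}, 5-95% = "
          f"[{np.percentile(out, 5):.3g}, {np.percentile(out, 95):.3g}]")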
Microfluidic-Based Enrichment and Retrieval of Circulating Tumor Cells for RT-PCR Analysis.
Gogoi, Priya; Sepehri, Saedeh; Chow, Will; Handique, Kalyan; Wang, Yixin
2017-01-01
Molecular analysis of circulating tumor cells (CTCs) is hindered by the low sensitivity and high level of background leukocytes of currently available CTC enrichment technologies. We have developed a novel device to enrich and retrieve CTCs from blood samples by using a microfluidic chip. The Celsee PREP100 device captures CTCs with high sensitivity and allows the captured CTCs to be retrieved for molecular analysis. It uses a microfluidic chip with approximately 56,320 capture chambers. Based on differences in cell size and deformability, each chamber ensures that smaller blood cells escape while larger CTCs of varying sizes are trapped and isolated in the chambers. In this report, we used the Celsee PREP100 to capture cancer cells spiked into normal donor blood samples. We were able to show that the device can capture as few as 10 cells with high reproducibility. The captured CTCs were retrieved from the microfluidic chip. The cell recovery rate of this back-flow procedure is 100% and the level of remaining background leukocytes is very low (about 300-400 cells). RNA from the retrieved cells is extracted and converted to cDNA, and gene expression analysis of selected cancer markers can be carried out using RT-PCR assays. The sensitive and easy-to-use Celsee PREP100 system represents a promising technology for the capture and molecular characterization of CTCs.
NASA Technical Reports Server (NTRS)
Li, Jun; Koehne, Jessica; Chen, Hua; Cassell, Alan; Ng, Hou Tee; Ye, Qi; Han, Jie; Meyyappan, M.
2004-01-01
There is a strong need for faster, cheaper, and simpler methods for nucleic acid analysis in today's clinical tests. Nanotechnologies can potentially provide solutions to these requirements by integrating nanomaterials with biofunctionalities. Dramatic improvements in sensitivity and multiplexing can be achieved through a high degree of miniaturization. Here, we present our study on the development of an ultrasensitive label-free electronic chip for DNA/RNA analysis based on carbon nanotube nanoelectrode arrays. A reliable nanoelectrode array based on vertically aligned multi-walled carbon nanotubes (MWNTs) embedded in a SiO2 matrix is fabricated using a bottom-up approach. Characteristic nanoelectrode behavior is observed with a low-density MWNT nanoelectrode array in measuring both bulk and surface-immobilized redox species. The open ends of the MWNTs are found to present properties similar to graphite edge-plane electrodes, with a wide potential window, flexible chemical functionalities, and good biocompatibility. A BRCA1-related oligonucleotide probe with 18 bases is covalently functionalized at the open ends of the MWNTs and specifically hybridized with an oligonucleotide target as well as a PCR amplicon. The guanine bases in the target molecules are employed as the signal moieties for the electrochemical measurements. A Ru(bpy)3(2+) mediator is used to further amplify the guanine oxidation signal. This technique has been employed for direct electrochemical detection of a label-free PCR amplicon through specific hybridization with the BRCA1 probe. The detection limit is estimated to be less than approximately 1000 DNA molecules, approaching the sensitivity limit of laser-based fluorescence techniques in DNA microarrays. This system provides a general electronic platform for rapid molecular diagnostics in applications requiring ultrahigh sensitivity, a high degree of miniaturization, simple sample preparation, and low-cost operation.
Zeng, Xiaohui; Li, Jianhe; Peng, Liubao; Wang, Yunhua; Tan, Chongqing; Chen, Gannong; Wan, Xiaomin; Lu, Qiong; Yi, Lidan
2014-01-01
Maintenance gefitinib significantly prolonged progression-free survival (PFS) compared with placebo in patients from eastern Asia with locally advanced/metastatic non-small-cell lung cancer (NSCLC) without disease progression after four chemotherapeutic cycles (21 days per cycle) of first-line platinum-based combination chemotherapy. The objective of the current study was to evaluate the cost-effectiveness of maintenance gefitinib therapy after four standard chemotherapeutic cycles of first-line platinum-based chemotherapy for patients with locally advanced or metastatic NSCLC with unknown EGFR mutations, from a Chinese health care system perspective. A semi-Markov model was designed to evaluate the cost-effectiveness of maintenance gefitinib treatment. Two-parameter Weibull and log-logistic distributions were fitted to the PFS and overall survival curves independently. One-way and probabilistic sensitivity analyses were conducted to assess the stability of the model. The base-case analysis suggested that maintenance gefitinib would increase benefits over a 1-, 3-, 6- or 10-year time horizon, at an incremental $184,829, $19,214, $19,328, and $21,308 per quality-adjusted life-year (QALY) gained, respectively. The most influential variable in the cost-effectiveness analysis was the utility of PFS plus rash, followed by the utility of PFS plus diarrhoea, the utility of progressed disease, the price of gefitinib, the cost of follow-up treatment in the progressed-disease state, and the utility of PFS on oral therapy. The price of gefitinib is the parameter with the greatest potential to reduce the incremental cost per QALY. Probabilistic sensitivity analysis indicated that the probability of maintenance gefitinib being cost-effective was zero under a willingness-to-pay (WTP) threshold of $16,349 (3 × the per-capita gross domestic product of China). The sensitivity analyses all suggested that the model was robust. Maintenance gefitinib following first-line platinum-based chemotherapy for patients with locally advanced/metastatic NSCLC with unknown EGFR mutations is not cost-effective. Decreasing the price of gefitinib may be a preferential choice for meeting treatment demand more widely in China.
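A hedged sketch of the core calculation: a two-parameter Weibull survival curve drives quality-adjusted life expectancy, and the incremental cost-effectiveness ratio (ICER) follows from discounted costs and QALYs. All parameters below are hypothetical, not the study's fitted values:

    import numpy as np

    def weibull_survival(t_months, scale, shape):
        # Two-parameter Weibull survival function S(t) = exp(-(t/scale)^shape).
        return np.exp(-(t_months / scale) ** shape)

    def mean_qalys(scale, shape, utility, horizon_months=120, annual_discount=0.03):
        t = np.arange(horizon_months)
        discount = (1 + annual_discount) ** (-(t / 12.0))
        # Area under the discounted survival curve, in quality-adjusted years.
        return np.sum(weibull_survival(t, scale, shape) * utility * discount) / 12.0

    q_gef, c_gef = mean_qalys(14.0, 1.3, 0.75), 42_000.0  # maintenance arm, assumed
    q_plc, c_plc = mean_qalys(10.0, 1.3, 0.75), 30_000.0  # placebo arm, assumed
    icer = (c_gef - c_plc) / (q_gef - q_plc)
    print(f"ICER = ${icer:,.0f} per QALY gained")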
Garway-Heath, David F; Quartilho, Ana; Prah, Philip; Crabb, David P; Cheng, Qian; Zhu, Haogang
2017-08-01
To evaluate the ability of various visual field (VF) analysis methods to discriminate treatment groups in glaucoma clinical trials and establish the value of time-domain optical coherence tomography (TD OCT) imaging as an additional outcome. VFs and retinal nerve fibre layer thickness (RNFLT) measurements (acquired by TD OCT) from 373 glaucoma patients in the UK Glaucoma Treatment Study (UKGTS) at up to 11 scheduled visits over a 2-year interval formed the cohort to assess the sensitivity of progression analysis methods. Specificity was assessed in 78 glaucoma patients with up to 11 repeated VF and OCT RNFLT measurements over a 3-month interval. Growth curve models assessed the difference in the rates of VF and RNFLT change between treatment groups. Incident progression was identified by three VF-based methods, Guided Progression Analysis (GPA), 'ANSWERS' and 'PoPLR', and one method based on VFs and RNFLT, 'sANSWERS'. Sensitivity, specificity and discrimination between treatment groups were evaluated. The rate of VF change was significantly faster in the placebo group than in the active treatment group (-0.29 vs +0.03 dB/year, P < .001); the rate of RNFLT change did not differ (-1.7 vs -1.1 dB/year, P = .14). After 18 months and at 95% specificity, the sensitivity of ANSWERS and PoPLR was similar (35%); sANSWERS achieved a sensitivity of 70%. GPA, ANSWERS and PoPLR discriminated treatment groups with similar statistical significance; sANSWERS did not discriminate treatment groups. Although the VF progression-detection method that includes VF and RNFLT measurements is more sensitive, it does not improve discrimination between treatment arms.
Population and High-Risk Group Screening for Glaucoma: The Los Angeles Latino Eye Study
Francis, Brian A.; Vigen, Cheryl; Lai, Mei-Ying; Winarko, Jonathan; Nguyen, Betsy; Azen, Stanley
2011-01-01
Purpose. To evaluate the ability of various screening tests, both individually and in combination, to detect glaucoma in the general Latino population and high-risk subgroups. Methods. The Los Angeles Latino Eye Study is a population-based study of eye disease in Latinos 40 years of age and older. Participants (n = 6082) underwent Humphrey visual field testing (HVF), frequency doubling technology (FDT) perimetry, measurement of intraocular pressure (IOP) and central corneal thickness (CCT), and independent assessment of optic nerve vertical cup disc (C/D) ratio. Screening parameters were evaluated for three definitions of glaucoma based on optic disc, visual field, and a combination of both. Analyses were also conducted for high-risk subgroups (family history of glaucoma, diabetes mellitus, and age ≥65 years). Sensitivity, specificity, and receiver operating characteristic curves were calculated for those continuous parameters independently associated with glaucoma. Classification and regression tree (CART) analysis was used to develop a multivariate algorithm for glaucoma screening. Results. Preset cutoffs for screening parameters yielded a generally poor balance of sensitivity and specificity (sensitivity/specificity for IOP ≥21 mm Hg and C/D ≥0.8 was 0.24/0.97 and 0.60/0.98, respectively). Assessment of high-risk subgroups did not improve the sensitivity/specificity of individual screening parameters. A CART analysis using multiple screening parameters—C/D, HVF, and IOP—substantially improved the balance of sensitivity and specificity (sensitivity/specificity 0.92/0.92). Conclusions. No single screening parameter is useful for glaucoma screening. However, a combination of vertical C/D ratio, HVF, and IOP provides the best balance of sensitivity/specificity and is likely to provide the highest yield in glaucoma screening programs. PMID:21245400
Kim, H; Rajagopalan, M S; Beriwal, S; Smith, K J
2017-10-01
Stereotactic radiosurgery (SRS) alone or upfront whole brain radiation therapy (WBRT) plus SRS are the most commonly used treatment options for one to three brain oligometastases. The most recent randomised clinical trial result comparing SRS alone with upfront WBRT plus SRS (NCCTG N0574) has favoured SRS alone for neurocognitive function, whereas treatment options remain controversial in terms of cognitive decline and local control. The aim of this study was to conduct a cost-effectiveness analysis of these two competing treatments. A Markov model was constructed for patients treated with SRS alone or SRS plus upfront WBRT based on largely randomised clinical trials. Costs were based on 2016 Medicare reimbursement. Strategies were compared using the incremental cost-effectiveness ratio (ICER) and effectiveness was measured in quality-adjusted life years (QALYs). One-way and probabilistic sensitivity analyses were carried out. Strategies were evaluated from the healthcare payer's perspective with a willingness-to-pay threshold of $100 000 per QALY gained. In the base case analysis, the median survival was 9 months for both arms. SRS alone resulted in an ICER of $9917 per QALY gained. In one-way sensitivity analyses, results were most sensitive to variation in cognitive decline rates for both groups and median survival rates, but the SRS alone remained cost-effective for most parameter ranges. Based on the current available evidence, SRS alone was found to be cost-effective for patients with one to three brain metastases compared with upfront WBRT plus SRS. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harding, R., E-mail: ruth.harding2@wales.nhs.uk; Trnková, P.; Lomax, A. J.
Purpose: Base of skull meningioma can be treated with both intensity modulated radiation therapy (IMRT) and spot scanned proton therapy (PT). One of the main benefits of PT is better sparing of organs at risk, but due to the physical and dosimetric characteristics of protons, spot scanned PT can be more sensitive to the uncertainties encountered in the treatment process compared with photon treatment. Therefore, robustness analysis should be part of a comprehensive comparison between these two treatment methods in order to quantify and understand the sensitivity of the treatment techniques to uncertainties. The aim of this work was to benchmark a spot scanning treatment planning system for planning of base of skull meningioma and to compare the created plans and analyze their robustness to setup errors against the IMRT technique. Methods: Plans were produced for three base of skull meningioma cases: IMRT planned with a commercial TPS [Monaco (Elekta AB, Sweden)]; single field uniform dose (SFUD) spot scanning PT produced with an in-house TPS (PSI-plan); and SFUD spot scanning PT created with a commercial TPS [XiO (Elekta AB, Sweden)]. A tool for evaluating robustness to random setup errors was created and, for each plan, both a dosimetric evaluation and a robustness analysis to setup errors were performed. Results: It was possible to create clinically acceptable treatment plans for spot scanning proton therapy of meningioma with a commercially available TPS. However, since each treatment planning system uses different methods, this comparison showed different dosimetric results as well as different sensitivities to setup uncertainties. The results confirmed the necessity of an analysis tool for assessing plan robustness to provide a fair comparison of photon and proton plans. Conclusions: Robustness analysis is a critical part of plan evaluation when comparing IMRT plans with spot scanned proton therapy plans.
Estimation of Handgrip Force from SEMG Based on Wavelet Scale Selection.
Wang, Kai; Zhang, Xianmin; Ota, Jun; Huang, Yanjiang
2018-02-24
This paper proposes a nonlinear correlation-based wavelet scale selection technique for selecting the effective wavelet scales for the estimation of handgrip force from surface electromyograms (SEMG). The SEMG signals corresponding to gripping force were collected from extensor and flexor forearm muscles during a force-varying analysis task. We performed a computational sensitivity analysis on the initial nonlinear SEMG-handgrip force model. To explore the nonlinear correlation between ten wavelet scales and handgrip force, a large-scale iteration based on Monte Carlo simulation was conducted. To choose a suitable combination of scales, we proposed a rule for combining wavelet scales based on the sensitivity of each scale, and selected the appropriate combination of wavelet scales using sequence combination analysis (SCA). The results of the SCA indicated that scale combination VI is suitable for estimating force from the extensors and combination V is suitable for the flexors. The proposed method was compared with two earlier methods in prolonged static and force-varying contraction tasks. The experimental results showed that the root mean square errors produced by the proposed method for both static and force-varying contraction tasks were less than 20%. The accuracy and robustness of the handgrip force estimates obtained by the proposed method are better than those of the earlier methods.
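A minimal sketch of the wavelet-scale feature extraction underlying such an approach, using the PyWavelets continuous wavelet transform on a stand-in signal; the wavelet choice, sampling rate and scale subset are assumptions, not the paper's settings:

    import numpy as np
    import pywt  # PyWavelets

    rng = np.random.default_rng(1)
    fs = 1000.0                         # sampling rate (Hz), assumed
    semg = rng.standard_normal(4000)    # stand-in for one recorded SEMG segment

    # Decompose the segment into ten wavelet scales (Morlet) and summarize each
    # scale by its RMS energy. Scale selection as described above would keep the
    # combination (e.g., V or VI) whose energies best track the measured force.
    scales = np.arange(1, 11)
    coeffs, _ = pywt.cwt(semg, scales, "morl", sampling_period=1.0 / fs)
    rms_per_scale = np.sqrt(np.mean(coeffs**2, axis=1))
    print("RMS per scale:", np.round(rms_per_scale, 3))

    # In practice, scale-subset features from many segments would be stacked
    # into a matrix X and regressed on the measured handgrip forces, e.g. with
    # np.linalg.lstsq(X, forces, rcond=None).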
Tisch, Ulrike; Haick, Hossam
2014-06-01
Profiling the body chemistry by means of volatile organic compounds (VOCs) in the breath opens exciting new avenues in medical diagnostics. Gas sensors could provide ideal platforms for realizing portable, hand-held breath testing devices in the near future. This review summarizes the latest developments and applications in the field of chemical sensors for diagnostic breath testing that were presented at the Breath Analysis Summit 2013 in Wallerfangen, Germany. Considerable progress has been made towards clinically applicable breath testing devices, especially by utilizing chemo-sensitive nanomaterials. Examples of several specialized breath testing applications are presented that are either based on stand-alone nanomaterial-based sensors being highly sensitive and specific to individual breath compounds over others, or on combinations of several highly specific sensors, or on experimental nanomaterial-based sensors arrays. Other interesting approaches include the adaption of a commercially available MOx-based sensor array to indirect breath testing applications, using a sample pre-concentration method, and the development of compact integrated GC-sensor systems. The recent trend towards device integration has led to the development of fully integrated prototypes of point-of-care devices. We describe and compare the performance of several prototypes that are based on different sensing technologies and evaluate their potential as low-cost and readily available next-generation medical devices.
Passiglia, Francesco; Rizzo, Sergio; Rolfo, Christian; Galvano, Antonio; Bronte, Enrico; Incorvaia, Lorena; Listi, Angela; Barraco, Nadia; Castiglia, Marta; Calo, Valentina; Bazan, Viviana; Russo, Antonio
2018-03-08
Recent studies evaluated the diagnostic accuracy of circulating tumor DNA (ctDNA) in the detection of epidermal growth factor receptor (EGFR) mutations from the plasma of NSCLC patients, overall showing high concordance with standard tissue genotyping. However, it is less clear whether the location of metastatic sites influences the ability to identify EGFR mutations in plasma. This pooled analysis aims to evaluate the association between metastatic site location and the sensitivity of ctDNA analysis in detecting EGFR mutations in NSCLC patients. Data from all published studies evaluating the sensitivity of plasma-based EGFR-mutation testing, stratified by metastatic site location (extrathoracic (M1b) vs intrathoracic (M1a)), were collected by searching PubMed, the Cochrane Library, and the American Society of Clinical Oncology and World Conference on Lung Cancer meeting proceedings. Pooled odds ratios (ORs) and 95% confidence intervals (95% CIs) were calculated for the sensitivity of ctDNA analysis according to metastatic site location. A total of ten studies, with 1425 patients, were eligible. The pooled analysis showed that the sensitivity of ctDNA-based EGFR-mutation testing is significantly higher in patients with M1b vs M1a disease (OR: 5.09; 95% CI: 2.93-8.84). A significant association was observed for both EGFR-activating (OR: 4.30, 95% CI: 2.35-7.88) and resistance T790M mutations (OR: 11.89, 95% CI: 1.45-97.22), regardless of the use of digital-PCR (OR: 5.85, 95% CI: 3.56-9.60) or non-digital PCR technologies (OR: 2.96, 95% CI: 2.24-3.91). These data suggest that the location of metastatic sites significantly influences the diagnostic accuracy of ctDNA analysis in detecting EGFR mutations in NSCLC patients. Copyright © Bentham Science Publishers; For any queries, please email epub@benthamscience.org.
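A hedged sketch of standard inverse-variance pooling of odds ratios on the log scale, the usual machinery behind such a pooled analysis; the per-study values below are placeholders, not the ten studies summarized above:

    import numpy as np

    # Fixed-effect meta-analysis of odds ratios: work on the log scale,
    # recover each study's standard error from its 95% CI, weight by inverse
    # variance, then transform the pooled estimate back. Values are hypothetical.
    ors   = np.array([4.2, 6.1, 3.5])
    ci_lo = np.array([1.9, 2.4, 1.2])
    ci_hi = np.array([9.3, 15.5, 10.2])

    log_or = np.log(ors)
    se = (np.log(ci_hi) - np.log(ci_lo)) / (2 * 1.96)  # SE from the 95% CI width
    w = 1.0 / se**2                                    # inverse-variance weights

    pooled = np.sum(w * log_or) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    print(f"pooled OR = {np.exp(pooled):.2f} "
          f"(95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f})")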
Maintaining gender sensitivity in the family practice: facilitators and barriers.
Celik, Halime; Lagro-Janssen, Toine; Klinge, Ineke; van der Weijden, Trudy; Widdershoven, Guy
2009-12-01
This study aims to identify the facilitators and barriers perceived by General Practitioners (GPs) to maintain a gender perspective in family practice. Nine semi-structured interviews were conducted among nine pairs of GPs. The data were analysed by means of deductive content analysis using theory-based methods to generate facilitators and barriers to gender sensitivity. Gender sensitivity in family practice can be influenced by several factors which ultimately determine the extent to which a gender sensitive approach is satisfactorily practiced by GPs in the doctor-patient relationship. Gender awareness, repetition and reminders, motivation triggers and professional guidelines were found to facilitate gender sensitivity. On the other hand, lacking skills and routines, scepticism, heavy workload and the timing of implementation were found to be barriers to gender sensitivity. While the potential effect of each factor affecting gender sensitivity in family practice has been elucidated, the effects of the interplay between these factors still need to be determined.
Heidt, Sebastiaan; Haasnoot, Geert W; Claas, Frans H J
2018-05-24
Highly sensitized patients awaiting a renal transplant have a low chance of receiving an organ offer. Defining acceptable antigens and using this information for allocation purposes can vastly enhance transplantation of this subgroup of patients, which is the essence of the Eurotransplant Acceptable Mismatch program. Acceptable antigens can be determined by extensive laboratory testing, as well as on basis of human leukocyte antigen (HLA) epitope analyses. Within the Acceptable Mismatch program, there is no effect of HLA mismatches on long-term graft survival. Furthermore, patients transplanted through the Acceptable Mismatch program have similar long-term graft survival to nonsensitized patients transplanted through regular allocation. Although HLA epitope analysis is already being used for defining acceptable HLA antigens for highly sensitized patients in the Acceptable Mismatch program, increasing knowledge on HLA antibody - epitope interactions will pave the way toward the definition of acceptable epitopes for highly sensitized patients in the future. Allocation based on acceptable antigens can facilitate transplantation of highly sensitized patients with excellent long-term graft survival.
Swider, Paweł; Lewtak, Jan P; Gryko, Daniel T; Danikiewicz, Witold
2013-10-01
The chemistry of porphyrinoids depends greatly on data obtained by mass spectrometry. For this reason, it is essential to determine the range of applicability of mass spectrometry ionization methods. In this study, the sensitivity of three different atmospheric pressure ionization techniques, electrospray ionization, atmospheric pressure chemical ionization and atmospheric pressure photoionization, was tested for several porphyrinoids and their metal complexes. Electrospray ionization was shown to be the best ionization technique because of its high sensitivity for derivatives of cyanocobalamin, free-base corroles and porphyrins. In the case of metallocorroles and metalloporphyrins, atmospheric pressure photoionization with a dopant proved to be the most sensitive ionization method. It was also shown that for relatively acidic compounds, particularly corroles, negative ion mode provides better sensitivity than positive ion mode. The results supply substantial information on the methodology of porphyrinoid analysis by mass spectrometry, which can be useful in designing future MS or liquid chromatography-MS experiments. Copyright © 2013 John Wiley & Sons, Ltd.
Genetics and clinical response to warfarin and edoxaban in patients with venous thromboembolism
Vandell, Alexander G; Walker, Joseph; Brown, Karen S; Zhang, George; Lin, Min; Grosso, Michael A; Mercuri, Michele F
2017-01-01
Objective The aim of this study was to investigate whether genetic variants can identify patients with venous thromboembolism (VTE) at an increased risk of bleeding with warfarin. Methods Hokusai-venous thromboembolism (Hokusai VTE), a randomised, multinational, double-blind, non-inferiority trial, evaluated the safety and efficacy of edoxaban versus warfarin in patients with VTE initially treated with heparin. In this subanalysis of Hokusai VTE, patients genotyped for variants in CYP2C9 and VKORC1 genes were divided into three warfarin sensitivity types (normal, sensitive and highly sensitive) based on their genotypes. An exploratory analysis was also conducted comparing normal responders to pooled sensitive responders (ie, sensitive and highly sensitive responders). Results The analysis included 47.7% (3956/8292) of the patients in Hokusai VTE. Among 1978 patients randomised to warfarin, 63.0% (1247) were normal responders, 34.1% (675) were sensitive responders and 2.8% (56) were highly sensitive responders. Compared with normal responders, sensitive and highly sensitive responders had heparin therapy discontinued earlier (p<0.001), had a decreased final weekly warfarin dose (p<0.001), spent more time overanticoagulated (p<0.001) and had an increased bleeding risk with warfarin (sensitive responders HR 1.38 [95% CI 1.11 to 1.71], p=0.0035; highly sensitive responders 1.79 [1.09 to 2.99]; p=0.0252). Conclusion In this study, CYP2C9 and VKORC1 genotypes identified patients with VTE at increased bleeding risk with warfarin. Trial registration number NCT00986154. PMID:28689179
Low cost charged-coupled device (CCD) based detectors for Shiga toxins activity analysis
USDA-ARS?s Scientific Manuscript database
To improve food safety there is a need to develop simple, low-cost sensitive devices for detection of foodborne pathogens and their toxins. We describe a simple and relatively low-cost webcam-based detector which can be used for various optical detection modalities, including fluorescence, chemilumi...
Chirakup, Suphachai; Chaiyakunapruk, Nathorn; Chaikledkeaw, Usa; Pongcharoensuk, Petcharat; Ongphiphadhanakul, Boonsong; Roze, Stephane; Valentine, William J; Palmer, Andrew J
2008-03-01
The national essential drug committee in Thailand suggested that only one of the thiazolidinediones be included in the hospital formulary, but little was known about their cost-effectiveness. This study aims to determine the incremental cost-effectiveness ratio of pioglitazone 45 mg compared with rosiglitazone 8 mg in uncontrolled type 2 diabetic patients receiving sulfonylureas and metformin in Thailand. A Markov diabetes model (Center for Outcome Research model) was used in this study. Baseline patient characteristics were based on the Thai diabetes registry project. Costs of diabetes were calculated mainly from Buddhachinaraj hospital. The nonspecific mortality rate and transition probabilities of death from renal replacement therapy were obtained from Thai sources. The clinical effectiveness of thiazolidinediones was retrieved from a meta-analysis. All analyses were based on the government hospital policymaker perspective. Both costs and outcomes were discounted at a rate of 3%. Base-case analyses were expressed as incremental cost per quality-adjusted life year (QALY) gained. A series of sensitivity analyses was performed. In the base-case analysis, the pioglitazone group had better clinical outcomes and higher lifetime costs. The incremental cost per QALY gained was 186,246 baht (US$ 5389). The acceptability curves showed that the probability of pioglitazone being cost-effective was 29% at a willingness to pay of one times the Thai gross domestic product per capita (GDP per capita). The final outcomes were most sensitive to the effect of pioglitazone on %HbA1c reduction. Our findings showed that in type 2 diabetic patients who cannot control their blood glucose under the combination of a sulfonylurea and metformin, the use of pioglitazone 45 mg fell, on average, within the cost-effective range recommended by the World Health Organization (one to three times GDP per capita) compared with rosiglitazone 8 mg. Nevertheless, based on the sensitivity analysis, its probability of being cost-effective was quite low. Hospital policymakers may consider our findings as part of the information for the decision-making process.
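A minimal sketch of how an acceptability curve of this kind is read off a probabilistic sensitivity analysis: the probability of being cost-effective at a given willingness-to-pay (WTP) is the share of simulated draws with positive incremental net monetary benefit. The cost and QALY distributions below are hypothetical:

    import numpy as np

    rng = np.random.default_rng(2)

    # Probabilistic sensitivity analysis draws of incremental cost and QALYs;
    # distributions and parameters are assumptions, not the study's outputs.
    n = 5000
    d_cost = rng.normal(170_000, 40_000, n)   # incremental cost (baht), assumed
    d_qaly = rng.normal(0.9, 0.35, n)         # incremental QALYs, assumed

    for wtp in [100_000, 150_000, 186_246, 250_000]:
        nmb = wtp * d_qaly - d_cost           # incremental net monetary benefit
        print(f"WTP {wtp:>7,} baht/QALY: "
              f"P(cost-effective) = {np.mean(nmb > 0):.2f}")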
Wang, Zongrong; Wang, Shan; Zeng, Jifang; Ren, Xiaochen; Chee, Adrian J Y; Yiu, Billy Y S; Chung, Wai Choi; Yang, Yong; Yu, Alfred C H; Roberts, Robert C; Tsang, Anderson C O; Chow, Kwok Wing; Chan, Paddy K L
2016-07-01
A pressure sensor based on irregular microhump patterns has been proposed and developed. The devices show high sensitivity and a broad operating pressure regime compared with regular micropattern devices. Finite element analysis (FEA) is utilized to confirm the sensing mechanism and predict the performance of the pressure sensor based on the microhump structures. Silicon carbide sandpaper is employed as the mold to develop polydimethylsiloxane (PDMS) microhump patterns of various sizes. The active layer of the piezoresistive pressure sensor is developed by spin coating PEDOT:PSS on top of the patterned PDMS. The devices show an average sensitivity as high as 851 kPa(-1), a broad operating pressure range (20 kPa), low operating power (100 nW), and fast response speed (6.7 kHz). Owing to their flexibility, the devices are applied to human body motion sensing and radial artery pulse monitoring. These flexible high-sensitivity devices show great potential for the next generation of smart sensors for robotics, real-time health monitoring, and biomedical applications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Automatic burst detection for the EEG of the preterm infant.
Jennekens, Ward; Ruijs, Loes S; Lommen, Charlotte M L; Niemarkt, Hendrik J; Pasman, Jaco W; van Kranen-Mastenbroek, Vivianne H J M; Wijn, Pieter F F; van Pul, Carola; Andriessen, Peter
2011-10-01
To aid prognosis and stratification of clinical treatment for preterm infants, a method for automated detection of bursts, interburst intervals (IBIs) and continuous patterns in the electroencephalogram (EEG) is developed. Results are evaluated for preterm infants with normal neurological follow-up at 2 years. The detection algorithm (MATLAB®) for bursts, IBIs and continuous patterns is based on selection by amplitude, time span, number of channels and number of active electrodes. Annotations by two neurophysiologists were used to determine threshold values. The training set consisted of EEG recordings of four preterm infants with postmenstrual age (PMA, gestational age + postnatal age) of 29-34 weeks. Optimal threshold values were based on the overall highest sensitivity. For evaluation, both observers verified detections in an independent dataset of four EEG recordings with comparable PMA. Algorithm performance was assessed by calculating sensitivity and positive predictive value. Evaluation of the algorithm yielded sensitivity values of 90% ± 6%, 80% ± 9% and 97% ± 5% for burst, IBI and continuous patterns, respectively. The corresponding positive predictive values were 88% ± 8%, 96% ± 3% and 85% ± 15%, respectively. In conclusion, the algorithm showed high sensitivity and positive predictive values for bursts, IBIs and continuous patterns in preterm EEG. Computer-assisted analysis of EEG may allow objective and reproducible analysis for clinical treatment.
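A hedged sketch of amplitude/duration/channel-count burst detection in the spirit described above, on synthetic multichannel data; every threshold is a placeholder, not a value fitted to the study's training set:

    import numpy as np

    rng = np.random.default_rng(3)
    fs = 256                                   # sampling rate (Hz), assumed
    eeg = rng.standard_normal((8, fs * 60))    # 8 channels x 60 s, stand-in data

    amp_thresh = 2.5      # amplitude threshold (z-units here), placeholder
    min_dur = 0.5         # minimum burst duration (s), placeholder
    min_channels = 4      # minimum simultaneously active channels, placeholder

    active = np.abs(eeg) > amp_thresh          # per-channel threshold crossings
    candidate = active.sum(axis=0) >= min_channels

    # Keep only runs of candidate samples lasting at least min_dur seconds.
    edges = np.diff(candidate.astype(int), prepend=0, append=0)
    starts, ends = np.where(edges == 1)[0], np.where(edges == -1)[0]
    bursts = [(s / fs, e / fs) for s, e in zip(starts, ends)
              if (e - s) / fs >= min_dur]
    print(f"{len(bursts)} bursts detected")

    # Against expert annotations: sensitivity = TP/(TP+FN), PPV = TP/(TP+FP).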
Shape optimization of three-dimensional stamped and solid automotive components
NASA Technical Reports Server (NTRS)
Botkin, M. E.; Yang, R.-J.; Bennett, J. A.
1987-01-01
The shape optimization of realistic, 3-D automotive components is discussed. The integration of the major parts of the total process: modeling, mesh generation, finite element and sensitivity analysis, and optimization are stressed. Stamped components and solid components are treated separately. For stamped parts a highly automated capability was developed. The problem description is based upon a parameterized boundary design element concept for the definition of the geometry. Automatic triangulation and adaptive mesh refinement are used to provide an automated analysis capability which requires only boundary data and takes into account sensitivity of the solution accuracy to boundary shape. For solid components a general extension of the 2-D boundary design element concept has not been achieved. In this case, the parameterized surface shape is provided using a generic modeling concept based upon isoparametric mapping patches which also serves as the mesh generator. Emphasis is placed upon the coupling of optimization with a commercially available finite element program. To do this it is necessary to modularize the program architecture and obtain shape design sensitivities using the material derivative approach so that only boundary solution data is needed.
NASA Astrophysics Data System (ADS)
Starodub, Nickolaj F.; Starodub, Valentyna M.; Krivenchuk, Vladimir E.; Shapovalenko, Valentyna F.
2002-02-01
A new type of multi-immune sensor was developed. It is based on electrolyte-insulator-semiconductor structures and is intended for the determination of herbicides such as simazine, atrazine and 2,4-D. The specific antibodies were immobilized on nitrocellulose disks, which were placed in measuring cells. The analysis was performed by sequential saturation of the antibodies left unbound after exposure to native herbicide in the investigated sample, using labelled herbicide. When horseradish peroxidase (HRP) was used as the label, the sensitivity of this multi-immune sensor was about 5 and 1.25 µg/L for simazine and 2,4-D, respectively. Replacing HRP with β-glucose oxidase increased the sensitivity of the analysis of these herbicides approximately fivefold. The linear ranges of the registered concentrations were 1.0-150.0 and 0.25-150.0 ng/mL for simazine and 2,4-D, respectively. The developed immune sensor is recommended for wide screening of herbicides in the environment, and ways of further increasing its sensitivity are proposed.
Xu, Li; Jiang, Yong; Qiu, Rong
2018-01-01
In the present study, the co-pyrolysis behavior of rape straw, waste tire and their various blends was investigated. TG-FTIR indicated that co-pyrolysis was characterized by a four-step reaction, and that H2O, CH, OH, CO2 and CO groups were the main products evolved during the process. Additionally, using BBD-based experimental results, best-fit multiple regression models with high R2-pred values (94.10% for mass loss and 95.37% for reaction heat), which correlated the explanatory variables with the responses, were presented. The derived models were analyzed by ANOVA at the 95% confidence level; the F-test, lack-of-fit test and normal probability plots of the residuals indicated that the models described the experimental data well. Finally, the model uncertainties as well as the interactive effects of these parameters were studied, and the total-, first- and second-order sensitivity indices of the operating factors were computed using Sobol' variance decomposition. To the authors' knowledge, this is the first time global parameter sensitivity analysis has been performed in the (co-)pyrolysis literature. Copyright © 2017 Elsevier Ltd. All rights reserved.
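A minimal sketch of Saltelli-style estimation of first- and total-order Sobol' indices, the variance decomposition referred to above, applied to a hypothetical stand-in response rather than the fitted regression models:

    import numpy as np

    rng = np.random.default_rng(4)

    def model(x):
        # Hypothetical stand-in response (e.g., mass loss) in three scaled
        # operating factors; not the regression model fitted in the paper.
        return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

    # Saltelli scheme: two independent sample matrices A and B, plus matrices
    # AB_i that take column i from B; standard estimators then give S_i, S_Ti.
    n, k = 50_000, 3
    A, B = rng.uniform(0, 1, (n, k)), rng.uniform(0, 1, (n, k))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))

    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        fABi = model(ABi)
        S_i = np.mean(fB * (fABi - fA)) / var          # first-order index
        S_Ti = 0.5 * np.mean((fA - fABi) ** 2) / var   # total-order index
        print(f"factor {i + 1}: S = {S_i:.3f}, S_T = {S_Ti:.3f}")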
NASA Astrophysics Data System (ADS)
Önal, Orkun; Ozmenci, Cemre; Canadinc, Demircan
2014-09-01
A multi-scale modeling approach was applied to predict the impact response of a strain rate sensitive high-manganese austenitic steel. The roles of texture, geometry and strain rate sensitivity were successfully taken into account all at once by coupling crystal plasticity and finite element (FE) analysis. Specifically, crystal plasticity was utilized to obtain the multi-axial flow rule at different strain rates based on the experimental deformation response under uniaxial tensile loading. The equivalent stress - equivalent strain response was then incorporated into the FE model for the sake of a more representative hardening rule under impact loading. The current results demonstrate that reliable predictions can be obtained by proper coupling of crystal plasticity and FE analysis even if the experimental flow rule of the material is acquired under uniaxial loading and at moderate strain rates that are significantly slower than those attained during impact loading. Furthermore, the current findings also demonstrate the need for an experiment-based multi-scale modeling approach for the sake of reliable predictions of the impact response.
Feizizadeh, Bakhtiar; Blaschke, Thomas
2014-03-04
GIS-based multicriteria decision analysis (MCDA) methods are increasingly being used in landslide susceptibility mapping. However, the uncertainties that are associated with MCDA techniques may significantly impact the results. This may sometimes lead to inaccurate outcomes and undesirable consequences. This article introduces a new GIS-based MCDA approach. We illustrate the consequences of applying different MCDA methods within a decision-making process through uncertainty analysis. Three GIS-MCDA methods in conjunction with Monte Carlo simulation (MCS) and Dempster-Shafer theory are analyzed for landslide susceptibility mapping (LSM) in the Urmia lake basin in Iran, which is highly susceptible to landslide hazards. The methodology comprises three stages. First, the LSM criteria are ranked and a sensitivity analysis is implemented to simulate error propagation based on the MCS. The resulting weights are expressed through probability density functions. Accordingly, within the second stage, three MCDA methods, namely analytical hierarchy process (AHP), weighted linear combination (WLC) and ordered weighted average (OWA), are used to produce the landslide susceptibility maps. In the third stage, accuracy assessments are carried out and the uncertainties of the different results are measured. We compare the accuracies of the three MCDA methods based on (1) the Dempster-Shafer theory and (2) a validation of the results using an inventory of known landslides and their respective coverage based on object-based image analysis of IRS-ID satellite images. The results of this study reveal that through the integration of GIS and MCDA models, it is possible to identify strategies for choosing an appropriate method for LSM. Furthermore, our findings indicate that the integration of MCDA and MCS can significantly improve the accuracy of the results. In LSM, the AHP method performed best, while the OWA reveals better performance in the reliability assessment. The WLC operation yielded poor results.
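A hedged sketch of the weighted linear combination (WLC) step with Monte Carlo weight uncertainty, on synthetic criterion rasters; the weight distribution is an assumption, not the study's AHP-derived values:

    import numpy as np

    rng = np.random.default_rng(5)

    # Criterion layers are standardized rasters (slope, lithology score, etc.);
    # weights are perturbed per Monte Carlo run, and the spread of the
    # resulting susceptibility maps measures weight-induced uncertainty.
    h, w, k = 100, 100, 4                      # raster size, number of criteria
    layers = rng.uniform(0, 1, (k, h, w))      # synthetic standardized criteria
    base_weights = np.array([0.4, 0.3, 0.2, 0.1])

    maps = []
    for _ in range(200):                       # Monte Carlo runs
        wts = rng.normal(base_weights, 0.05)   # perturbation sigma assumed
        wts = np.clip(wts, 0, None)
        wts /= wts.sum()                       # re-normalize to sum to 1
        maps.append(np.tensordot(wts, layers, axes=1))  # WLC overlay

    maps = np.array(maps)
    mean_map, std_map = maps.mean(axis=0), maps.std(axis=0)
    print(f"mean susceptibility {mean_map.mean():.3f}, "
          f"max weight-induced std {std_map.max():.3f}")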
Local sensitivity analysis for inverse problems solved by singular value decomposition
Hill, M.C.; Nolan, B.T.
2010-01-01
Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and(or) parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic, and combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA's Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content. All 16 of the SVD parameters could be estimated by regression based on the range of singular values. Identifiability statistic results varied based on the number of SVD parameters included. Identifiability statistics calculated for four SVD parameters indicate the same three most important process-model parameters as CSS/PCC (WFC1, WFC2, and BD2), but the order differed. Additionally, the identifiability statistic showed that BD1 was almost as dominant as WFC1. The CSS/PCC analysis showed that this results from its high correlation with WFC1 (-0.94), and not its individual sensitivity. Such distinctions, combined with analysis of how high correlations and(or) sensitivities result from the constructed model, can produce important insights into, for example, the use of sensitivity analysis to design monitoring networks. In conclusion, the statistics considered identified similar important parameters. They differ because (1) CSS/PCC can be more awkward to use because sensitivity and interdependence are considered separately, and (2) the identifiability statistic requires consideration of how many SVD parameters to include. A continuing challenge is to understand how these computationally efficient methods compare with computationally demanding global methods like Markov-Chain Monte Carlo given common nonlinear processes and the often even more nonlinear models.
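A minimal sketch of the two statistics, computed from a synthetic Jacobian of (scaled) sensitivities; an induced near-collinearity between two columns plays the role of the WFC1/BD1 interdependence discussed above:

    import numpy as np

    rng = np.random.default_rng(6)

    # Synthetic Jacobian J (observations x parameters), taken to hold scaled
    # sensitivities for log-transformed parameters with unit observation weights.
    n_obs, n_par = 200, 4
    J = rng.standard_normal((n_obs, n_par))
    J[:, 3] = 0.95 * J[:, 0] + 0.05 * rng.standard_normal(n_obs)  # collinearity

    css = np.sqrt(np.mean(J ** 2, axis=0))     # composite scaled sensitivities

    cov = np.linalg.inv(J.T @ J)               # parameter variance-covariance
    d = np.sqrt(np.diag(cov))
    pcc = cov / np.outer(d, d)                 # parameter correlation coefficients

    print("CSS:", np.round(css, 3))
    print("PCC(0,3):", round(pcc[0, 3], 3))    # near +/-1 flags interdependence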
Smart zwitterionic membranes with on/off behavior for protein transport.
Su, Yanlei; Zheng, Lili; Li, Chao; Jiang, Zhongyi
2008-09-25
Poly(acrylonitrile) (PAN)-based zwitterionic membranes, composed of PAN and a poly(N,N-dimethyl-N-methacryloxyethyl-N-(3-sulfopropyl)) copolymer, are electrolyte-sensitive smart membranes. The hydrophilicity was increased and protein adsorption was remarkably decreased for the membranes in response to environmental stimuli. FTIR spectroscopic analysis directly provided molecular-level observation of the enhanced dissociation and hydration of zwitterionic sulfobetaine dipoles at higher electrolyte concentrations. The smart PAN-based zwitterionic membranes can close or open channels for protein transport under different NaCl concentrations. The electrolyte-sensitive on/off switching of protein transport is reversible.
Zhang, Yu; Qi, Fuyuan; Li, Ying; Zhou, Xin; Sun, Hongfeng; Zhang, Wei; Liu, Daliang; Song, Xi-Ming
2017-07-15
We report a novel graphene oxide quantum dot (GOQD)-sensitized porous TiO2 microsphere for efficient photoelectric conversion. Electrochemical analysis along with the Mott-Schottky equation reveals the conductivity type and energy band structure of the two semiconductors. Based on their energy band structures, visible light-induced electrons can transfer from the p-type GOQDs to the n-type TiO2. Enhanced photocurrent and photocatalytic activity under visible light further confirm the enhanced separation of electrons and holes in the nanocomposite. Copyright © 2017 Elsevier Inc. All rights reserved.
Wu, Chuang; Tse, Ming-Leung Vincent; Liu, Zhengyong; Guan, Bai-Ou; Lu, Chao; Tam, Hwa-Yaw
2013-09-01
We propose and demonstrate a highly sensitive in-line photonic crystal fiber (PCF) microfluidic refractometer. Ultrathin C-shaped fibers are spliced in-between the PCF and standard single-mode fibers. The C-shaped fibers provide openings for liquid to flow in and out of the PCF. Based on a Sagnac interferometer, the refractive index (RI) response of the device is investigated theoretically and experimentally. A high sensitivity of 6621 nm/RIU for liquid RI from 1.330 to 1.333 is achieved in the experiment, which agrees well with the theoretical analysis.
Improved Extreme Learning Machine Based on Sensitivity Analysis
NASA Astrophysics Data System (ADS)
Cui, Licheng; Zhai, Huawei; Wang, Benchao; Qu, Zengtang
2018-03-01
The extreme learning machine (ELM) and its existing variants have weaknesses, such as computational complexity and learning error. After an in-depth analysis, and drawing on the importance of hidden nodes in SVMs, a novel sensitivity analysis method is proposed that matches intuitive cognitive habits. Based on this analysis, an improved ELM is proposed that can remove hidden nodes before the learning-error criterion is met and can efficiently manage the number of hidden nodes, thereby improving performance. Comparative tests show that it performs better in terms of learning time and accuracy.
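A hedged sketch of an ELM with sensitivity-based pruning of hidden nodes; the sensitivity measure used here (the norm of each node's contribution to the fitted output) is an illustrative choice, not the paper's exact criterion:

    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic regression task.
    X = rng.uniform(-1, 1, (300, 2))
    y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.standard_normal(300)

    # Classic ELM: random hidden layer, output weights by least squares.
    L = 60                                         # initial hidden nodes
    W, b = rng.standard_normal((2, L)), rng.standard_normal(L)
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)

    # Node sensitivity: norm of each node's contribution to the fitted output;
    # keep the most sensitive nodes and refit (an assumed pruning rule).
    sens = np.linalg.norm(H * beta, axis=0)
    keep = np.argsort(sens)[-30:]

    H2 = H[:, keep]
    beta2, *_ = np.linalg.lstsq(H2, y, rcond=None)
    for name, Hm, bm in [("full", H, beta), ("pruned", H2, beta2)]:
        rmse = np.sqrt(np.mean((Hm @ bm - y) ** 2))
        print(f"{name}: {Hm.shape[1]} nodes, train RMSE = {rmse:.4f}")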
Hill, Ryan M; Oosterhoff, Benjamin; Kaplow, Julie B
2017-07-01
Although a large number of risk markers for suicide ideation have been identified, little guidance has been provided to prospectively identify adolescents at risk for suicide ideation within community settings. The current study addressed this gap in the literature by utilizing classification tree analysis (CTA) to provide a decision-making model for screening adolescents at risk for suicide ideation. Participants were N = 4,799 youth (mean age = 16.15 years, SD = 1.63) who completed both Waves 1 and 2 of the National Longitudinal Study of Adolescent to Adult Health. CTA was used to generate a series of decision rules for identifying adolescents at risk for reporting suicide ideation at Wave 2. Findings revealed 3 distinct solutions with varying sensitivity and specificity for identifying adolescents who reported suicide ideation. Sensitivity of the classification trees ranged from 44.6% to 77.6%. The tree with the greatest specificity and lowest sensitivity was based on a history of suicide ideation. The tree with moderate sensitivity and high specificity was based on depressive symptoms, suicide attempts or suicide among family and friends, and social support. The most sensitive but least specific tree utilized these factors as well as gender, ethnicity, hours of sleep, school-related factors, and future orientation. These classification trees offer community organizations options for instituting large-scale screenings for suicide ideation risk depending on the available resources and modality of services to be provided. This study provides a theoretically and empirically driven model for prospectively identifying adolescents at risk for suicide ideation and has implications for preventive interventions among at-risk youth. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
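A minimal sketch of classification tree analysis for screening, using scikit-learn on synthetic stand-ins for the risk markers named above, with sensitivity and specificity read off the fitted tree:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(8)

    # Synthetic stand-ins for three risk markers; not the Add Health variables.
    n = 4000
    X = np.column_stack([
        rng.integers(0, 2, n),          # history of suicide ideation (0/1)
        rng.normal(0, 1, n),            # depressive symptom score (z)
        rng.normal(0, 1, n),            # social support score (z)
    ])
    logit = -2.5 + 1.8 * X[:, 0] + 0.9 * X[:, 1] - 0.6 * X[:, 2]
    y = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))  # simulated outcome

    # Shallow tree = a small set of human-readable decision rules.
    tree = DecisionTreeClassifier(max_depth=3, class_weight="balanced",
                                  random_state=0).fit(X, y)
    pred = tree.predict(X)
    tp = np.sum(pred & y)
    fn = np.sum(~pred & y)
    tn = np.sum(~pred & ~y)
    fp = np.sum(pred & ~y)
    print(f"sensitivity = {tp / (tp + fn):.2f}, "
          f"specificity = {tn / (tn + fp):.2f}")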
Takenouchi, Osamu; Fukui, Shiho; Okamoto, Kenji; Kurotani, Satoru; Imai, Noriyasu; Fujishiro, Miyuki; Kyotani, Daiki; Kato, Yoshinao; Kasahara, Toshihiko; Fujita, Masaharu; Toyoda, Akemi; Sekiya, Daisuke; Watanabe, Shinichi; Seto, Hirokazu; Hirota, Morihiko; Ashikaga, Takao; Miyazawa, Masaaki
2015-11-01
To develop a testing strategy incorporating the human cell line activation test (h-CLAT), the direct peptide reactivity assay (DPRA) and DEREK, we created an expanded data set of 139 chemicals (102 sensitizers and 37 non-sensitizers) by extending the existing data set of 101 chemicals through the collaborative projects of the Japan Cosmetic Industry Association. Of the additional 38 chemicals, 15 chemicals with relatively low water solubility (log Kow > 3.5) were selected to clarify the limitations of testing strategies for lipophilic chemicals. The predictivities of the h-CLAT, DPRA and DEREK, and of combinations thereof, were evaluated by comparison with results of the local lymph node assay. When evaluating the 139 chemicals using combinations of the three methods based on the integrated testing strategy (ITS) concept (ITS-based test battery) and a sequential testing strategy (STS) weighing the predictive performance of the h-CLAT and DPRA, overall predictivities were similar to those previously found for the 101-chemical data set. An analysis of false-negative chemicals suggested that a major limitation of our strategies was the testing of chemicals with low water solubility. When negative results for chemicals with log Kow > 3.5 were excluded, the sensitivity and accuracy of the ITS improved to 97% (91 of 94 chemicals) and 89% (114 of 128), respectively. Likewise, the sensitivity and accuracy of the STS improved to 98% (92 of 94) and 85% (111 of 129). Moreover, the ITS and STS also showed good correlation with the local lymph node assay across three potency classifications, yielding accuracies of 74% (ITS) and 73% (STS). Thus, the inclusion of log Kow in the analysis could give both strategies a higher predictive performance. Copyright © 2015 John Wiley & Sons, Ltd.
Cost-effectiveness of prucalopride in the treatment of chronic constipation in the Netherlands
Nuijten, Mark J. C.; Dubois, Dominique J.; Joseph, Alain; Annemans, Lieven
2015-01-01
Objective: To assess the cost-effectiveness of prucalopride vs. continued laxative treatment for chronic constipation in patients in the Netherlands in whom laxatives have failed to provide adequate relief. Methods: A Markov model was developed to estimate the cost-effectiveness of prucalopride in patients with chronic constipation receiving standard laxative treatment from the perspective of Dutch payers in 2011. Data sources included published prucalopride clinical trials, published Dutch price/tariff lists, and national population statistics. The model simulated the clinical and economic outcomes associated with prucalopride vs. standard treatment and had a cycle length of 1 month and a follow-up time of 1 year. Response to treatment was defined as the proportion of patients who achieved “normal bowel function”. One-way and probabilistic sensitivity analyses were conducted to test the robustness of the base case. Results: In the base case analysis, the cost of prucalopride relative to continued laxative treatment was € 9015 per quality-adjusted life-year (QALY). Extensive sensitivity analyses and scenario analyses confirmed that the base case cost-effectiveness estimate was robust. One-way sensitivity analyses showed that the model was most sensitive in response to prucalopride; incremental cost-effectiveness ratios ranged from € 6475 to 15,380 per QALY. Probabilistic sensitivity analyses indicated that there is a greater than 80% probability that prucalopride would be cost-effective compared with continued standard treatment, assuming a willingness-to-pay threshold of € 20,000 per QALY from a Dutch societal perspective. A scenario analysis was performed for women only, which resulted in a cost-effectiveness ratio of € 7773 per QALY. Conclusion: Prucalopride was cost-effective in a Dutch patient population, as well as in a women-only subgroup, who had chronic constipation and who obtained inadequate relief from laxatives. PMID:25926794
NASA Astrophysics Data System (ADS)
Wang, L.-P.; Ochoa-Rodríguez, S.; Onof, C.; Willems, P.
2015-09-01
Gauge-based radar rainfall adjustment techniques have been widely used to improve the applicability of radar rainfall estimates to large-scale hydrological modelling. However, their use for urban hydrological applications is limited as they were mostly developed based upon Gaussian approximations and therefore tend to smooth off so-called "singularities" (features of a non-Gaussian field) that can be observed in the fine-scale rainfall structure. Overlooking the singularities could be critical, given that their distribution is highly consistent with that of local extreme magnitudes. This deficiency may cause large errors in the subsequent urban hydrological modelling. To address this limitation and improve the applicability of adjustment techniques at urban scales, a method is proposed herein which incorporates a local singularity analysis into existing adjustment techniques and allows the preservation of the singularity structures throughout the adjustment process. In this paper the proposed singularity analysis is incorporated into the Bayesian merging technique and the performance of the resulting singularity-sensitive method is compared with that of the original Bayesian (non singularity-sensitive) technique and the commonly used mean field bias adjustment. This test is conducted using as case study four storm events observed in the Portobello catchment (53 km2) (Edinburgh, UK) during 2011 and for which radar estimates, dense rain gauge and sewer flow records, as well as a recently calibrated urban drainage model were available. The results suggest that, in general, the proposed singularity-sensitive method can effectively preserve the non-normality in local rainfall structure, while retaining the ability of the original adjustment techniques to generate nearly unbiased estimates. Moreover, the ability of the singularity-sensitive technique to preserve the non-normality in rainfall estimates often leads to better reproduction of the urban drainage system's dynamics, particularly of peak runoff flows.
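A hedged sketch of the local singularity estimation at one pixel in the multifractal sense: window averages over growing sizes are regressed on a log-log scale, and the exponent's departure from the spatial dimension flags a singularity. The field and window sizes below are synthetic:

    import numpy as np

    rng = np.random.default_rng(9)

    # Skewed, rainfall-like synthetic field; d = 2 for a rainfall map. The
    # window average scales as mu(size) ~ size^(alpha - d), so the log-log
    # slope plus d estimates the local singularity exponent alpha.
    field = rng.gamma(0.6, 2.0, size=(65, 65))
    i = j = 32                                   # pixel under analysis
    eps = np.array([1, 2, 3, 4, 5])              # window half-widths (pixels)

    mu = np.array([field[i - e:i + e + 1, j - e:j + e + 1].mean() for e in eps])
    slope = np.polyfit(np.log(2 * eps + 1), np.log(mu), 1)[0]
    alpha = slope + 2.0
    print(f"alpha = {alpha:.2f}  (alpha != 2 marks a local singularity)")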
Herring, William; Pearson, Isobel; Purser, Molly; Nakhaipour, Hamid Reza; Haiderali, Amin; Wolowacz, Sorrel; Jayasundara, Kavisha
2016-01-01
Our objective was to estimate the cost effectiveness of ofatumumab plus chlorambucil (OChl) versus chlorambucil in patients with chronic lymphocytic leukaemia for whom fludarabine-based therapies are considered inappropriate from the perspective of the publicly funded healthcare system in Canada. A semi-Markov model (3-month cycle length) used survival curves to govern progression-free survival (PFS) and overall survival (OS). Efficacy and safety data and health-state utility values were estimated from the COMPLEMENT-1 trial. Post-progression treatment patterns were based on clinical guidelines, Canadian treatment practices and published literature. Total and incremental expected lifetime costs (in Canadian dollars [$Can], year 2013 values), life-years and quality-adjusted life-years (QALYs) were computed. Uncertainty was assessed via deterministic and probabilistic sensitivity analyses. The discounted lifetime health and economic outcomes estimated by the model showed that, compared with chlorambucil, first-line treatment with OChl led to an increase in QALYs (0.41) and total costs ($Can27,866) and to an incremental cost-effectiveness ratio (ICER) of $Can68,647 per QALY gained. In deterministic sensitivity analyses, the ICER was most sensitive to the modelling time horizon and to the extrapolation of OS treatment effects beyond the trial duration. In probabilistic sensitivity analysis, the probability of cost effectiveness at a willingness-to-pay threshold of $Can100,000 per QALY gained was 59 %. Base-case results indicated that improved overall response and PFS for OChl compared with chlorambucil translated to improved quality-adjusted life expectancy. Sensitivity analysis suggested that OChl is likely to be cost effective subject to uncertainty associated with the presence of any long-term OS benefit and the model time horizon.
Reduction and Uncertainty Analysis of Chemical Mechanisms Based on Local and Global Sensitivities
NASA Astrophysics Data System (ADS)
Esposito, Gaetano
Numerical simulations of critical reacting flow phenomena in hypersonic propulsion devices require accurate representation of finite-rate chemical kinetics. The chemical kinetic models available for hydrocarbon fuel combustion are rather large, involving hundreds of species and thousands of reactions. As a consequence, they cannot be used in multi-dimensional computational fluid dynamic calculations in the foreseeable future due to the prohibitive computational cost. In addition to the computational difficulties, it is also known that some fundamental chemical kinetic parameters of detailed models have a significant level of uncertainty, due to the limited experimental data available and a poor understanding of the interactions among kinetic parameters. In the present investigation, local and global sensitivity analysis techniques are employed to develop a systematic approach to reducing and analyzing detailed chemical kinetic models. Unlike previous studies in which skeletal model reduction was based on the separate analysis of simple cases, in this work a novel strategy based on Principal Component Analysis of local sensitivity values is presented. This new approach is capable of simultaneously taking into account all the relevant canonical combustion configurations over different composition, temperature and pressure conditions. Moreover, the procedure developed in this work represents the first documented inclusion of non-premixed extinction phenomena, which are of great relevance in hypersonic combustors, in an automated reduction algorithm. The application of the skeletal reduction to a detailed kinetic model consisting of 111 species in 784 reactions is demonstrated. The resulting reduced skeletal model of 37-38 species showed that the global ignition/propagation/extinction phenomena of ethylene-air mixtures can be predicted within an accuracy of 2% of the full detailed model. The problems of both understanding non-linear interactions between kinetic parameters and identifying sources of uncertainty affecting relevant reaction pathways are usually addressed by resorting to Global Sensitivity Analysis (GSA) techniques. In particular, the most sensitive reactions controlling combustion phenomena are first identified using the Morris Method and then analyzed under the Random Sampling-High Dimensional Model Representation (RS-HDMR) framework. The HDMR decomposition shows that 10% of the variance seen in the extinction strain rate of non-premixed flames is due to second-order effects between parameters, whereas the maximum concentration of acetylene, a key soot precursor, is affected mostly by first-order contributions. Moreover, the analysis of the global sensitivity indices demonstrates that improving the accuracy of the reaction rates involving the vinyl radical, C2H3, can drastically reduce the uncertainty in predicting targeted flame properties. Finally, the back-propagation of the experimental uncertainty of the extinction strain rate to the parameter space is also performed. This exercise, achieved by recycling the numerical solutions of the RS-HDMR, shows that some regions of the parameter space have a high probability of reproducing the experimental value of the extinction strain rate within its uncertainty bounds. Therefore this study demonstrates that the uncertainty analysis of bulk flame properties can effectively provide information on relevant chemical reactions.
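The principal-component step can be sketched generically: collect local sensitivity coefficients over many conditions and targets into a matrix, then rank reactions by their loadings on the leading components. A toy sketch with a random stand-in matrix (not the author's code or mechanism):

```python
import numpy as np

# Hypothetical local sensitivity matrix S: rows = (condition, target) pairs,
# columns = reactions; S[i, j] = d ln(target_i) / d ln(k_j)
rng = np.random.default_rng(1)
S = rng.normal(size=(200, 50))

# PCA via SVD of the column-centred sensitivity matrix
Sc = S - S.mean(axis=0)
U, sigma, Vt = np.linalg.svd(Sc, full_matrices=False)
var_explained = sigma**2 / np.sum(sigma**2)

# Reactions with large loadings on the leading components are "important";
# keep enough components to explain 99% of the variance
n_pc = np.searchsorted(np.cumsum(var_explained), 0.99) + 1
importance = np.sqrt(((Vt[:n_pc].T * sigma[:n_pc]) ** 2).sum(axis=1))
keep = np.argsort(importance)[::-1][:20]  # retain the 20 most influential reactions
print(keep)
```

Reactions (and the species that appear only in discarded reactions) with consistently small loadings across all sampled conditions are the candidates for removal in a skeletal reduction of this kind.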
Receiver operating characteristic analysis of age-related changes in lineup performance.
Humphries, Joyce E; Flowe, Heather D
2015-04-01
In the basic face memory literature, support has been found for the late maturation hypothesis, which holds that face recognition ability is not fully developed until at least adolescence. Support for the late maturation hypothesis in the criminal lineup identification literature, however, has been equivocal because of the analytic approach that has been used to examine age-related changes in identification performance. Recently, receiver operating characteristic (ROC) analysis was applied for the first time in the adult eyewitness memory literature to examine whether memory sensitivity differs across different types of lineup tests. ROC analysis allows for the separation of memory sensitivity from response bias in the analysis of recognition data. Here, we have made the first ROC-based comparison of adults' and children's (5- and 6-year-olds and 9- and 10-year-olds) memory performance on lineups by reanalyzing data from Humphries, Holliday, and Flowe (2012). In line with the late maturation hypothesis, memory sensitivity was significantly greater for adults compared with young children. Memory sensitivity for older children was similar to that for adults. The results indicate that the late maturation hypothesis can be generalized to account for age-related performance differences on an eyewitness memory task. The implications for developmental eyewitness memory research are discussed.
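A lineup ROC of the kind used here plots cumulative hit rate against cumulative false-alarm rate as the confidence criterion is relaxed, and summarizes sensitivity with the (partial) area under the curve; lineup ROCs are typically partial because false-alarm rates stay well below 1. A minimal sketch with hypothetical confidence-binned counts:

```python
import numpy as np

def roc_points(hits_per_bin, fas_per_bin, n_target, n_lure):
    """Cumulative (false-alarm rate, hit rate) pairs, most to least confident."""
    hr = np.cumsum(hits_per_bin) / n_target
    far = np.cumsum(fas_per_bin) / n_lure
    return np.concatenate(([0.0], far)), np.concatenate(([0.0], hr))

def partial_auc(far, hr):
    """Trapezoidal area under the (possibly partial) ROC curve."""
    return np.trapz(hr, far)

# Hypothetical counts for 5 confidence levels (high -> low), 100 target-present
# and 100 target-absent lineups
far, hr = roc_points([30, 15, 10, 5, 5], [2, 4, 6, 8, 10], n_target=100, n_lure=100)
print(f"pAUC = {partial_auc(far, hr):.3f}")
```

Comparing pAUCs between age groups over a common false-alarm range is what separates memory sensitivity from the response-bias differences that confounded earlier analytic approaches.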
Xie, Feng; O'Reilly, Daria; Ferrusi, Ilia L; Blackhouse, Gord; Bowen, James M; Tarride, Jean-Eric; Goeree, Ron
2009-05-01
The aim of this paper is to present an economic evaluation of diagnostic technologies, using Helicobacter pylori screening strategies for the prevention of gastric cancer as an illustration. A Markov model was constructed to compare the lifetime cost and effectiveness of 4 potential strategies: no screening, the serology test by enzyme-linked immunosorbent assay (ELISA), the stool antigen test (SAT), and the ¹³C-urea breath test (UBT) for the detection of H. pylori among a hypothetical cohort of 10,000 Canadian men aged 35 years. Special parameter consideration included the sensitivity and specificity of each screening strategy, which determined the model structure and treatment regimen. The primary outcome measured was the incremental cost-effectiveness ratio between the screening strategies and the no-screening strategy. Base-case analysis and probabilistic sensitivity analysis were performed using the point estimates of the parameters and Monte Carlo simulations, respectively. Compared with the no-screening strategy in the base-case analysis, the incremental cost-effectiveness ratio was $33,000 per quality-adjusted life-year (QALY) for the ELISA, $29,800 per QALY for the SAT, and $50,400 per QALY for the UBT. The probabilistic sensitivity analysis revealed that the no-screening strategy was more cost effective if the willingness to pay (WTP) was <$20,000 per QALY, while the SAT had the highest probability of being cost effective if the WTP was >$30,000 per QALY. Neither the ELISA nor the UBT was a cost-effective strategy over a wide range of WTP values. Although the UBT had the highest sensitivity and specificity, either no screening or the SAT could be the most cost-effective strategy depending on the WTP threshold values from an economic perspective. This highlights the importance of economic evaluations of diagnostic technologies.
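The probabilistic sensitivity analysis step can be made concrete with net monetary benefit: sample costs and QALYs per strategy, then report how often each strategy is optimal at a given willingness to pay. A toy sketch with placeholder distributions (not the published model's parameters):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
wtp = 30_000  # willingness to pay, $/QALY

# Hypothetical sampled outcomes (cost, QALY) per strategy
strategies = {
    "no screening": (rng.normal(1000, 100, n), rng.normal(20.00, 0.05, n)),
    "SAT":          (rng.normal(1800, 150, n), rng.normal(20.03, 0.05, n)),
    "UBT":          (rng.normal(2600, 200, n), rng.normal(20.03, 0.05, n)),
}

# Net monetary benefit per draw; probability each strategy is optimal at this WTP
nmb = np.column_stack([wtp * q - c for c, q in strategies.values()])
best = nmb.argmax(axis=1)
for i, name in enumerate(strategies):
    print(f"P({name} cost-effective at WTP={wtp}) = {np.mean(best == i):.2f}")
```

Sweeping `wtp` over a range and re-plotting these probabilities yields the cost-effectiveness acceptability curves implied by statements such as "the SAT had the highest probability of being cost effective if the WTP was >$30,000 per QALY".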
NASA Technical Reports Server (NTRS)
Baker, John; Thorpe, Ira
2012-01-01
Thoroughly studied classic space-based gravitational-wave mission concepts such as the Laser Interferometer Space Antenna (LISA) are based on laser-interferometry techniques. Ongoing developments in atom-interferometry techniques have spurred recently proposed alternative mission concepts. These different approaches can be understood on a common footing. We present a comparative analysis of how each type of instrument responds to some of the noise sources that may limit gravitational-wave mission concepts. Sensitivity to laser frequency instability is essentially the same for either approach. Spacecraft acceleration reference stability sensitivities are different, allowing smaller spacecraft separations in the atom-interferometry approach, but acceleration noise requirements are nonetheless similar. Each approach has distinct additional measurement noise issues.
Shape optimization using a NURBS-based interface-enriched generalized FEM
Najafi, Ahmad R.; Safdari, Masoud; Tortorelli, Daniel A.; ...
2016-11-26
This study presents a gradient-based shape optimization over a fixed mesh using a non-uniform rational B-splines-based interface-enriched generalized finite element method, applicable to multi-material structures. In the proposed method, non-uniform rational B-splines are used to parameterize the design geometry precisely and compactly by a small number of design variables. An analytical shape sensitivity analysis is developed to compute derivatives of the objective and constraint functions with respect to the design variables. Subtle but important new terms involve the sensitivity of the shape functions and their spatial derivatives. Finally, verification and illustrative problems are solved to demonstrate the precision and capability of the method.
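Analytical shape sensitivities of the kind developed here are customarily verified against finite differences. A generic gradient-check sketch on a stand-in objective (illustrative only; not the paper's NURBS machinery):

```python
import numpy as np

def fd_check(f, grad_f, x, h=1e-6):
    """Max relative error between an analytic gradient and central differences."""
    g = grad_f(x)
    g_fd = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g_fd[i] = (f(x + e) - f(x - e)) / (2 * h)  # central difference
    return np.max(np.abs(g - g_fd) / (np.abs(g_fd) + 1e-12))

# Toy objective standing in for an objective function of design variables
f = lambda x: np.sum(x**3)
grad = lambda x: 3 * x**2
print(fd_check(f, grad, np.array([0.5, 1.2, -0.7])))  # should be ~1e-9 or smaller
```

Agreement to near machine precision over a range of step sizes is the usual evidence that subtle terms, such as the shape-function sensitivities mentioned above, have not been dropped from the analytic derivation.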
NASA Astrophysics Data System (ADS)
Brill, Nicolai; Wirtz, Mathias; Merhof, Dorit; Tingart, Markus; Jahr, Holger; Truhn, Daniel; Schmitt, Robert; Nebelung, Sven
2016-07-01
Polarization-sensitive optical coherence tomography (PS-OCT) is a light-based, high-resolution, real-time, noninvasive, and nondestructive imaging modality yielding quasimicroscopic cross-sectional images of cartilage. As yet, comprehensive parameterization and quantification of birefringence and tissue properties have not been performed on human cartilage. PS-OCT and algorithm-based image analysis were used to objectively grade human cartilage degeneration in terms of surface irregularity, tissue homogeneity, signal attenuation, as well as birefringence coefficient and band width, height, depth, and number. Degeneration-dependent changes were noted for the first three parameters exclusively, thereby questioning the diagnostic value of PS-OCT birefringence measures in the assessment of human cartilage degeneration.
NASA Astrophysics Data System (ADS)
Vanrolleghem, Peter A.; Mannina, Giorgio; Cosenza, Alida; Neumann, Marc B.
2015-03-01
Sensitivity analysis represents an important step in improving the understanding and use of environmental models. Indeed, by means of global sensitivity analysis (GSA), modellers may identify both important (factor prioritisation) and non-influential (factor fixing) model factors. No general rule has yet been defined for verifying the convergence of GSA methods. In order to fill this gap, this paper presents a convergence analysis of three widely used GSA methods (SRC, Extended FAST and Morris screening) for an urban drainage stormwater quality-quantity model. After convergence was achieved, the results of each method were compared. In particular, a discussion on the peculiarities, applicability, and reliability of the three methods is presented. Moreover, a graphical Venn-diagram-based classification scheme and a precise terminology for better identifying important, interacting and non-influential factors for each method are proposed. In terms of convergence, it was shown that sensitivity indices related to factors of the quantity model achieve convergence faster. Results for the Morris screening method deviated considerably from those of the other methods. Factors related to the quality model require a much higher number of simulations than the number suggested in the literature for achieving convergence with this method. In fact, the results have shown that the term "screening" is improperly used, as the method may exclude important factors from further analysis. Moreover, for the presented application the convergence analysis shows more stable sensitivity coefficients for the Extended FAST method compared to SRC and Morris screening. Substantial agreement in terms of factor fixing was found between the Morris screening and Extended FAST methods. In general, the water quality related factors exhibited more important interactions than factors related to water quantity. Furthermore, in contrast to water quantity model outputs, water quality model outputs were found to be characterised by high non-linearity.
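For orientation, Morris screening is available in open-source packages such as SALib; a minimal sketch on a stand-in model follows (the factor names and bounds are hypothetical), where convergence would be checked by growing N until the mu*-based ranking stabilizes:

```python
import numpy as np
from SALib.sample import morris as morris_sample
from SALib.analyze import morris as morris_analyze

problem = {
    "num_vars": 3,
    "names": ["runoff_coeff", "washoff_rate", "buildup_rate"],  # hypothetical factors
    "bounds": [[0.1, 0.9], [0.01, 0.5], [0.1, 5.0]],
}

X = morris_sample.sample(problem, N=100, num_levels=4)       # Morris trajectories
Y = X[:, 0] * 2.0 + np.sin(X[:, 1] * 10) + 0.1 * X[:, 2]     # stand-in model
res = morris_analyze.analyze(problem, X, Y, num_levels=4)

# mu_star ranks overall importance; sigma flags interactions/non-linearity
for name, mu_star, sigma in zip(problem["names"], res["mu_star"], res["sigma"]):
    print(f"{name}: mu* = {mu_star:.3f}, sigma = {sigma:.3f}")
```

Repeating the analysis for increasing N and checking whether the factor ranking (not just the index values) has stopped changing is one concrete form of the convergence test the paper argues is missing from common practice.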
Liu, Ting; He, Xiang-ge
2006-05-01
To evaluate the overall diagnostic capabilities of frequency-doubling technology (FDT) in patients with primary glaucoma, with standard automated perimetry (SAP) and/or optic disc appearance as the gold standard. A comprehensive electronic search in MEDLINE, EMBASE, Cochrane Library, BIOSIS, Previews, HMIC, IPA, OVID, CNKI, CBMdisc, VIP information, CMCC, CCPD, SSreader and 21dmedia and a manual search in related textbooks, journals, congress articles and their references were performed to identify relevant English- and Chinese-language articles. Criteria for adaptability were established according to validity criteria for diagnostic research published by the Cochrane Methods Group on Screening and Diagnostic Tests. The quality of the included articles was assessed and relevant data were extracted for analysis. Statistical analysis was performed with Meta Test version 0.6 software. Heterogeneity of the included articles was tested and used to select the appropriate effects model for calculating pooled weighted sensitivity and specificity. A Summary Receiver Operating Characteristic (SROC) curve was established and the area under the curve (AUC) was calculated. Finally, sensitivity analysis was performed. Fifteen English articles (21 studies) of 206 retrieved articles were included in the present study, with a total of 3172 patients. The reported sensitivity of FDT ranged from 0.51 to 1.00, and specificity from 0.58 to 1.00. The pooled weighted sensitivity and specificity for FDT with 95% confidence intervals (95% CI) after correction for standard error were 0.86 (0.80-0.90) and 0.87 (0.81-0.91), respectively. The AUC of the SROC was 93.01%. Sensitivity analysis demonstrated no disproportionate influence of any individual study. Based on this meta-analysis, the included articles are of good quality and FDT can be a highly efficient diagnostic test for primary glaucoma. However, a high-quality prospective study is still required for further analysis.
Recent approaches for enhancing sensitivity in enantioseparations by CE.
Sánchez-Hernández, Laura; García-Ruiz, Carmen; Luisa Marina, María; Luis Crego, Antonio
2010-01-01
This article reviews the latest methodological and instrumental improvements for enhancing sensitivity in chiral analysis by CE. The review covers literature from March 2007 until May 2009, that is, the works published after the appearance of the latest review article on the same topic by Sánchez-Hernández et al. [Electrophoresis 2008, 29, 237-251]. Off-line and on-line sample treatment techniques, on-line sample preconcentration strategies based on electrophoretic and chromatographic principles, and alternative detection systems to the widely employed UV/Vis detection in CE are the most relevant approaches discussed for improving sensitivity. Microchip technologies are also included since they can open up great possibilities to achieve sensitive and fast enantiomeric separations.
Physically-based modelling of high magnitude torrent events with uncertainty quantification
NASA Astrophysics Data System (ADS)
Wing-Yuen Chow, Candace; Ramirez, Jorge; Zimmermann, Markus; Keiler, Margreth
2017-04-01
High magnitude torrent events are associated with the rapid propagation of vast quantities of water and available sediment downslope, where human settlements may be established. Assessing the vulnerability of built structures to these events is a part of consequence analysis, where hazard intensity is related to the degree of loss sustained. The specific contribution of the presented work is a procedure to simulate these damaging events by applying physically-based modelling and to include uncertainty information about the simulated results. This is a first step in the development of vulnerability curves based on several intensity parameters (i.e. maximum velocity, sediment deposition depth and impact pressure). The investigation process begins with the collection, organization and interpretation of detailed post-event documentation and photograph-based observation data of affected structures in three sites that exemplify the impact of highly destructive mudflows and flood occurrences on settlements in Switzerland. Hazard intensity proxies are then simulated with the physically-based FLO-2D model (O'Brien et al., 1993). Prior to modelling, global sensitivity analysis is conducted to support a better understanding of model behaviour, parameterization and the quantification of uncertainties (Song et al., 2015). The inclusion of information describing the degree of confidence in the simulated results supports the credibility of vulnerability curves developed with the modelled data. First, key parameters are identified and selected based on a literature review. Truncated a priori ranges of parameter values are then defined by expert solicitation. Local sensitivity analysis is performed based on manual calibration to provide an understanding of the parameters relevant to the case studies of interest. Finally, automated parameter estimation is performed to comprehensively search for optimal parameter combinations and associated values, which are evaluated using the observed data collected in the first stage of the investigation. O'Brien, J.S., Julien, P.Y., Fullerton, W.T., 1993. Two-dimensional water flood and mudflow simulation. Journal of Hydraulic Engineering 119(2): 244-261. Song, X., Zhang, J., Zhan, C., Xuan, Y., Ye, M., Xu, C., 2015. Global sensitivity analysis in hydrological modeling: Review of concepts, methods, theoretical frameworks. Journal of Hydrology 523: 739-757.
Pulsed quantum cascade laser-based cavity ring-down spectroscopy for ammonia detection in breath.
Manne, Jagadeeshwari; Sukhorukov, Oleksandr; Jäger, Wolfgang; Tulip, John
2006-12-20
Breath analysis can be a valuable, noninvasive tool for the clinical diagnosis of a number of pathological conditions. The detection of ammonia in exhaled breath is of particular interest, for it has been linked to kidney malfunction and peptic ulcers. Pulsed cavity ring-down spectroscopy in the mid-IR region has developed into a sensitive analytical technique for trace gas analysis. A gas analyzer based on a pulsed mid-IR quantum cascade laser operating near 970 cm⁻¹ has been developed for the detection of ammonia levels in breath. We report a sensitivity of approximately 50 parts per billion with a 20 s time resolution for ammonia detection in breath using this system. The challenges and possible solutions for the quantification of ammonia in human breath by the described technique are discussed.
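Quantification in cavity ring-down spectroscopy rests on the change in ring-down time between an empty and an absorber-filled cavity. A minimal sketch (the ring-down times and cross-section below are illustrative assumptions, not the instrument's values):

```python
C_CM_PER_S = 2.998e10  # speed of light, cm/s

def absorption_coefficient(tau_empty_s, tau_sample_s):
    """Absorption coefficient alpha (cm^-1) from empty- and sample-cavity
    ring-down times: alpha = (1/c) * (1/tau_sample - 1/tau_empty)."""
    return (1.0 / tau_sample_s - 1.0 / tau_empty_s) / C_CM_PER_S

def number_density(alpha_cm, cross_section_cm2):
    """Absorber number density (molecules/cm^3) via alpha = sigma * N."""
    return alpha_cm / cross_section_cm2

# Illustrative numbers only (assumed, not taken from the paper)
alpha = absorption_coefficient(tau_empty_s=1.00e-6, tau_sample_s=0.98e-6)
n = number_density(alpha, cross_section_cm2=1.0e-18)
print(f"alpha = {alpha:.2e} cm^-1, N = {n:.2e} cm^-3")
```

Because the measurement compares two decay times rather than two intensities, it is insensitive to shot-to-shot laser power fluctuations, which is a key reason the technique reaches parts-per-billion sensitivity.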
Dannhauer, Torben; Sattler, Martina; Wirth, Wolfgang; Hunter, David J; Kwoh, C Kent; Eckstein, Felix
2014-08-01
Biomechanical measurement of muscle strength represents established technology in evaluating limb function. Yet analysis of longitudinal change suffers from relatively large between-measurement variability. Here, we determine the sensitivity to change of magnetic resonance imaging (MRI)-based measurement of thigh muscle anatomical cross-sectional areas (ACSAs) versus isometric strength in limbs with and without structurally progressive knee osteoarthritis (KOA), with a focus on the quadriceps. Of 625 "Osteoarthritis Initiative" participants with radiographic KOA, 20 had MRI cartilage loss and radiographic joint space width loss in the right knee, isometric muscle strength measurements, and axial T1-weighted spin-echo acquisitions of the thigh. Muscle ACSAs were determined from manual segmentation at 33% femoral length (distal to proximal). In progressor knees, the reduction in quadriceps ACSA between baseline and 2-year follow-up was -2.8 ± 7.9% (standardized response mean [SRM] = -0.35), and it was -1.8 ± 6.8% (SRM = -0.26) in matched, non-progressive KOA controls. The decline in extensor strength was more variable than that in ACSAs, both in progressors (-3.9 ± 20%; SRM = -0.20) and in non-progressive controls (-4.5 ± 28%; SRM = -0.16). MRI-based analysis of quadriceps muscle ACSAs appears to be more sensitive to longitudinal change than isometric extensor strength and is suggestive of greater loss in limbs with structurally progressive KOA than in non-progressive controls.
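The standardized response mean reported above is simply the mean longitudinal change divided by the standard deviation of that change. A minimal sketch with made-up ACSA values:

```python
import numpy as np

def standardized_response_mean(baseline, follow_up):
    """SRM = mean(change) / SD(change); larger |SRM| = greater sensitivity to change."""
    change = np.asarray(follow_up) - np.asarray(baseline)
    return change.mean() / change.std(ddof=1)

# Hypothetical quadriceps ACSA values (cm^2) at baseline and 2-year follow-up
baseline = np.array([62.1, 58.4, 70.3, 65.0, 59.8])
follow_up = np.array([60.0, 57.9, 68.1, 63.8, 58.9])
print(f"SRM = {standardized_response_mean(baseline, follow_up):.2f}")
```

Because the denominator is the variability of the change score, a noisy measurement (such as the strength tests above) can show a larger mean decline yet a smaller SRM than a precise one.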
Parameter estimation and sensitivity analysis in an agent-based model of Leishmania major infection
Jones, Douglas E.; Dorman, Karin S.
2009-01-01
Computer models of disease take a systems biology approach toward understanding host-pathogen interactions. In particular, data-driven computer model calibration is the basis for inference of immunological and pathogen parameters, assessment of model validity, and comparison between alternative models of immune or pathogen behavior. In this paper we describe the calibration and analysis of an agent-based model of Leishmania major infection. A model of macrophage loss following uptake of necrotic tissue is proposed to explain macrophage depletion following peak infection. Using Gaussian processes to approximate the computer code, we perform a sensitivity analysis to identify important parameters and to characterize their influence on the simulated infection. The analysis indicates that increasing growth rate can favor or suppress pathogen loads, depending on the infection stage and the pathogen's ability to avoid detection. Subsequent calibration of the model against previously published biological observations suggests that L. major has a relatively slow growth rate and can replicate for an extended period of time before damaging the host cell. PMID:19837088
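The emulation step can be sketched with a generic library: fit a Gaussian process to a modest number of (parameter, output) pairs from the simulator, then probe the cheap surrogate for sensitivity. A minimal scikit-learn sketch with a stand-in simulator (not the authors' agent-based model):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(3)

def simulator(theta):
    """Stand-in for an expensive agent-based run;
    theta columns = (growth rate, detection avoidance), both scaled to [0, 1]."""
    return np.sin(3 * theta[:, 0]) + 0.5 * theta[:, 1] ** 2

X = rng.uniform(0, 1, size=(40, 2))          # design points in parameter space
y = simulator(X) + rng.normal(0, 0.01, 40)   # noisy simulator outputs

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                              normalize_y=True).fit(X, y)

# Cheap surrogate-based sensitivity: vary one input, hold the other fixed
grid = np.linspace(0, 1, 101)
for j, name in enumerate(["growth_rate", "detection_avoidance"]):
    probe = np.full((101, 2), 0.5)
    probe[:, j] = grid
    mu = gp.predict(probe)
    print(f"{name}: predicted output range {mu.max() - mu.min():.3f}")
```

Once fitted, the emulator can be evaluated millions of times, which is what makes variance-based sensitivity measures and subsequent calibration affordable for simulators that take minutes or hours per run.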
Evaluation of Contamination Inspection and Analysis Methods through Modeling System Performance
NASA Technical Reports Server (NTRS)
Seasly, Elaine; Dever, Jason; Stuban, Steven M. F.
2016-01-01
Contamination is usually identified as a risk on the risk register for sensitive space systems hardware. Despite detailed, time-consuming, and costly contamination control efforts during assembly, integration, and test of space systems, contaminants are still found during visual inspections of hardware. Improved methods are needed to gather information during systems integration to catch potential contamination issues earlier and manage contamination risks better. This research evaluates contamination inspection and analysis methods to determine optical system sensitivity to minimum detectable molecular contamination levels based on IEST-STD-CC1246E non-volatile residue (NVR) cleanliness levels. Potential future degradation of the system is modeled given chosen modules representative of optical elements in an optical system, minimum detectable molecular contamination levels for a chosen inspection and analysis method, and the effect of contamination on the system. By modeling system performance based on when molecular contamination is detected during systems integration and at what cleanliness level, the decision maker can perform trades amongst different inspection and analysis methods and determine whether a planned method is adequate to meet system requirements and manage contamination risk.
Samsudin, Mohd Dinie Muhaimin; Mat Don, Mashitah
2015-01-01
Oil palm trunk (OPT) sap was utilized for growth and bioethanol production by Saccharomyces cerevisiae, with the addition of palm oil mill effluent (POME) as a nutrient supplier. A maximum yield (YP/S) of 0.464 g bioethanol/g glucose was attained in the OPT sap-POME-based media. However, OPT sap and POME are heterogeneous in properties, and fermentation performance might change if the process is repeated. The contribution of parametric uncertainty to bioethanol fermentation performance was then assessed using Monte Carlo simulation (stochastic variables) to determine probability distributions arising from fluctuation and variation of the kinetic model parameters. Results showed that, based on 100,000 samples tested, the yield (YP/S) ranged over 0.423-0.501 g/g. A sensitivity analysis was also done to evaluate the impact of each kinetic parameter on the fermentation performance. It was found that bioethanol fermentation depends strongly on the growth of the tested yeast.
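The Monte Carlo step can be sketched generically: draw kinetic parameters from assumed distributions, evaluate a yield model per draw, and summarize the spread and input-output correlations. A toy sketch (the distributions and response below are placeholders, not the fitted model):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Placeholder parameter distributions (illustrative only; not the fitted values)
mu_max = rng.normal(0.40, 0.05, n)  # 1/h, maximum specific growth rate
y_ps = rng.normal(0.46, 0.02, n)    # g ethanol / g glucose, near the paper's 0.464

# Placeholder response: realized yield penalized when growth is slow
yield_ps = y_ps * np.clip(mu_max / 0.40, 0.8, 1.1)

lo, hi = np.percentile(yield_ps, [2.5, 97.5])
print(f"Y_P/S 95% interval: {lo:.3f}-{hi:.3f} g/g")

# Crude sensitivity ranking: correlation of each input with the output
for name, p in [("mu_max", mu_max), ("y_ps", y_ps)]:
    print(f"corr({name}, yield) = {np.corrcoef(p, yield_ps)[0, 1]:.2f}")
```

The reported 0.423-0.501 g/g band is exactly this kind of percentile summary, and a correlation- or regression-based ranking of the sampled inputs is one common way to conclude, as here, that growth-related parameters dominate the output uncertainty.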
Adherence to infection control guidelines in surgery on MRSA-positive patients: A cost analysis.
Saegeman, V; Schuermans, A
2016-09-01
In surgical units, similar to other healthcare departments, guidelines are used to curb transmission of methicillin-resistant Staphylococcus aureus (MRSA). The aim of this study was to calculate the extra costs for material and extra working hours required for compliance with MRSA infection control guidelines in the operating rooms of a University Hospital. The study was based on observations of surgeries on MRSA-positive patients. The average cost per surgery was calculated utilizing local information on unit costs. The robustness of the calculations was evaluated with a sensitivity analysis. The total extra costs of adherence to MRSA infection control guidelines averaged € 340.46 per surgical procedure (range € 207.76-€ 473.15). A sensitivity analysis based on a standardized operating room hourly rate reached a cost of € 366.22. The extra costs of adherence to infection control guidelines are considerable. To reduce costs, the logistical planning of surgeries could be improved by, for instance, using a dedicated operating room.
NASA Astrophysics Data System (ADS)
Tan, C.; Fang, W.
2018-04-01
Forest disturbance induced by tropical cyclones often has significant and profound effects on the structure and function of forest ecosystems. Detection and analysis of post-disaster forest disturbance based on remote sensing technology have been widely applied. At present, further quantitative analysis relating the magnitude of forest disturbance to typhoon intensity is needed. In this study, taking the case of super typhoon Rammasun (201409), we analysed the sensitivity of four commonly used remote sensing indices and explored the relationship between each remote sensing index and the corresponding wind speeds, based on pre- and post-event Landsat-8 OLI (Operational Land Imager) images and a parameterized wind field model. The results showed that the NBR (normalized burn ratio) is the most sensitive index for the detection of forest disturbance induced by Typhoon Rammasun and that the variation of NBR has a significant linear relationship with the simulated 3-second gust wind speed.
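The normalized burn ratio contrasts near-infrared and shortwave-infrared reflectance, and disturbance is commonly measured as the pre/post difference (dNBR). A minimal sketch assuming Landsat-8 OLI band 5 (NIR) and band 7 (SWIR2) surface-reflectance arrays:

```python
import numpy as np

def nbr(nir, swir2, eps=1e-9):
    """Normalized burn ratio from NIR (OLI band 5) and SWIR2 (OLI band 7) reflectance."""
    nir = np.asarray(nir, dtype=float)
    swir2 = np.asarray(swir2, dtype=float)
    return (nir - swir2) / (nir + swir2 + eps)

def dnbr(nir_pre, swir2_pre, nir_post, swir2_post):
    """dNBR = pre-event NBR - post-event NBR; larger values = stronger disturbance."""
    return nbr(nir_pre, swir2_pre) - nbr(nir_post, swir2_post)

# Toy 2x2 reflectance values
print(dnbr(nir_pre=[[0.35, 0.33], [0.36, 0.34]], swir2_pre=[[0.10, 0.11], [0.09, 0.10]],
           nir_post=[[0.25, 0.24], [0.26, 0.25]], swir2_post=[[0.18, 0.19], [0.17, 0.18]]))
```

Regressing per-pixel dNBR against the wind-field model's simulated gust speeds is the kind of index-versus-intensity relationship the study quantifies.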
Analyzing reflective narratives to assess the ethical reasoning of pediatric residents.
Moon, Margaret; Taylor, Holly A; McDonald, Erin L; Hughes, Mark T; Beach, Mary Catherine; Carrese, Joseph A
2013-01-01
A limiting factor in ethics education in medical training has been the difficulty of assessing competence in ethics. This study was conducted to test the concept that content analysis of pediatric residents' personal reflections about ethics experiences can identify changes in ethical sensitivity and reasoning over time. Analysis of written narratives focused on two of our ethics curriculum's goals: 1) to raise sensitivity to ethical issues in everyday clinical practice and 2) to enhance critical reflection on personal and professional values as they affect patient care. Content analysis of written reflections was guided by a tool developed to identify and assess the level of ethical reasoning in eight domains determined to be important aspects of ethical competence. Based on the assessment of narratives written at two times (12 to 16 months apart) during their training, residents showed significant progress in two specific domains: use of professional values and use of personal values. Residents did not show decline in ethical reasoning in any domain. This study demonstrates that content analysis of personal narratives may provide a useful method for assessing developing ethical sensitivity and reasoning.
Tagliafico, Alberto Stefano; Bignotti, Bianca; Rossi, Federica; Signori, Alessio; Sormani, Maria Pia; Valdora, Francesca; Calabrese, Massimo; Houssami, Nehmat
2016-08-01
To estimate the sensitivity and specificity of contrast-enhanced spectral mammography (CESM) for breast cancer diagnosis. Systematic review and meta-analysis of the accuracy of CESM in finding breast cancer in highly selected women. We estimated summary receiver operating characteristic curves, sensitivity and specificity, and assessed study quality with QUADAS-2. Six hundred four studies were retrieved; 8 of these, reporting on 920 patients with 994 lesions, were eligible for inclusion. Estimated sensitivity from all studies was 0.98 (95% CI: 0.96-1.00). Specificity was estimated from six studies reporting raw data: 0.58 (95% CI: 0.38-0.77). The majority of studies were scored as at high risk of bias due to the very selected populations. CESM has a high sensitivity but very low specificity. The source studies were based on highly selected case series and prone to selection bias. High-quality studies are required to assess the accuracy of CESM in unselected cases.
Xiong, Yi-Quan; Ma, Shu-Juan; Zhou, Jun-Hua; Zhong, Xue-Shan; Chen, Qing
2016-06-01
Barrett's esophagus (BE) is considered the most important risk factor for the development of esophageal adenocarcinoma. Confocal laser endomicroscopy (CLE) is a recently developed technique used to diagnose neoplasia in BE. This meta-analysis was performed to assess the accuracy of CLE for the diagnosis of neoplasia in BE. We searched EMBASE, PubMed, Cochrane Library, and Web of Science to identify relevant studies among all articles published up to June 27, 2015 in English. The quality of included studies was assessed using QUADAS-2. Per-patient (PP) and per-lesion pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio with 95% confidence intervals (CIs) were calculated. In total, 14 studies were included in the final analysis, covering 789 patients with 4047 lesions. Seven studies were included in the per-patient analysis. Pooled sensitivity and specificity were 89% (95% CI: 0.82-0.94) and 83% (95% CI: 0.78-0.86), respectively. Ten studies were included in the per-lesion analysis. Compared with the PP analysis, the corresponding pooled sensitivity declined to 77% (95% CI: 0.73-0.81) and specificity increased to 89% (95% CI: 0.87-0.90). Subgroup analysis showed that probe-based CLE (pCLE) was superior to endoscope-based CLE (eCLE) in pooled specificity [91.4% (95% CI: 89.7-92.9) vs 86.1% (95% CI: 84.3-87.8)] and in the AUC of the sROC (0.885 vs 0.762). Confocal laser endomicroscopy is a valid method to accurately differentiate neoplasms from non-neoplasms in BE. It can be applied to BE surveillance and the early diagnosis of esophageal adenocarcinoma.
A Pro-active Real-time Forecasting and Decision Support System for Daily Management of Marine Works
NASA Astrophysics Data System (ADS)
Bollen, Mark; Leyssen, Gert; Smets, Steven; De Wachter, Tom
2016-04-01
Marine works involving turbidity-generating activities (e.g. dredging, dredge spoil placement) can generate environmental stress in and around a project area in the form of sediment plumes causing light reduction and sedimentation. If these works are situated near sensitive habitats such as sea-grass beds and coral reefs, or near sensitive human activities such as aquaculture farms and water intakes, or if contaminants are present in the water or soil, environmental scrutiny is advised. Environmental regulations can impose limitations on these activities in the form of turbidity thresholds, spill budgets and contaminant levels. Breaching environmental regulations can result in increased monitoring, adaptation of the work planning and production rates, and ultimately in a (temporary) stop of activities, all of which entail time and cost impacts for a contractor and/or client. Sediment plume behaviour is governed by the dredging process, soil properties and ambient conditions (currents, water depth) and can be modelled. Usually this is done during the preparatory EIA phase of a project, for estimation of environmental impact based on climatic scenarios. An operational forecasting tool is developed to adapt marine work schedules to real-time circumstances and thus avoid exceedance of critical threshold levels at sensitive areas. The forecasting system is based on a Python-based workflow manager with a MySQL database and a Django frontend web tool for user interaction and visualisation of the model results. The core consists of a numerical hydrodynamic model with a sediment transport module (Mike21 from DHI). This model is driven by space- and time-varying wind fields and wave boundary conditions, and by turbidity inputs (suspended sediment source terms) based on marine works production rates and soil properties. The resulting threshold analysis allows the operator to identify potential impact at the sensitive areas and instigate an adaptation of the marine work schedule if needed. In order to use this toolbox in real-time situations and facilitate forecasting of impacts of planned dredge works, the following operational online functionalities are implemented: • Automated fetching and preparation of the input data, including 7-day-forecast wind and wave fields and real-time measurements, and user-defined turbidity inputs based on scheduled marine works. • Automated generation of forecasts and running of user-configurable scenarios at the same time in parallel. • Export and conversion of the model results, time series and maps, into a standardized format (netcdf). • Automatic analysis and processing of model results, including the calculation of indicator turbidity values and the exceedance analysis of threshold levels at the different sensitive areas (see the sketch after this abstract). Data assimilation with the real-time on-site turbidity measurements is implemented in this threshold analysis. • Pre-programmed generation of animated sediment plumes, specific charts and PDF reports to allow a rapid interpretation of the model results by the operators, facilitating decision making in operational planning. The performed marine works, resulting from the marine work schedule proposed by the forecasting system, are evaluated by a threshold analysis on the validated turbidity measurements at the sensitive sites. This feedback loop allows a check of the system in order to evaluate forecast and model uncertainties.
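The exceedance analysis referenced in the fourth bullet reduces to flagging windows where a forecast turbidity series stays above a limit for longer than an allowed duration. A minimal sketch of such a check (hypothetical series and thresholds, not the system's actual rules):

```python
import numpy as np

def exceedance_windows(series, threshold, min_steps):
    """Return (start, end) index pairs where `series` exceeds `threshold`
    for at least `min_steps` consecutive time steps."""
    above = np.asarray(series) > threshold
    windows, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_steps:
                windows.append((start, i))
            start = None
    if start is not None and len(above) - start >= min_steps:
        windows.append((start, len(above)))
    return windows

# Hypothetical hourly turbidity forecast (NTU) at a sensitive site
turbidity = [12, 14, 30, 42, 45, 41, 18, 15, 33, 35, 36, 20]
print(exceedance_windows(turbidity, threshold=25, min_steps=3))  # [(2, 6), (8, 11)]
```

In an operational setting, each flagged window would trigger the schedule-adaptation workflow described above rather than a simple printout.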
ANSYS-based birefringence property analysis of side-hole fiber induced by pressure and temperature
NASA Astrophysics Data System (ADS)
Zhou, Xinbang; Gong, Zhenfeng
2018-03-01
In this paper, we theoretically investigate the influences of pressure and temperature on the birefringence properties of side-hole fibers with different shapes of holes, using the finite element analysis method. The physical mechanism of the birefringence of the side-hole fiber is discussed in the presence of different external pressures and temperatures. The strain field distributions and birefringence values of circular-core, rectangular-core, and triangular-core side-hole fibers are presented. Our analysis shows that the triangular-core side-hole fiber has low temperature sensitivity, which weakens the cross-sensitivity between temperature and strain. Additionally, an optimized structural design of the side-hole fiber is presented which can be used in sensing applications.
Huang, Jiacong; Gao, Junfeng; Yan, Renhua
2016-08-15
Phosphorus (P) export from lowland polders has caused severe water pollution. Numerical models are an important resource that helps water managers control P export. This study coupled three models, i.e., the Phosphorus Dynamic model for Polders (PDP), the Integrated Catchments model of Phosphorus dynamics (INCA-P) and the Universal Soil Loss Equation (USLE), to describe the P dynamics in polders. Based on the coupled models and a dataset collected from Polder Jian in China, a sensitivity analysis was carried out to analyze the cause-effect relationships between environmental factors and P export from Polder Jian. The sensitivity analysis results showed that P export from Polder Jian was strongly affected by air temperature, precipitation and fertilization. Proper fertilization management should be a strategic priority for reducing P export from Polder Jian. This study demonstrated the success of the model coupling and its application in investigating potential strategies to support pollution control in polder systems.
Sensitivity and Nonlinearity of Thermoacoustic Oscillations
NASA Astrophysics Data System (ADS)
Juniper, Matthew P.; Sujith, R. I.
2018-01-01
Nine decades of rocket engine and gas turbine development have shown that thermoacoustic oscillations are difficult to predict but can usually be eliminated with relatively small ad hoc design changes. These changes can, however, be ruinously expensive to devise. This review explains why linear and nonlinear thermoacoustic behavior is so sensitive to parameters such as operating point, fuel composition, and injector geometry. It shows how nonperiodic behavior arises in experiments and simulations and discusses how fluctuations in thermoacoustic systems with turbulent reacting flow, which are usually filtered or averaged out as noise, can reveal useful information. Finally, it proposes tools to exploit this sensitivity in the future: adjoint-based sensitivity analysis to optimize passive control designs and complex systems theory to warn of impending thermoacoustic oscillations and to identify the most sensitive elements of a thermoacoustic system.
NASA Astrophysics Data System (ADS)
Chen, Yuzhen; Xie, Fugui; Liu, Xinjun; Zhou, Yanhua
2014-07-01
Parallel robots with SCARA (selective compliance assembly robot arm) motions are widely utilized in the field of high-speed pick-and-place manipulation. Error modeling for these robots generally simplifies the parallelogram structures included in the robots as single links. As the established error model fails to reflect the error features of the parallelogram structures, the effectiveness of accuracy design and kinematic calibration based on the error model is undermined. An error modeling methodology is proposed to establish an error model of parallel robots with parallelogram structures. The error model can embody the geometric errors of all joints, including the joints of the parallelogram structures, and thus captures more exhaustively the factors that reduce the accuracy of the robot. Based on the error model and some sensitivity indices defined in the sense of statistics, a sensitivity analysis is carried out. Accordingly, some atlases are depicted to express each geometric error's influence on the moving platform's pose errors. From these atlases, the geometric errors that have a greater impact on the accuracy of the moving platform are identified, and some sensitive areas where the pose errors of the moving platform are extremely sensitive to the geometric errors are also figured out. By taking into account error factors which are generally neglected in existing modeling methods, the proposed modeling method can thoroughly disclose the process of error transmission and enhance the efficacy of accuracy design and calibration.
Solar energy system economic evaluation for IBM System 3, Glendo, Wyoming
NASA Technical Reports Server (NTRS)
1980-01-01
This analysis was based on the technical and economic models in the f-chart design procedure, with inputs based on the characteristics of the installation. The evaluation reports the following economic parameters: present worth of system cost over a projected twenty-year life, life-cycle savings, year of positive savings, and year of payback for the optimized solar energy system at each of the analysis sites. The sensitivity of the economic evaluation to uncertainties in constituent system and economic variables was also investigated.
Bernal-Martinez, L.; Castelli, M. V.; Rodriguez-Tudela, J. L.; Cuenca-Estrella, M.
2014-01-01
A retrospective analysis of real-time PCR (RT-PCR) results for 151 biopsy samples obtained from 132 patients with proven invasive fungal diseases was performed. PCR-based techniques proved to be fast and sensitive and enabled definitive diagnosis in all cases studied, with detection of a total of 28 fungal species. PMID:24574295
NASA Technical Reports Server (NTRS)
Johnston, John D.; Parrish, Keith; Howard, Joseph M.; Mosier, Gary E.; McGinnis, Mark; Bluth, Marcel; Kim, Kevin; Ha, Hong Q.
2004-01-01
This is a continuation of a series of papers on modeling activities for JWST. The structural-thermal-optical ("STOP") analysis process is used to predict the effect of thermal distortion on optical performance. The benchmark STOP analysis for JWST assesses the effect of an observatory slew on wavefront error (WFE). The paper begins with an overview of multi-disciplinary engineering analysis, or integrated modeling, which is a critical element of the JWST mission. The STOP analysis process is then described. This process consists of the following steps: thermal analysis, structural analysis, and optical analysis. Temperatures predicted using geometric and thermal math models are mapped to the structural finite element model in order to predict thermally-induced deformations. Motions and deformations at optical surfaces are input to optical models, and optical performance is predicted using either an optical ray trace or WFE estimation techniques based on prior ray traces or first-order optics. Following the discussion of the analysis process, results are presented based on models representing the design at the time of the System Requirements Review. In addition to baseline performance predictions, sensitivity studies are performed to assess modeling uncertainties. Of particular interest is the sensitivity of optical performance to uncertainties in temperature predictions and variations in metal properties. The paper concludes with a discussion of modeling uncertainty as it pertains to STOP analysis.
Bramley, Kyle; Pisani, Margaret A.; Murphy, Terrence E.; Araujo, Katy; Homer, Robert; Puchalski, Jonathan
2016-01-01
Background: EBUS-guided transbronchial needle aspiration (TBNA) is important in the evaluation of thoracic lymphadenopathy. Reliably providing excellent diagnostic yield for malignancy, its diagnosis of sarcoidosis is inconsistent. Furthermore, when larger "core" biopsy samples of malignant tissue are required, TBNA may not suffice. The primary objective of this study was to determine if the sequential use of TBNA and a novel technique called cautery-assisted transbronchial forceps biopsies (ca-TBFB) was safe. Secondary outcomes included sensitivity and successful acquisition of tissue. Methods: Fifty unselected patients undergoing convex probe EBUS were prospectively enrolled. Under EBUS guidance, all lymph nodes ≥ 1 cm were sequentially biopsied using TBNA and ca-TBFB. Safety and sensitivity were assessed at the nodal level for 111 nodes. Results of each technique were also reported on a per-patient basis. Results: There were no significant adverse events. In nodes determined to be malignant, TBNA provided higher sensitivity (100%) than ca-TBFB (78%). However, among nodes with granulomatous inflammation, ca-TBFB exhibited higher sensitivity (90%) than TBNA (33%). For analysis based on patients rather than nodes, 6 of the 31 patients with malignancy would have been missed or understaged if the diagnosis was based on samples obtained by ca-TBFB. On the other hand, 3 of 8 patients with sarcoidosis would have been missed if analysis was based only on TBNA samples. In some cases only ca-TBFB acquired sufficient tissue for the core samples needed in clinical trials of malignancy. Conclusions: The sequential use of TBNA and ca-TBFB appears to be safe. The larger samples obtained from ca-TBFB increased its sensitivity to detect granulomatous disease and provided specimens for clinical trials of malignancy when needle biopsies were insufficient. For thoracic surgeons and advanced bronchoscopists, we advocate ca-TBFB as an alternative to TBNA in select clinical scenarios. PMID:26912301
Bramley, Kyle; Pisani, Margaret A; Murphy, Terrence E; Araujo, Katy L; Homer, Robert J; Puchalski, Jonathan T
2016-05-01
Endobronchial ultrasound (EBUS)-guided transbronchial needle aspiration (TBNA) is important in the evaluation of thoracic lymphadenopathy. Reliably providing excellent diagnostic yield for malignancy, its diagnosis of sarcoidosis is inconsistent. Furthermore, TBNA may not suffice when larger "core biopsy" samples of malignant tissue are required. The primary objective of this study was to determine if the sequential use of TBNA and a novel technique called cautery-assisted transbronchial forceps biopsy (ca-TBFB) was safe. Secondary outcomes included sensitivity and successful acquisition of tissue. The study prospectively enrolled 50 unselected patients undergoing convex-probe EBUS. All lymph nodes exceeding 1 cm were sequentially biopsied under EBUS guidance using TBNA and ca-TBFB. Safety and sensitivity were assessed at the nodal level for 111 nodes. Results of each technique were also reported for each patient. There were no significant adverse events. In nodes determined to be malignant, TBNA provided higher sensitivity (100%) than ca-TBFB (78%). However, among nodes with granulomatous inflammation, ca-TBFB exhibited higher sensitivity (90%) than TBNA (33%). On the one hand, for analysis based on patients rather than nodes, 6 of the 31 patients with malignancy would have been missed or understaged if the diagnosis were based on samples obtained by ca-TBFB. On the other hand, 3 of 8 patients with sarcoidosis would have been missed if analysis were based only on TBNA samples. In some patients, only ca-TBFB acquired sufficient tissue for the core samples needed in clinical trials of malignancy. The sequential use of TBNA and ca-TBFB appears to be safe. The larger samples obtained from ca-TBFB increased its sensitivity to detect granulomatous disease and provided adequate specimens for clinical trials of malignancy when specimens from needle biopsies were insufficient. For thoracic surgeons and advanced bronchoscopists, we advocate ca-TBFB as an alternative to TBNA in select clinical scenarios.
Ultrasonography for endoleak detection after endoluminal abdominal aortic aneurysm repair.
Abraha, Iosief; Luchetta, Maria Laura; De Florio, Rita; Cozzolino, Francesco; Casazza, Giovanni; Duca, Piergiorgio; Parente, Basso; Orso, Massimiliano; Germani, Antonella; Eusebi, Paolo; Montedori, Alessandro
2017-06-09
People with abdominal aortic aneurysm who receive endovascular aneurysm repair (EVAR) need lifetime surveillance to detect potential endoleaks. Endoleak is defined as persistent blood flow within the aneurysm sac following EVAR. Computed tomography (CT) angiography is considered the reference standard for endoleak surveillance. Colour duplex ultrasound (CDUS) and contrast-enhanced CDUS (CE-CDUS) are less invasive but considered less accurate than CT. To determine the diagnostic accuracy of colour duplex ultrasound (CDUS) and contrast-enhanced colour duplex ultrasound (CE-CDUS) in terms of sensitivity and specificity for endoleak detection after endoluminal abdominal aortic aneurysm repair (EVAR). We searched MEDLINE, Embase, LILACS, ISI Conference Proceedings, Zetoc, and trial registries in June 2016 without language restrictions and without use of filters to maximize sensitivity. We included any cross-sectional diagnostic study evaluating participants who received EVAR by both ultrasound (with or without contrast) and CT scan assessed at regular intervals. Two pairs of review authors independently extracted data and assessed the quality of included studies using the QUADAS-1 tool. A third review author resolved discrepancies. The unit of analysis was the number of participants for the primary analysis and the number of scans performed for the secondary analysis. We carried out a meta-analysis to estimate sensitivity and specificity of CDUS or CE-CDUS using a bivariate model. We analysed each index test separately. As potential sources of heterogeneity, we explored year of publication, characteristics of included participants (age and gender), direction of the study (retrospective, prospective), country of origin, number of CDUS operators, and ultrasound manufacturer. We identified 42 primary studies with 4220 participants. Twenty studies provided accuracy data based on the number of individual participants (seven of which provided data with and without the use of contrast). Sixteen of these studies evaluated the accuracy of CDUS. These studies were generally of moderate to low quality: only three studies fulfilled all the QUADAS items; in six (40%) of the studies, the delay between the tests was unclear or longer than four weeks; in eight (50%), the blinding of either the index test or the reference standard was not clearly reported or was not performed; and in two studies (12%), the interpretation of the reference standard was not clearly reported. Eleven studies evaluated the accuracy of CE-CDUS. These studies were of better quality than the CDUS studies: five (45%) fulfilled all the QUADAS items; four (36%) did not clearly report the blinded interpretation of the reference standard; and two (18%) did not clearly report the delay between the two tests. Based on the bivariate model, the summary estimates for CDUS were 0.82 (95% confidence interval (CI) 0.66 to 0.91) for sensitivity and 0.93 (95% CI 0.87 to 0.96) for specificity, whereas for CE-CDUS the estimates were 0.94 (95% CI 0.85 to 0.98) for sensitivity and 0.95 (95% CI 0.90 to 0.98) for specificity. Regression analysis showed that CE-CDUS was superior to CDUS in terms of sensitivity (LR Chi² = 5.08, 1 degree of freedom (df); P = 0.0242 for model improvement). Seven studies provided estimates before and after administration of contrast. Sensitivity before contrast was 0.67 (95% CI 0.47 to 0.83) and after contrast was 0.97 (95% CI 0.92 to 0.99).
The improvement in sensitivity with contrast use was statistically significant (LR Chi² = 13.47, 1 df; P = 0.0002 for model improvement). Regression testing showed evidence of statistically significant effect bias related to year of publication and study quality in the individual-participant-based CDUS studies. Sensitivity estimates were higher in the studies published before 2006 than the estimates obtained from studies published in 2006 or later (P < 0.001), and studies judged as of low/unclear quality provided higher sensitivity estimates. When regression testing was applied to the individual-participant-based CE-CDUS studies, none of the items, namely direction of the study design, quality, and age, was identified as a source of heterogeneity. Twenty-two studies provided accuracy data based on the number of scans performed (of which four provided data with and without the use of contrast). Analysis of the studies that provided scan-based data showed similar results. Summary estimates for CDUS (18 studies) were 0.72 (95% CI 0.55 to 0.85) for sensitivity and 0.95 (95% CI 0.90 to 0.96) for specificity, whereas summary estimates for CE-CDUS (eight studies) were 0.91 (95% CI 0.68 to 0.98) for sensitivity and 0.89 (95% CI 0.71 to 0.96) for specificity. This review demonstrates that both ultrasound modalities (with or without contrast) showed high specificity. For ruling in endoleaks, CE-CDUS appears superior to CDUS. In an endoleak surveillance programme, CE-CDUS can be introduced as a routine diagnostic modality, followed by CT scan only when the ultrasound is positive, to establish the type of endoleak and the subsequent therapeutic management.
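For orientation, pooling per-study sensitivities is often illustrated on the logit scale; the sketch below is a deliberately simplified fixed-effect univariate version (the review itself used a bivariate random-effects model, which jointly models sensitivity and specificity), with toy counts:

```python
import numpy as np

def pooled_sensitivity(tp, fn):
    """Inverse-variance pooling of per-study sensitivities on the logit scale.
    Simplified fixed-effect, univariate version with continuity correction."""
    tp = np.asarray(tp, float)
    fn = np.asarray(fn, float)
    sens = (tp + 0.5) / (tp + fn + 1.0)
    logit = np.log(sens / (1 - sens))
    var = 1.0 / (tp + 0.5) + 1.0 / (fn + 0.5)  # approximate logit variance
    w = 1.0 / var
    pooled = (w * logit).sum() / w.sum()
    return 1.0 / (1.0 + np.exp(-pooled))       # back-transform

# Toy per-study counts (true positives, false negatives)
print(f"pooled sensitivity = {pooled_sensitivity([18, 25, 9], [4, 3, 2]):.2f}")
```

The bivariate model improves on this by accounting for between-study heterogeneity and the correlation between sensitivity and specificity across thresholds, which is why it is the preferred approach in diagnostic test accuracy reviews such as this one.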
Howard Evan Canfield; Vicente L. Lopes
2000-01-01
A process-based simulation model for evaporation, soil water and streamflow (BROOK903) was used to estimate soil moisture change on a semiarid rangeland watershed in southeastern Arizona. A sensitivity analysis was performed to select parameters affecting evapotranspiration (ET) and soil moisture for calibration. Automatic parameter calibration was performed using a procedure based on a...
Advancing Clinical Proteomics via Analysis Based on Biological Complexes: A Tale of Five Paradigms.
Goh, Wilson Wen Bin; Wong, Limsoon
2016-09-02
Despite advances in proteomic technologies, idiosyncratic data issues (for example, incomplete coverage and inconsistency, resulting in large data holes) persist. Moreover, because of naïve reliance on statistical testing and its accompanying p values, differential protein signatures identified from such proteomics data have little diagnostic power. Thus, deploying conventional analytics on proteomics data is insufficient for identifying novel drug targets or precise yet sensitive biomarkers. Complex-based analysis is a new analytical approach that has the potential to resolve these issues but requires formalization. We categorize complex-based analysis into five method classes or paradigms and propose an even-handed yet comprehensive evaluation rubric based on both simulated and real data. The first four paradigms are well represented in the literature. The fifth and newest paradigm, the network-paired (NP) paradigm, represented by a method called Extremely Small SubNET (ESSNET), dominates in precision-recall and reproducibility, maintains strong performance at small sample sizes, and sensitively detects low-abundance complexes. In contrast, the commonly used over-representation analysis (ORA) and direct-group (DG) test paradigms maintain good overall precision but have severe reproducibility issues. The other two paradigms considered here are the hit-rate and rank-based network analysis paradigms; both of these have good precision-recall and reproducibility, but they do not consider low-abundance complexes. Therefore, given its strong performance, NP/ESSNET may prove to be a useful approach for improving the analytical resolution of proteomics data. Additionally, given its stability, it may also be a powerful new approach toward functional enrichment tests, much like its ORA and DG counterparts.
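Of the five paradigms, the over-representation analysis baseline is the simplest to make concrete: a hypergeometric tail probability asks whether a complex's overlap with a hit list exceeds chance. A minimal sketch with toy numbers (not the ESSNET method):

```python
from scipy.stats import hypergeom

def ora_pvalue(n_universe, n_hits, n_complex, n_overlap):
    """P(overlap >= n_overlap) when drawing n_hits proteins from a universe of
    n_universe proteins, n_complex of which belong to the complex
    (hypergeometric upper tail)."""
    return hypergeom.sf(n_overlap - 1, n_universe, n_complex, n_hits)

# Toy example: 5000 quantified proteins, 200 differential hits,
# a 30-member complex with 8 members among the hits
print(f"ORA p = {ora_pvalue(5000, 200, 30, 8):.2e}")
```

The reproducibility problems the paper attributes to ORA stem largely from its dependence on a hard hit-list cutoff: small changes in which proteins clear the threshold can flip which complexes appear enriched.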
NASA Astrophysics Data System (ADS)
Graham, Eleanor; Cuore Collaboration
2017-09-01
The CUORE experiment is a large-scale bolometric detector seeking to observe the never-before-seen process of neutrinoless double beta decay. Predictions for CUORE's sensitivity to neutrinoless double beta decay allow for an understanding of the half-life ranges that the detector can probe and an evaluation of the relative importance of different detector parameters. Currently, CUORE uses a Bayesian analysis based on BAT, which uses Metropolis-Hastings Markov Chain Monte Carlo, for its sensitivity studies. My work evaluates the viability and potential improvements of switching the Bayesian analysis to Hamiltonian Monte Carlo, realized through the program Stan and its Morpho interface. I demonstrate that the BAT study can be successfully recreated in Stan and perform a detailed comparison between the results and computation times of the two methods.
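To make the comparison concrete, the random-walk Metropolis-Hastings step underlying BAT-style sampling can be written in a few lines; Hamiltonian Monte Carlo, as in Stan, replaces the random-walk proposal with gradient-guided trajectories that typically cut autocorrelation. A generic toy sampler (not the CUORE analysis code):

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_samples, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings for a 1-D log posterior."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    out = np.empty(n_samples)
    for i in range(n_samples):
        prop = x + rng.normal(0.0, step)          # symmetric random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject
            x, lp = prop, lp_prop
        out[i] = x
    return out

# Toy posterior: standard normal
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_samples=50_000)
print(samples.mean(), samples.std())  # should be ~0 and ~1
```

The practical question the thesis work addresses is whether the lower per-effective-sample autocorrelation of HMC outweighs its per-step gradient cost for the CUORE likelihood, both in accuracy of the recovered sensitivity and in wall-clock time.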