DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Hong; Abhyankar, Shrirang; Constantinescu, Emil
2017-01-24
Sensitivity analysis is an important tool for describing power system dynamic behavior in response to parameter variations. It is a central component in preventive and corrective control applications. The existing approaches for sensitivity calculations, namely finite-difference and forward sensitivity analysis, require a computational effort that increases linearly with the number of sensitivity parameters. In this paper, we investigate, implement, and test a discrete adjoint sensitivity approach whose computational effort is effectively independent of the number of sensitivity parameters. The proposed approach is highly efficient for calculating sensitivities of larger systems and is consistent, within machine precision, with the function whose sensitivity we are seeking. This is an essential feature for use in optimization applications. Moreover, our approach includes a consistent treatment of systems with switching, such as dc exciters, by deriving and implementing the adjoint jump conditions that arise from state-dependent and time-dependent switchings. The accuracy and the computational efficiency of the proposed approach are demonstrated in comparison with the forward sensitivity analysis approach. This paper focuses primarily on power system dynamics, but the approach is general and can be applied to hybrid dynamical systems in a broader range of fields.
Boundary formulations for sensitivity analysis without matrix derivatives
NASA Technical Reports Server (NTRS)
Kane, J. H.; Guru Prasad, K.
1993-01-01
A new hybrid approach to continuum structural shape sensitivity analysis employing boundary element analysis (BEA) is presented. The approach uses iterative reanalysis to obviate the need to factor perturbed matrices in the determination of surface displacement and traction sensitivities via a univariate perturbation/finite difference (UPFD) step. The UPFD approach makes it possible to immediately reuse existing subroutines for computation of BEA matrix coefficients in the design sensitivity analysis process. The reanalysis technique economically computes the response of univariately perturbed models without factoring the perturbed matrices. The approach provides substantial computational economy without the burden of a large-scale reprogramming effort.
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin V.
2015-05-01
Sensitivity analysis is an essential paradigm in Earth and Environmental Systems modeling. However, the term "sensitivity" has a clear definition, based in partial derivatives, only when specified locally around a particular point (e.g., optimal solution) in the problem space. Accordingly, no unique definition exists for "global sensitivity" across the problem space, when considering one or more model responses to different factors such as model parameters or forcings. A variety of approaches have been proposed for global sensitivity analysis, based on different philosophies and theories, and each of these formally characterizes a different "intuitive" understanding of sensitivity. These approaches focus on different properties of the model response at a fundamental level and may therefore lead to different (even conflicting) conclusions about the underlying sensitivities. Here we revisit the theoretical basis for sensitivity analysis, summarize and critically evaluate existing approaches in the literature, and demonstrate their flaws and shortcomings through conceptual examples. We also demonstrate the difficulty involved in interpreting "global" interaction effects, which may undermine the value of existing interpretive approaches. With this background, we identify several important properties of response surfaces that are associated with the understanding and interpretation of sensitivities in the context of Earth and Environmental System models. Finally, we highlight the need for a new, comprehensive framework for sensitivity analysis that effectively characterizes all of the important sensitivity-related properties of model response surfaces.
NASA Astrophysics Data System (ADS)
Razavi, S.; Gupta, H. V.
2014-12-01
Sensitivity analysis (SA) is an important paradigm in the context of Earth System model development and application, and provides a powerful tool that serves several essential functions in modelling practice, including 1) Uncertainty Apportionment - attribution of total uncertainty to different uncertainty sources, 2) Assessment of Similarity - diagnostic testing and evaluation of similarities between the functioning of the model and the real system, 3) Factor and Model Reduction - identification of non-influential factors and/or insensitive components of model structure, and 4) Factor Interdependence - investigation of the nature and strength of interactions between the factors, and the degree to which factors intensify, cancel, or compensate for the effects of each other. A variety of sensitivity analysis approaches have been proposed, each of which formally characterizes a different "intuitive" understanding of what is meant by the "sensitivity" of one or more model responses to its dependent factors (such as model parameters or forcings). These approaches are based on different philosophies and theoretical definitions of sensitivity, and range from simple local derivatives and one-factor-at-a-time procedures to rigorous variance-based (Sobol-type) approaches. In general, each approach focuses on, and identifies, different features and properties of the model response and may therefore lead to different (even conflicting) conclusions about the underlying sensitivity. This presentation revisits the theoretical basis for sensitivity analysis, and critically evaluates existing approaches so as to demonstrate their flaws and shortcomings. With this background, we discuss several important properties of response surfaces that are associated with the understanding and interpretation of sensitivity. Finally, a new approach towards global sensitivity assessment is developed that is consistent with important properties of Earth System model response surfaces.
2D Decision-Making for Multi-Criteria Design Optimization
2006-05-01
participating in the same subproblem, information on the tradeoffs between different subproblems is obtained from a sensitivity analysis and used for ... accomplished by some other mechanism. For the coordination between subproblems, we use the lexicographical ordering approach for multicriteria ... Sensitivity analysis: Our approach uses sensitivity results from nonlinear programming (Fiacco, 1983; Luenberger, 2003), for which we first ...
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin
2016-04-01
Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the ''global sensitivity'' of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to 'variogram analysis', that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
Sensitivity Analysis for Coupled Aero-structural Systems
NASA Technical Reports Server (NTRS)
Giunta, Anthony A.
1999-01-01
A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.
Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.
2007-01-01
To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
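As a rough illustration of the idea described above (a minimal sketch, not the NASA tool itself), the snippet below scans an input deck for fields written as "value +/- tolerance", draws one random realization of each field, and accumulates Monte Carlo statistics of a simulation output. The field names and run_simulation are hypothetical placeholders standing in for codes such as LAURA, HARA, or FIAT.

```python
import re, random, statistics

# Matches natural-language tolerance fields such as "5.25 +/- 0.01"
TOL = re.compile(r"(-?\d+\.?\d*)\s*\+/-\s*(\d+\.?\d*)")

def blur(text, rng):
    """Replace every 'value +/- tol' field with one random realization."""
    def repl(m):
        val, tol = float(m.group(1)), float(m.group(2))
        return f"{rng.gauss(val, tol / 2.0):.6g}"  # assumption: tolerance spans roughly 2 sigma
    return TOL.sub(repl, text)

def run_simulation(input_text):
    # Hypothetical stand-in for the real solver: parse the blurred deck and
    # return a single scalar output metric.
    values = [float(v) for v in re.findall(r"-?\d+\.?\d*", input_text)]
    return sum(values)

template = "wall_temperature = 5.25 +/- 0.01\nemissivity = 0.85 +/- 0.05\n"
rng = random.Random(0)
outputs = [run_simulation(blur(template, rng)) for _ in range(1000)]
print(statistics.mean(outputs), statistics.stdev(outputs))
```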
Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks
Arampatzis, Georgios; Katsoulakis, Markos A.; Pantazis, Yannis
2015-01-01
Existing sensitivity analysis approaches are unable to efficiently handle stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis model with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and accurately estimates the sensitivities of the remaining, potentially sensitive parameters. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in “sloppy” systems. In particular, the computational acceleration is quantified by the ratio of the total number of parameters to the number of sensitive parameters. PMID:26161544
2010-01-01
Multi-Disciplinary, Multi-Output Sensitivity Analysis (MIMOSA): 3.1 Introduction to Research Thrust 1; 3.3 MIMOSA Approach; 3.3.1 Collaborative Consistency of MIMOSA; 3.3.2 Formulation of MIMOSA
Multiple shooting shadowing for sensitivity analysis of chaotic dynamical systems
NASA Astrophysics Data System (ADS)
Blonigan, Patrick J.; Wang, Qiqi
2018-02-01
Sensitivity analysis methods are important tools for research and design with simulations. Many important simulations exhibit chaotic dynamics, including scale-resolving turbulent fluid flow simulations. Unfortunately, conventional sensitivity analysis methods are unable to compute useful gradient information for long-time-averaged quantities in chaotic dynamical systems. Sensitivity analysis with least squares shadowing (LSS) can compute useful gradient information for a number of chaotic systems, including simulations of chaotic vortex shedding and homogeneous isotropic turbulence. However, this gradient information comes at a very high computational cost. This paper presents multiple shooting shadowing (MSS), a more computationally efficient shadowing approach than the original LSS approach. Through an analysis of the convergence rate of MSS, it is shown that MSS can have lower memory usage and run time than LSS.
NASA Technical Reports Server (NTRS)
Park, Nohpill; Reagan, Shawn; Franks, Greg; Jones, William G.
1999-01-01
This paper discusses analytical approaches to evaluating the performance of spacecraft on-board computing systems, with the ultimate goal of achieving a reliable spacecraft data communications system. A sensitivity analysis of the memory system on ProSEDS (Propulsive Small Expendable Deployer System), as part of its data communication system, will be investigated. General issues and possible approaches to a reliable spacecraft on-board interconnection network and processor array will also be shown. Performance issues of spacecraft on-board computing systems, such as sensitivity, throughput, delay, and reliability, will be introduced and discussed.
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin
2015-04-01
Earth and Environmental Systems (EES) models are essential components of research, development, and decision-making in science and engineering disciplines. With continuous advances in understanding and computing power, such models are becoming more complex with increasingly more factors to be specified (model parameters, forcings, boundary conditions, etc.). To facilitate better understanding of the role and importance of different factors in producing the model responses, the procedure known as 'Sensitivity Analysis' (SA) can be very helpful. Despite the availability of a large body of literature on the development and application of various SA approaches, two issues continue to pose major challenges: (1) Ambiguous Definition of Sensitivity - Different SA methods are based in different philosophies and theoretical definitions of sensitivity, and can result in different, even conflicting, assessments of the underlying sensitivities for a given problem, (2) Computational Cost - The cost of carrying out SA can be large, even excessive, for high-dimensional problems and/or computationally intensive models. In this presentation, we propose a new approach to sensitivity analysis that addresses the dual aspects of 'effectiveness' and 'efficiency'. By effective, we mean achieving an assessment that is both meaningful and clearly reflective of the objective of the analysis (the first challenge above), while by efficiency we mean achieving statistically robust results with minimal computational cost (the second challenge above). Based on this approach, we develop a 'global' sensitivity analysis framework that efficiently generates a newly-defined set of sensitivity indices that characterize a range of important properties of metric 'response surfaces' encountered when performing SA on EES models. Further, we show how this framework embraces, and is consistent with, a spectrum of different concepts regarding 'sensitivity', and that commonly-used SA approaches (e.g., Sobol, Morris, etc.) are actually limiting cases of our approach under specific conditions. Multiple case studies are used to demonstrate the value of the new framework. The results show that the new framework provides a fundamental understanding of the underlying sensitivities for any given problem, while requiring orders of magnitude fewer model runs.
Adjoint sensitivity analysis of plasmonic structures using the FDTD method.
Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H
2014-05-15
We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components at the vicinity of perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.
A new framework for comprehensive, robust, and efficient global sensitivity analysis: 2. Application
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin V.
2016-01-01
Based on the theoretical framework for sensitivity analysis called "Variogram Analysis of Response Surfaces" (VARS), developed in the companion paper, we develop and implement a practical "star-based" sampling strategy (called STAR-VARS) for the application of VARS to real-world problems. We also develop a bootstrap approach to provide confidence level estimates for the VARS sensitivity metrics and to evaluate the reliability of inferred factor rankings. The effectiveness, efficiency, and robustness of STAR-VARS are demonstrated via two real-data hydrological case studies (a 5-parameter conceptual rainfall-runoff model and a 45-parameter land surface scheme hydrology model), and a comparison with the "derivative-based" Morris and "variance-based" Sobol approaches is provided. Our results show that STAR-VARS provides reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being 1-2 orders of magnitude more efficient than the Morris or Sobol approaches.
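As a simplified sketch of the star-based sampling idea described above (our reading, not the authors' implementation), the snippet below draws random star centres in the unit hypercube and samples a one-dimensional cross-section through each centre along every factor at resolution dh; the model would then be evaluated at these points, and pairs of points along each dimension would feed the directional variograms used by VARS.

```python
import numpy as np

def star_sample(n_centres, n_dims, dh, rng):
    """Random star centres plus cross-section points along every dimension."""
    centres = rng.random((n_centres, n_dims))
    grid = np.arange(0.0, 1.0 + 1e-9, dh)      # cross-section locations in [0, 1]
    points = []
    for c in centres:
        for d in range(n_dims):
            for g in grid:
                p = c.copy()
                p[d] = g                        # vary one factor, hold the others at the centre
                points.append(p)
    return centres, np.unique(np.array(points), axis=0)

rng = np.random.default_rng(42)
centres, pts = star_sample(n_centres=10, n_dims=3, dh=0.1, rng=rng)
print(pts.shape)  # the model is evaluated once per point
```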
NASA Astrophysics Data System (ADS)
Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas
2014-03-01
GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster-Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA) implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility is analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty-sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights.
Analysis of the sensitivity properties of a model of vector-borne bubonic plague.
Buzby, Megan; Neckels, David; Antolin, Michael F; Estep, Donald
2008-09-06
Model sensitivity is a key to evaluation of mathematical models in ecology and evolution, especially in complex models with numerous parameters. In this paper, we use some recently developed methods for sensitivity analysis to study the parameter sensitivity of a model of vector-borne bubonic plague in a rodent population proposed by Keeling & Gilligan. The new sensitivity tools are based on a variational analysis involving the adjoint equation. The new approach provides a relatively inexpensive way to obtain derivative information about model output with respect to parameters. We use this approach to determine the sensitivity of a quantity of interest (the force of infection from rats and their fleas to humans) to various model parameters, determine a region over which linearization at a specific parameter reference point is valid, develop a global picture of the output surface, and search for maxima and minima in a given region in the parameter space.
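The adjoint-based derivative idea referenced above can be illustrated on a much simpler system. The sketch below (a generic illustration under assumed values, not the Keeling & Gilligan plague model) computes dJ/dr for a logistic ODE dy/dt = r*y*(1-y) with quantity of interest J = y(T), using one forward solve plus one backward adjoint solve, and checks the result against a central finite difference.

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

r0, y0, T = 0.8, 0.1, 10.0

def f(t, y, r):                      # model right-hand side
    return r * y * (1.0 - y)

# Forward solve with a dense interpolant of y(t)
fwd = solve_ivp(f, (0.0, T), [y0], args=(r0,), dense_output=True, rtol=1e-10, atol=1e-12)
y_of = lambda t: fwd.sol(t)[0]

# Adjoint equation: dlam/dt = -(df/dy) lam, terminal condition lam(T) = dJ/dy(T) = 1
def adj_rhs(t, lam):
    y = y_of(t)
    return -(r0 * (1.0 - 2.0 * y)) * lam

bwd = solve_ivp(adj_rhs, (T, 0.0), [1.0], dense_output=True, rtol=1e-10, atol=1e-12)
lam_of = lambda t: bwd.sol(t)[0]

# Gradient: dJ/dr = integral_0^T lam(t) * (df/dr)(t) dt, with df/dr = y(1-y)
grad_adj, _ = quad(lambda t: lam_of(t) * y_of(t) * (1.0 - y_of(t)), 0.0, T, limit=200)

# Finite-difference check
def J(r):
    s = solve_ivp(f, (0.0, T), [y0], args=(r,), rtol=1e-10, atol=1e-12)
    return s.y[0, -1]

eps = 1e-5
grad_fd = (J(r0 + eps) - J(r0 - eps)) / (2 * eps)
print(grad_adj, grad_fd)             # the two estimates should agree closely
```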
Robust Sensitivity Analysis of Courses of Action Using an Additive Value Model
2008-03-01
According to Clemen, sensitivity analysis answers, "What makes a difference in this decision?" (2001:175). Sensitivity analysis can also indicate ... alternative to change. These models look for the new weighting that causes a specific alternative to rank above all others. Barron and Schmidt first ... (Schmidt, 1988:123). A smaller objective function value indicates greater sensitivity. Wolters and Mareschal propose a similar approach using goal ...
Sensitivity Analysis of Hydraulic Head to Locations of Model Boundaries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Zhiming
2018-01-30
Sensitivity analysis is an important component of many modeling activities in hydrology. Numerous studies have been conducted on calculating various sensitivities. Most of these sensitivity analyses focus on the sensitivity of state variables (e.g., hydraulic head) to parameters representing medium properties, such as hydraulic conductivity, or to prescribed values, such as constant head or flux at boundaries, while few studies address the sensitivity of the state variables to shape parameters or design parameters that control the model domain. Instead, these shape parameters are typically assumed to be known in the model. In this study, based on the flow equation, we derive the equation (and its associated initial and boundary conditions) for the sensitivity of hydraulic head to shape parameters using the continuous sensitivity equation (CSE) approach. These sensitivity equations can be solved numerically in general or analytically in some simplified cases. Finally, the approach is demonstrated through two examples, and the results compare favorably to those from analytical solutions or numerical finite difference methods with perturbed model domains, while the numerical shortcomings of the finite difference method are avoided.
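A toy one-dimensional case (ours, not the paper's CSE derivation) shows what a sensitivity of head to a boundary location looks like: for steady confined flow with fixed heads at x = 0 and x = L, the head profile is linear, so dh/dL is available in closed form and can be checked against a finite difference on a perturbed domain.

```python
# h(x) = h0 + (hL - h0) * x / L  =>  dh/dL = -(hL - h0) * x / L**2
h0, hL, L, x_obs = 10.0, 4.0, 100.0, 30.0    # assumed heads [m], domain length and observation point [m]

def head(x, L):
    return h0 + (hL - h0) * x / L

analytic = -(hL - h0) * x_obs / L**2
dL = 1e-3
finite_diff = (head(x_obs, L + dL) - head(x_obs, L - dL)) / (2 * dL)
print(analytic, finite_diff)                 # both ~ +0.018 m of head per metre of boundary movement
```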
Sensitivity Analysis of Multicriteria Choice to Changes in Intervals of Value Tradeoffs
NASA Astrophysics Data System (ADS)
Podinovski, V. V.
2018-03-01
An approach is proposed for analyzing the sensitivity (stability) of nondominated alternatives to changes in the bounds of the intervals of value tradeoffs, where the alternatives are selected on the basis of interval data on criteria tradeoffs. Computational methods are developed for analyzing the sensitivity of individual nondominated alternatives and of the set of such alternatives as a whole.
Sensitivity analysis of a wing aeroelastic response
NASA Technical Reports Server (NTRS)
Kapania, Rakesh K.; Eldred, Lloyd B.; Barthelemy, Jean-Francois M.
1991-01-01
A variation of Sobieski's Global Sensitivity Equations (GSE) approach is implemented to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model. The formulation is quite general and accepts any aerodynamics and structural analysis capability. An interface code is written to convert one analysis's output to the other's input, and vice versa. Local sensitivity derivatives are calculated by either analytic methods or finite difference techniques. A program to combine the local sensitivities, such as the sensitivity of the stiffness matrix or the aerodynamic kernel matrix, into global sensitivity derivatives is developed. The aerodynamic analysis package FAST, based on lifting-surface theory, and the structural package ELAPS, which implements Giles' equivalent plate model, are used.
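To make the GSE idea concrete, here is a small numerical sketch with made-up functions (not the wing model above): for two coupled analyses y1 = F1(x, y2) and y2 = F2(x, y1), the local partial derivatives are assembled into a linear system whose solution is the total derivative of the converged coupled response, checked here against finite differences of the coupled solution.

```python
import numpy as np

def F1(x, y2): return np.cos(y2) + 2.0 * x       # illustrative "discipline 1"
def F2(x, y1): return 0.5 * y1 + x**2             # illustrative "discipline 2"

def solve_coupled(x, iters=200):
    y1, y2 = 0.0, 0.0
    for _ in range(iters):                         # simple fixed-point (Jacobi) iteration
        y1, y2 = F1(x, y2), F2(x, y1)
    return y1, y2

x0 = 1.0
y1, y2 = solve_coupled(x0)

# Local partial derivatives at the converged point (analytic here; in practice from each solver)
dF1_dy2, dF1_dx = -np.sin(y2), 2.0
dF2_dy1, dF2_dx = 0.5, 2.0 * x0

# GSE: [[1, -dF1/dy2], [-dF2/dy1, 1]] [dy1/dx, dy2/dx]^T = [dF1/dx, dF2/dx]^T
A = np.array([[1.0, -dF1_dy2], [-dF2_dy1, 1.0]])
b = np.array([dF1_dx, dF2_dx])
dy_dx = np.linalg.solve(A, b)

# Finite-difference check on the converged coupled system
eps = 1e-6
y1p, y2p = solve_coupled(x0 + eps)
y1m, y2m = solve_coupled(x0 - eps)
print(dy_dx, [(y1p - y1m) / (2 * eps), (y2p - y2m) / (2 * eps)])
```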
Shape design sensitivity analysis using domain information
NASA Technical Reports Server (NTRS)
Seong, Hwal-Gyeong; Choi, Kyung K.
1985-01-01
A numerical method for obtaining accurate shape design sensitivity information for built-up structures is developed and demonstrated through analysis of examples. The basic character of the finite element method, which gives more accurate domain information than boundary information, is utilized for shape design sensitivity improvement. A domain approach for shape design sensitivity analysis of built-up structures is derived using the material derivative idea of structural mechanics and the adjoint variable method of design sensitivity analysis. Velocity elements and B-spline curves are introduced to alleviate difficulties in generating domain velocity fields. The regularity requirements of the design velocity field are studied.
3.8 Proposed approach to uncertainty quantification and sensitivity analysis in the next PA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, Greg; Wohlwend, Jen
2017-10-02
This memorandum builds upon Section 3.8 of SRNL (2016) and Flach (2017) by defining key error analysis, uncertainty quantification, and sensitivity analysis concepts and terms, in preparation for the next E-Area Performance Assessment (WSRC 2008) revision.
Van de Walle, P; Hallemans, A; Schwartz, M; Truijen, S; Gosselink, R; Desloovere, K
2012-02-01
Gait efficiency in children with cerebral palsy is usually quantified by metabolic energy expenditure. Mechanical energy estimations, however, can be a valuable supplement as they can be assessed during gait analysis and plotted over the gait cycle, thus revealing information on the timing and sources of increases in energy expenditure. Unfortunately, little information on validity and sensitivity exists. Three mechanical estimation approaches, (1) the centre of mass (CoM) approach, (2) the sum of segmental energies (SSE) approach, and (3) the integrated joint power approach, were validated against oxygen consumption and against each other. Sensitivity was assessed in typical gait and in children with diplegia. The CoM approach underestimated total energy expenditure and showed poor sensitivity. The SSE approach overestimated energy expenditure and showed acceptable sensitivity. Validity and sensitivity were best for the integrated joint power approach. This method is therefore preferred for mechanical energy estimation in children with diplegia. However, mechanical energy should supplement, not replace, metabolic energy, as the total energy expended is not captured by any mechanical approach.
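For reference, the integrated joint power approach boils down to integrating moment times angular velocity over the cycle; the sketch below uses synthetic placeholder signals (not gait-lab data) to show the computation of positive, negative, and total mechanical work.

```python
import numpy as np

def integrate(y, t):
    """Trapezoidal integral of y(t)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

t = np.linspace(0.0, 1.0, 101)               # one gait cycle [s]
moment = 40.0 * np.sin(2 * np.pi * t)        # joint moment [N m] (synthetic)
omega = 2.0 * np.sin(2 * np.pi * t + 0.4)    # joint angular velocity [rad/s] (synthetic)

power = moment * omega                       # instantaneous joint power [W]
pos_work = integrate(np.clip(power, 0.0, None), t)   # generated (concentric) work [J]
neg_work = integrate(np.clip(power, None, 0.0), t)   # absorbed (eccentric) work [J]
total_work = integrate(np.abs(power), t)             # total mechanical work [J]
print(pos_work, neg_work, total_work)
```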
NASA Astrophysics Data System (ADS)
Razavi, S.; Gupta, H. V.
2015-12-01
Earth and environmental systems models (EESMs) are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. Complexity and dimensionality are manifested by introducing many different factors in EESMs (i.e., model parameters, forcings, boundary conditions, etc.) to be identified. Sensitivity Analysis (SA) provides an essential means for characterizing the role and importance of such factors in producing the model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to 'variogram analysis', that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are limiting cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
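For reference, the directional variogram underlying this framework is the standard geostatistical definition, and, as described in the VARS papers, sensitivity metrics are obtained by integrating it over a range of perturbation scales; in this notation, for factor i:

```latex
\gamma_i(h) \;=\; \tfrac{1}{2}\,\mathbb{E}\!\left[\bigl(y(\mathbf{x} + h\,\mathbf{e}_i) - y(\mathbf{x})\bigr)^{2}\right],
\qquad
\mathrm{IVARS}_i(H) \;=\; \int_{0}^{H} \gamma_i(h)\,\mathrm{d}h .
```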
NASA Technical Reports Server (NTRS)
Hou, Jean W.
1985-01-01
The thermal analysis and the calculation of thermal sensitivity of a cure cycle in autoclave processing of thick composite laminates were studied. A finite element program for the thermal analysis and design derivatives calculation for temperature distribution and the degree of cure was developed and verified. It was found that the direct differentiation was the best approach for the thermal design sensitivity analysis. In addition, the approach of the direct differentiation provided time histories of design derivatives which are of great value to the cure cycle designers. The approach of direct differentiation is to be used for further study, i.e., the optimal cycle design.
NASA Astrophysics Data System (ADS)
Meliga, Philippe
2017-07-01
We provide in-depth scrutiny of two methods making use of adjoint-based gradients to compute the sensitivity of drag in the two-dimensional, periodic flow past a circular cylinder (Re≲189 ): first, the time-stepping analysis used in Meliga et al. [Phys. Fluids 26, 104101 (2014), 10.1063/1.4896941] that relies on classical Navier-Stokes modeling and determines the sensitivity to any generic control force from time-dependent adjoint equations marched backwards in time; and, second, a self-consistent approach building on the model of Mantič-Lugo et al. [Phys. Rev. Lett. 113, 084501 (2014), 10.1103/PhysRevLett.113.084501] to compute semilinear approximations of the sensitivity to the mean and fluctuating components of the force. Both approaches are applied to open-loop control by a small secondary cylinder and allow identifying the sensitive regions without knowledge of the controlled states. The theoretical predictions obtained by time-stepping analysis reproduce well the results obtained by direct numerical simulation of the two-cylinder system. So do the predictions obtained by self-consistent analysis, which corroborates the relevance of the approach as a guideline for efficient and systematic control design in the attempt to reduce drag, even though the Reynolds number is not close to the instability threshold and the oscillation amplitude is not small. This is because, unlike simpler approaches relying on linear stability analysis to predict the main features of the flow unsteadiness, the semilinear framework encompasses rigorously the effect of the control on the mean flow, as well as on the finite-amplitude fluctuation that feeds back nonlinearly onto the mean flow via the formation of Reynolds stresses. Such results are especially promising as the self-consistent approach determines the sensitivity from time-independent equations that can be solved iteratively, which makes it generally less computationally demanding. We ultimately discuss the extent to which relevant information can be gained from a hybrid modeling computing self-consistent sensitivities from the postprocessing of DNS data. Application to alternative control objectives such as increasing the lift and alleviating the fluctuating drag and lift is also discussed.
Are quantitative sensitivity analysis methods always reliable?
NASA Astrophysics Data System (ADS)
Huang, X.
2016-12-01
Physical parameterizations developed to represent subgrid-scale physical processes include various uncertain parameters, leading to large uncertainties in today's Earth System Models (ESMs). Sensitivity Analysis (SA) is an efficient approach to quantitatively determine how the uncertainty of an evaluation metric can be apportioned to each parameter. SA can also identify the most influential parameters and thereby reduce the dimensionality of the parametric space. In previous studies, SA-based approaches such as Sobol' and Fourier amplitude sensitivity testing (FAST) divide the parameters into sensitive and insensitive groups; the former is retained while the latter is eliminated for the scientific study at hand. However, these approaches ignore the loss of the interaction effects between the retained parameters and the eliminated ones, which are also part of the total sensitivity indices. As a result, the wrong sensitive parameters might be identified by these traditional SA approaches and tools. In this study, we propose a dynamic global sensitivity analysis method (DGSAM), which iteratively removes the least important parameter until only two parameters are left. We use CLM-CASA, a global terrestrial model, as an example to verify our findings with different sample sizes ranging from 7,000 to 280,000. The results show that DGSAM is able to identify more influential parameters, which is confirmed by parameter calibration experiments using four popular optimization methods. For example, optimization using the top three parameters filtered by DGSAM achieved a substantial improvement of 10% over Sobol'. Furthermore, the computational cost of calibration was reduced to 1/6 of the original. In the future, it will be necessary to explore alternative SA methods that emphasize parameter interactions.
A new sensitivity analysis for structural optimization of composite rotor blades
NASA Technical Reports Server (NTRS)
Venkatesan, C.; Friedmann, P. P.; Yuan, Kuo-An
1993-01-01
This paper presents a detailed mathematical derivation of the sensitivity derivatives for the structural dynamic, aeroelastic stability, and response characteristics of a rotor blade in hover and forward flight. The formulation is termed a semi-analytical approach because certain derivatives have to be evaluated by a finite difference scheme. Using the present formulation, sensitivity derivatives for the structural dynamic and aeroelastic stability characteristics were evaluated for both isotropic and composite rotor blades. Based on the results, useful conclusions are obtained regarding the relative merits of the semi-analytical approach for calculating sensitivity derivatives, compared with a pure finite difference approach.
Mixed kernel function support vector regression for global sensitivity analysis
NASA Astrophysics Data System (ADS)
Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng
2017-11-01
Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the many sensitivity analysis methods in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. With the proposed derivation, estimates of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF combines an orthogonal-polynomial kernel function with a Gaussian radial basis kernel function, so it possesses both the global characteristics of the polynomial kernel and the local characteristics of the Gaussian radial basis kernel. The proposed approach is suitable for high-dimensional and non-linear problems. Its performance is validated on various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
NASA Astrophysics Data System (ADS)
Hoffmann, Max J.; Engelmann, Felix; Matera, Sebastian
2017-01-01
Lattice kinetic Monte Carlo simulations have become a vital tool for predictive quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for the atomic level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past, the application of sensitivity analysis, such as degree of rate control, has been hampered by its exuberant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. In this study, we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models as we demonstrate using the CO oxidation on RuO2(110) as a prototypical reaction. In the first step, we utilize the Fisher information matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on the linear response theory for calculating the sensitivity measure for non-critical conditions which covers the majority of cases. Finally, we adapt a method for sampling coupled finite differences for evaluating the sensitivity measure for lattice based models. This allows for an efficient evaluation even in critical regions near a second order phase transition that are hitherto difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano-scale design of heterogeneous catalysts.
Nagel, Michael; Bolivar, Peter Haring; Brucherseifer, Martin; Kurz, Heinrich; Bosserhoff, Anja; Büttner, Reinhard
2002-04-01
A promising label-free approach for the analysis of genetic material by means of detecting the hybridization of polynucleotides with electromagnetic waves at terahertz (THz) frequencies is presented. Using an integrated waveguide approach, incorporating resonant THz structures as sample carriers and transducers for the analysis of the DNA molecules, we achieve a sensitivity down to femtomolar levels. The approach is demonstrated with time-domain ultrafast techniques based on femtosecond laser pulses for generating and electro-optically detecting broadband THz signals, although the principle can certainly be transferred to other THz technologies.
An easily implemented static condensation method for structural sensitivity analysis
NASA Technical Reports Server (NTRS)
Gangadharan, S. N.; Haftka, R. T.; Nikolaidis, E.
1990-01-01
A black-box approach to static condensation for sensitivity analysis is presented with illustrative examples of a cube and a car structure. The sensitivity of the structural response with respect to a joint stiffness parameter is calculated using the direct method, forward-difference, and central-difference schemes. The efficiency of the various methods for identifying joint stiffness parameters from measured static deflections of these structures is compared. The results indicate that the use of static condensation can reduce computation times significantly and that the black-box approach is only slightly less efficient than the standard implementation of static condensation. The ease of implementation of the black-box approach recommends it for use with general-purpose finite element codes that do not have a built-in facility for static condensation.
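For context, static condensation itself is a standard reduction: partition the stiffness system into master (retained) and slave (eliminated) degrees of freedom and eliminate the latter exactly. The toy sketch below uses made-up matrices (unrelated to the cube and car examples) to show the reduction and verify it against the full solution.

```python
import numpy as np

# K u = f with master DOFs m and slave DOFs s; the condensed system is
# K_c = K_mm - K_ms K_ss^{-1} K_sm  and  f_c = f_m - K_ms K_ss^{-1} f_s,
# which is exact for a linear static problem.
K = np.array([[ 4.0, -2.0,  0.0],
              [-2.0,  5.0, -3.0],
              [ 0.0, -3.0,  6.0]])
f = np.array([1.0, 0.0, 2.0])
m, s = [0, 2], [1]                      # master / slave DOF indices (illustrative)

Kmm, Kms = K[np.ix_(m, m)], K[np.ix_(m, s)]
Ksm, Kss = K[np.ix_(s, m)], K[np.ix_(s, s)]
Kc = Kmm - Kms @ np.linalg.solve(Kss, Ksm)
fc = f[m] - Kms @ np.linalg.solve(Kss, f[s])

u_m_condensed = np.linalg.solve(Kc, fc)
u_full = np.linalg.solve(K, f)
print(u_m_condensed, u_full[m])         # identical for the static case
```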
Comparative Sensitivity Analysis of Muscle Activation Dynamics
Günther, Michael; Götz, Thomas
2015-01-01
We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative of a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties. Other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach to treat initial conditions as parameters and to calculate second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method for identifying particularly low sensitivities to detect superfluous parameters. An experimenter could use it for identifying particularly high sensitivities to improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Unlike Zajac's model, however, Hatze's model can reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly, we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation. PMID:26417379
A global sensitivity analysis approach for morphogenesis models.
Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G
2015-11-21
Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, provided also new insights in the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1991-01-01
Continuing studies associated with the development of the quasi-analytical (QA) sensitivity method for three dimensional transonic flow about wings are presented. Furthermore, initial results using the quasi-analytical approach were obtained and compared to those computed using the finite difference (FD) approach. The basic goals achieved were: (1) carrying out various debugging operations pertaining to the quasi-analytical method; (2) addition of section design variables to the sensitivity equation in the form of multiple right hand sides; (3) reconfiguring the analysis/sensitivity package in order to facilitate the execution of analysis/FD/QA test cases; and (4) enhancing the display of output data to allow careful examination of the results and to permit various comparisons of sensitivity derivatives obtained using the FD/QA methods to be conducted easily and quickly. In addition to discussing the above goals, the results of executing subcritical and supercritical test cases are presented.
On Learning Cluster Coefficient of Private Networks
Wang, Yue; Wu, Xintao; Zhu, Jun; Xiang, Yang
2013-01-01
Enabling accurate analysis of social network data while preserving differential privacy has been challenging since graph features such as clustering coefficient or modularity often have high sensitivity, which is different from traditional aggregate functions (e.g., count and sum) on tabular data. In this paper, we treat a graph statistic as a function f and develop a divide and conquer approach to enforce differential privacy. The basic procedure of this approach is to first decompose the target computation f into several less complex unit computations f1, …, fm connected by basic mathematical operations (e.g., addition, subtraction, multiplication, division), then perturb the output of each fi with Laplace noise derived from its own sensitivity value and the distributed privacy threshold εi, and finally combine those perturbed fi as the perturbed output of computation f. We examine how various operations affect the accuracy of complex computations. When unit computations have large global sensitivity values, we enforce the differential privacy by calibrating noise based on the smooth sensitivity, rather than the global sensitivity. By doing this, we achieve the strict differential privacy guarantee with smaller magnitude noise. We illustrate our approach by using the clustering coefficient, which is a popular statistic used in social network analysis. Empirical evaluations on five real social networks and various synthetic graphs generated from three random graph models show that the developed divide and conquer approach outperforms the direct approach. PMID:24429843
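A toy sketch of the divide-and-conquer perturbation scheme with global-sensitivity calibration (the smooth-sensitivity variant and the exact clustering-coefficient decomposition are not reproduced; all values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_perturb(value, sensitivity, epsilon):
    # Laplace mechanism: noise scale = sensitivity / epsilon.
    return value + rng.laplace(scale=sensitivity / epsilon)

# Decompose f = f1 / f2 (e.g., closed triangles over connected triples)
# and perturb each unit computation with its share of the privacy budget.
f1, f2 = 1200.0, 5400.0      # hypothetical exact unit results
sens1, sens2 = 3.0, 10.0     # assumed global sensitivities of f1 and f2
eps1 = eps2 = 0.5            # budget split (total epsilon = 1)

noisy_f1 = laplace_perturb(f1, sens1, eps1)
noisy_f2 = laplace_perturb(f2, sens2, eps2)
print("perturbed f =", noisy_f1 / noisy_f2)   # combined noisy output
```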
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
NASA Technical Reports Server (NTRS)
Hou, Gene
2004-01-01
The focus of this research is on the development of analysis and sensitivity analysis equations for nonlinear, transient heat transfer problems modeled by p-version, time discontinuous finite element approximation. The resulting matrix equation of the state equation is simply of the form A(x)x = c, representing a single-step, time-marching scheme. The Newton-Raphson method is used to solve the nonlinear equation. Examples are first provided to demonstrate the accuracy characteristics of the resultant finite element approximation. A direct differentiation approach is then used to compute the thermal sensitivities of a nonlinear heat transfer problem. The report shows that only minimal coding effort is required to enhance the analysis code with the sensitivity analysis capability.
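A scalar stand-in for the direct differentiation procedure: Newton-Raphson solves a nonlinear state equation of the form A(x)x = c, and the converged tangent is reused for the sensitivity solve, which is why so little extra coding is needed (the model and values are illustrative):

```python
k, b, c = 2.0, 0.1, 5.0    # conductivity, nonlinearity, load (illustrative)

def residual(x):
    # R(x) = A(x)*x - c with A(x) = k*(1 + b*x).
    return k * (1 + b * x) * x - c

def dR_dx(x):
    # Tangent, used by both the Newton solve and the sensitivity solve.
    return k * (1 + 2 * b * x)

# Newton-Raphson solve of the nonlinear state equation.
x = 1.0
for _ in range(20):
    dx = -residual(x) / dR_dx(x)
    x += dx
    if abs(dx) < 1e-12:
        break

# Direct differentiation w.r.t. b: (dR/dx) * (dx/db) = -(dR/db),
# with dR/db = k * x**2 here.
dx_db = -(k * x ** 2) / dR_dx(x)
print(f"x = {x:.6f}, dx/db = {dx_db:.6f}")
```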
Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria
NASA Astrophysics Data System (ADS)
Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong
2017-08-01
In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for the reliability and sensitivity analysis of complex components with arbitrary distribution parameters is investigated, using the perturbation method, the response surface method, the Edgeworth series and the sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability analysis in finite element modeling engineering practice.
VARS-TOOL: A Comprehensive, Efficient, and Robust Sensitivity Analysis Toolbox
NASA Astrophysics Data System (ADS)
Razavi, S.; Sheikholeslami, R.; Haghnegahdar, A.; Esfahbod, B.
2016-12-01
VARS-TOOL is an advanced sensitivity and uncertainty analysis toolbox, applicable to the full range of computer simulation models, including Earth and Environmental Systems Models (EESMs). The toolbox was originally developed around VARS (Variogram Analysis of Response Surfaces), which is a general framework for Global Sensitivity Analysis (GSA) that utilizes the variogram/covariogram concept to characterize the full spectrum of sensitivity-related information, thereby providing a comprehensive set of "global" sensitivity metrics with minimal computational cost. VARS-TOOL is unique in that, with a single sample set (set of simulation model runs), it generates simultaneously three philosophically different families of global sensitivity metrics, including (1) variogram-based metrics called IVARS (Integrated Variogram Across a Range of Scales - VARS approach), (2) variance-based total-order effects (Sobol approach), and (3) derivative-based elementary effects (Morris approach). VARS-TOOL also includes two novel features: the first is a sequential sampling algorithm, called Progressive Latin Hypercube Sampling (PLHS), which allows progressively increasing the sample size for GSA while maintaining the required sample distributional properties; the second is a "grouping strategy" that adaptively groups the model parameters based on their sensitivity or functioning to maximize the reliability of GSA results. These features, in conjunction with bootstrapping, enable the user to monitor the stability, robustness, and convergence of GSA with the increase in sample size for any given case study. VARS-TOOL has been shown to achieve robust and stable results within 1-2 orders of magnitude smaller sample sizes (fewer model runs) than alternative tools. VARS-TOOL, available in MATLAB and Python, is under continuous development and new capabilities and features are forthcoming.
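A brute-force illustration of the variogram concept behind VARS, not the VARS-TOOL sampling algorithm: the directional variogram of an assumed response surface is estimated for each input and integrated over a range of lags to give an IVARS-like metric:

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    # Assumed response surface standing in for a simulation model.
    return np.sin(2 * np.pi * x[:, 0]) + 0.5 * x[:, 1] ** 2

def directional_variogram(i, h, n=20000, dim=2):
    # gamma_i(h) = 0.5 * E[(f(x + h*e_i) - f(x))^2], x uniform in [0,1]^dim.
    x = rng.random((n, dim)) * (1 - h)   # keep x + h*e_i inside the unit cube
    xh = x.copy()
    xh[:, i] += h
    return 0.5 * np.mean((f(xh) - f(x)) ** 2)

lags = np.linspace(0.01, 0.3, 30)
for i, name in enumerate(["x1", "x2"]):
    gamma = np.array([directional_variogram(i, h) for h in lags])
    ivars = gamma.sum() * (lags[1] - lags[0])   # crude IVARS-like integral
    print(f"{name}: integrated directional variogram = {ivars:.4f}")
```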
Development of Multiobjective Optimization Techniques for Sonic Boom Minimization
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.
1996-01-01
A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier Stokes equations solver. Aerodynamic design sensitivities for high speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications, namely gas turbine blades and high speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulations such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house developed finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions. The multidisciplinary design optimization procedures for high speed wing-body configurations simultaneously improve the aerodynamic, the sonic boom and the structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.
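For the multiobjective step, the Kreisselmeier-Steinhauser function named above aggregates several objectives into one smooth, differentiable envelope; a short sketch with hypothetical normalized objectives:

```python
import numpy as np

def ks_aggregate(f, rho=50.0):
    # Kreisselmeier-Steinhauser envelope: approaches max(f) as rho grows.
    # Written in a numerically stable form (factor out the maximum).
    f = np.asarray(f, dtype=float)
    fmax = f.max()
    return fmax + np.log(np.sum(np.exp(rho * (f - fmax)))) / rho

# Hypothetical normalized objectives (e.g., drag, boom loudness, mass).
objectives = [0.82, 0.64, 0.71]
for rho in (5.0, 50.0, 500.0):
    print(rho, ks_aggregate(objectives, rho))   # tightens toward max(f)
```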
Noise spectroscopy as an equilibrium analysis tool for highly sensitive electrical biosensing
NASA Astrophysics Data System (ADS)
Guo, Qiushi; Kong, Tao; Su, Ruigong; Zhang, Qi; Cheng, Guosheng
2012-08-01
We demonstrate an approach for highly sensitive bio-detection based on silicon nanowire field-effect transistors employing low frequency noise spectroscopy analysis. The inverse of the noise amplitude of the device exhibits an enhanced gate coupling effect in the strong inversion regime when measured in buffer solution compared with air. The approach was further validated by the detection of cardiac troponin I at 0.23 ng/ml in fetal bovine serum, in which a change of two orders of magnitude in noise amplitude was characterized. The selectivity of the proposed approach was also assessed by the addition of a 10 μg/ml bovine serum albumin solution.
Quantile regression in the presence of monotone missingness with sensitivity analysis
Liu, Minzhao; Daniels, Michael J.; Perri, Michael G.
2016-01-01
In this paper, we develop methods for longitudinal quantile regression when there is monotone missingness. In particular, we propose pattern mixture models with a constraint that provides a straightforward interpretation of the marginal quantile regression parameters. Our approach allows sensitivity analysis which is an essential component in inference for incomplete data. To facilitate computation of the likelihood, we propose a novel way to obtain analytic forms for the required integrals. We conduct simulations to examine the robustness of our approach to modeling assumptions and compare its performance to competing approaches. The model is applied to data from a recent clinical trial on weight management. PMID:26041008
Corso, Phaedra S.; Ingels, Justin B.; Kogan, Steven M.; Foster, E. Michael; Chen, Yi-Fu; Brody, Gene H.
2013-01-01
Programmatic cost analyses of preventive interventions commonly have a number of methodological difficulties. To determine the mean total costs and properly characterize variability, one often has to deal with small sample sizes, skewed distributions, and especially missing data. Standard approaches for dealing with missing data such as multiple imputation may suffer from a small sample size, a lack of appropriate covariates, or too few details around the method used to handle the missing data. In this study, we estimate total programmatic costs for a prevention trial evaluating the Strong African American Families-Teen program. This intervention focuses on the prevention of substance abuse and risky sexual behavior. To account for missing data in the assessment of programmatic costs we compare multiple imputation to probabilistic sensitivity analysis. The latter approach uses collected cost data to create a distribution around each input parameter. We found that with the multiple imputation approach, the mean (95% confidence interval) incremental difference was $2149 ($397, $3901). With the probabilistic sensitivity analysis approach, the incremental difference was $2583 ($778, $4346). Although the true cost of the program is unknown, probabilistic sensitivity analysis may be a more viable alternative for capturing variability in estimates of programmatic costs when dealing with missing data, particularly with small sample sizes and the lack of strong predictor variables. Further, the larger standard errors produced by the probabilistic sensitivity analysis method may signal its ability to capture more of the variability in the data, thus better informing policymakers on the potentially true cost of the intervention. PMID:23299559
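A minimal sketch of the probabilistic-sensitivity-analysis idea applied to costs: each cost category is drawn from a distribution standing in for one fitted to collected data, and the incremental difference is summarized over Monte Carlo draws (all distributions and values are invented, not the trial's data):

```python
import numpy as np

rng = np.random.default_rng(42)
n_draws = 10000

def program_cost():
    # Hypothetical per-category cost distributions (gamma for skewed costs).
    staff = rng.gamma(shape=20.0, scale=60.0)
    travel = rng.gamma(shape=5.0, scale=80.0)
    materials = rng.gamma(shape=10.0, scale=50.0)
    return staff + travel + materials

def control_cost():
    return rng.gamma(shape=8.0, scale=40.0)

incremental = np.array([program_cost() - control_cost()
                        for _ in range(n_draws)])
lo, hi = np.percentile(incremental, [2.5, 97.5])
print(f"mean incremental cost = {incremental.mean():.0f} ({lo:.0f}, {hi:.0f})")
```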
Source apportionment and sensitivity analysis: two methodologies with two different purposes
NASA Astrophysics Data System (ADS)
Clappier, Alain; Belis, Claudio A.; Pernigotti, Denise; Thunis, Philippe
2017-11-01
This work reviews the existing methodologies for source apportionment and sensitivity analysis to identify key differences and stress their implicit limitations. The emphasis is laid on the differences between source impacts (sensitivity analysis) and contributions (source apportionment) obtained by using four different methodologies: brute-force top-down, brute-force bottom-up, tagged species and the decoupled direct method (DDM). A simple theoretical example is used to compare these approaches, highlighting differences and potential implications for policy. When the relationships between concentration and emissions are linear, impacts and contributions are equivalent concepts. In this case, source apportionment and sensitivity analysis may be used indifferently for both air quality planning purposes and quantifying source contributions. However, this study demonstrates that when the relationship between emissions and concentrations is nonlinear, sensitivity approaches are not suitable to retrieve source contributions and source apportionment methods are not appropriate to evaluate the impact of abatement strategies. A quantification of the potential nonlinearities should therefore be the first step prior to source apportionment or planning applications, to prevent any limitations in their use. When nonlinearity is mild, these limitations may, however, be acceptable in the context of the other uncertainties inherent to complex models. Moreover, when using sensitivity analysis for planning, it is important to note that, under nonlinear circumstances, the calculated impacts will only provide information for the exact conditions (e.g. emission reduction share) that are simulated.
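A toy calculation of the distinction drawn above: for a linear concentration-emission relationship, scaling up the impact of a 50% emission reduction recovers the source contribution, while for a nonlinear relationship the two quantities diverge (coefficients are illustrative):

```python
# Contribution (apportionment-like) vs impact (sensitivity-like) for
# linear and nonlinear concentration-emission relationships.
E1, E2 = 10.0, 10.0

def C_linear(e1, e2):
    return 2.0 * e1 + 1.0 * e2

def C_nonlinear(e1, e2):
    return 0.2 * e1 ** 2 + 1.0 * e2    # quadratic response to source 1

for C in (C_linear, C_nonlinear):
    total = C(E1, E2)
    contribution = total - C(0.0, E2)     # tagged-like share of source 1
    impact_50 = total - C(0.5 * E1, E2)   # effect of halving source 1
    print(C.__name__, "contribution:", contribution,
          "| 2 x (50% impact):", 2 * impact_50)

# Linear case: doubling the 50%-reduction impact recovers the contribution.
# Nonlinear case: the two quantities diverge, as the review warns.
```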
First- and second-order sensitivity analysis of linear and nonlinear structures
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Mroz, Z.
1986-01-01
This paper employs the principle of virtual work to derive sensitivity derivatives of structural response with respect to stiffness parameters using both direct and adjoint approaches. The computations required are based on additional load conditions characterized by imposed initial strains, body forces, or surface tractions. As such, they are equally applicable to numerical or analytical solution techniques. The relative efficiency of various approaches for calculating first and second derivatives is assessed. It is shown that for the evaluation of second derivatives the most efficient approach is one that makes use of both the first-order sensitivities and adjoint vectors. Two example problems are used for demonstrating the various approaches.
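A small sketch of the adjoint advantage for a self-adjoint response: for the compliance of a linear structure K u = f, the adjoint vector coincides with the displacement u, so each stiffness derivative costs only a matrix-vector product and no additional solve (the two-spring assembly is an invented example):

```python
import numpy as np

def K(k1, k2):
    # Two-DOF spring assembly; stiffness parameters k1 and k2.
    return np.array([[k1 + k2, -k2],
                     [-k2, k2]])

f = np.array([0.0, 1.0])               # load (illustrative)
k1, k2 = 3.0, 2.0
u = np.linalg.solve(K(k1, k2), f)      # state solution
c = f @ u                              # compliance

# Adjoint approach: compliance is self-adjoint, the adjoint vector is u,
# and dc/dk = -u^T (dK/dk) u with no extra linear solves.
dK_dk1 = np.array([[1.0, 0.0], [0.0, 0.0]])
dK_dk2 = np.array([[1.0, -1.0], [-1.0, 1.0]])
for name, dK in (("k1", dK_dk1), ("k2", dK_dk2)):
    print(f"dc/d{name} (adjoint) = {-u @ dK @ u:.6f}")

# Finite-difference check of dc/dk1.
h = 1e-6
u_h = np.linalg.solve(K(k1 + h, k2), f)
print("dc/dk1 (FD)      =", (f @ u_h - c) / h)
```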
A new framework for comprehensive, robust, and efficient global sensitivity analysis: 1. Theory
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin V.
2016-01-01
Computer simulation models are continually growing in complexity with increasingly more factors to be identified. Sensitivity Analysis (SA) provides an essential means for understanding the role and importance of these factors in producing model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to "variogram analysis," that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. Synthetic functions that resemble actual model response surfaces are used to illustrate the concepts, and show VARS to be as much as two orders of magnitude more computationally efficient than the state-of-the-art Sobol approach. In a companion paper, we propose a practical implementation strategy, and demonstrate the effectiveness, efficiency, and reliability (robustness) of the VARS framework on real-data case studies.
Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations
NASA Technical Reports Server (NTRS)
Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.
2017-01-01
A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted by integrating the sensitivity components from each discipline of the coupled system. Numerical results verify the accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. The complex-variable implementation of sensitivity analysis in DYMORE and the coupled FUN3D/DYMORE system is verified by comparison with real-valued analysis and sensitivities. The correctness of the adjoint formulations for the FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and the FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
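The complex-variable approach referred to above can be illustrated in a few lines: perturbing the input along the imaginary axis gives derivatives free of subtractive cancellation, so the step size can be made arbitrarily small:

```python
import numpy as np

def f(x):
    # Any analytic function implemented with complex-safe operations.
    return x * np.exp(np.sin(x)) / np.sqrt(1 + x ** 2)

# Complex-step derivative: f'(x) ~ Im[f(x + i*h)] / h. No subtraction of
# nearly equal numbers occurs, so h can be taken extremely small.
x, h = 1.3, 1e-200
dfdx_cs = np.imag(f(x + 1j * h)) / h
dfdx_fd = (f(x + 1e-7) - f(x - 1e-7)) / 2e-7   # central FD for comparison
print(dfdx_cs, dfdx_fd)
```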
Tian, Yuan; Hassmiller Lich, Kristen; Osgood, Nathaniel D; Eom, Kirsten; Matchar, David B
2016-11-01
As health services researchers and decision makers tackle more difficult problems using simulation models, the number of parameters and the corresponding degree of uncertainty have increased. This often results in reduced confidence in such complex models to guide decision making. The objective of this study was to demonstrate a systematic approach of linked sensitivity analysis, calibration, and uncertainty analysis to improve confidence in complex models. Four techniques were integrated and applied to a System Dynamics stroke model of US veterans, which was developed to inform systemwide intervention and research planning: the Morris method (sensitivity analysis), a multistart Powell hill-climbing algorithm and generalized likelihood uncertainty estimation (calibration), and Monte Carlo simulation (uncertainty analysis). Of 60 uncertain parameters, sensitivity analysis identified 29 needing calibration, 7 that did not need calibration but significantly influenced key stroke outcomes, and 24 not influential to calibration or stroke outcomes that were fixed at their best-guess values. One thousand alternative well-calibrated baselines were obtained to reflect calibration uncertainty and brought into the uncertainty analysis. The initial stroke incidence rate among veterans was identified as the most influential uncertain parameter, for which further data should be collected. That said, accounting for current uncertainty, the analysis of 15 distinct prevention and treatment interventions provided a robust conclusion that hypertension control for all veterans would yield the largest gain in quality-adjusted life years. For complex health care models, a mixed approach was applied to examine the uncertainty surrounding key stroke outcomes and the robustness of conclusions. We demonstrate that this rigorous approach can be practical, and advocate for such analysis to promote understanding of the limits of certainty in applying models to current decisions and to guide future data collection. © The Author(s) 2016.
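A compact sketch of the Morris screening step used above: random one-at-a-time moves on a grid yield elementary effects whose mean absolute value (mu*) ranks parameter influence and whose spread (sigma) flags nonlinearity or interactions; the four-input test function is an invented stand-in for the stroke model:

```python
import numpy as np

rng = np.random.default_rng(7)

def model(x):
    # Invented stand-in for the simulation output.
    return x[0] ** 2 + 0.5 * x[1] + 0.1 * x[1] * x[2] + 0.01 * x[3]

k, r, delta = 4, 50, 0.25            # inputs, trajectories, grid jump
ee = [[] for _ in range(k)]
for _ in range(r):
    x = rng.integers(0, 3, size=k) / 4.0        # random grid point
    for i in rng.permutation(k):                # one-at-a-time moves
        y0 = model(x)
        x[i] += delta
        ee[i].append((model(x) - y0) / delta)   # elementary effect of input i

for i in range(k):
    e = np.abs(ee[i])
    print(f"x{i}: mu* = {np.mean(e):.3f}, sigma = {np.std(ee[i]):.3f}")
```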
Ion-Sensitive Field-Effect Transistor for Biological Sensing
Lee, Chang-Soo; Kim, Sang Kyu; Kim, Moonil
2009-01-01
In recent years there has been great progress in applying FET-type biosensors for highly sensitive biological detection. Among them, the ISFET (ion-sensitive field-effect transistor) is one of the most intriguing approaches in electrical biosensing technology. Here, we review some of the main advances in this field over the past few years, explore its application prospects, and discuss the main issues, approaches, and challenges, with the aim of stimulating a broader interest in developing ISFET-based biosensors and extending their applications for reliable and sensitive analysis of various biomolecules such as DNA, proteins, enzymes, and cells. PMID:22423205
On Sensitivity Analysis within the 4DVAR Framework
2014-02-01
…sensitivity'' (AS) approach, Lee et al. (2001) estimated the sensitivity of the Indonesian Throughflow to remote wind forcing, Losch and Heimbach (2007) … of massive parallelization. The ensemble sensitivity (ES) analysis (e.g., Ancell and Hakim 2007; Torn and Hakim 2008) follows the basic principle of … variational assimilation techniques (e.g., Cao et al. 2007; Liu et al. 2008; Yaremchuk et al. 2009; Clayton et al. 2013). In particular, Yaremchuk
Hyperspectral data analysis procedures with reduced sensitivity to noise
NASA Technical Reports Server (NTRS)
Landgrebe, David A.
1993-01-01
Multispectral sensor systems have steadily improved over the years in their ability to deliver increased spectral detail. With the advent of hyperspectral sensors, including imaging spectrometers, this technology is in the process of taking a large leap forward, providing the possibility of delivering much more detailed information. However, this direction of development has drawn even more attention to the matter of noise and other deleterious effects in the data, because reducing the fundamental limitations of spectral detail on information collection raises the limitations presented by noise to even greater importance. Much current effort in remote sensing research is thus being devoted to adjusting the data to mitigate the effects of noise and other deleterious effects. A parallel approach to the problem is to look for analysis approaches and procedures that have reduced sensitivity to such effects. We discuss some of the fundamental principles which define analysis algorithm characteristics providing such reduced sensitivity. One such analysis procedure is described, including an example analysis of a data set illustrating this effect.
Digression and Value Concatenation to Enable Privacy-Preserving Regression.
Li, Xiao-Bai; Sarkar, Sumit
2014-09-01
Regression techniques can be used not only for legitimate data analysis, but also to infer private information about individuals. In this paper, we demonstrate that regression trees, a popular data-analysis and data-mining technique, can be used to effectively reveal individuals' sensitive data. This problem, which we call a "regression attack," has not been addressed in the data privacy literature, and existing privacy-preserving techniques are not appropriate in coping with this problem. We propose a new approach to counter regression attacks. To protect against privacy disclosure, our approach introduces a novel measure, called digression, which assesses the sensitive value disclosure risk in the process of building a regression tree model. Specifically, we develop an algorithm that uses the measure for pruning the tree to limit disclosure of sensitive data. We also propose a dynamic value-concatenation method for anonymizing data, which better preserves data utility than a user-defined generalization scheme commonly used in existing approaches. Our approach can be used for anonymizing both numeric and categorical data. An experimental study is conducted using real-world financial, economic and healthcare data. The results of the experiments demonstrate that the proposed approach is very effective in protecting data privacy while preserving data quality for research and analysis.
Sensitivity Analysis of the Static Aeroelastic Response of a Wing
NASA Technical Reports Server (NTRS)
Eldred, Lloyd B.
1993-01-01
A technique to obtain the sensitivity of the static aeroelastic response of a three dimensional wing model is designed and implemented. The formulation is quite general and accepts any aerodynamic and structural analysis capability. A program to combine the discipline level, or local, sensitivities into global sensitivity derivatives is developed. A variety of representations of the wing pressure field are developed and tested to determine the most accurate and efficient scheme for representing the field outside of the aerodynamic code. Chebyshev polynomials are used to globally fit the pressure field. This approach had some difficulties in representing local variations in the field, so a variety of local interpolation polynomial pressure representations are also implemented. These panel based representations use a constant pressure value, a bilinearly interpolated value, or a biquadratically interpolated value. The interpolation polynomial approaches do an excellent job of reducing the numerical problems of the global approach for comparable computational effort. Regardless of the pressure representation used, sensitivity and response results with excellent accuracy have been produced for large integrated quantities such as wing tip deflection and trim angle of attack. The sensitivities of such things as individual generalized displacements have been found with fair accuracy. In general, accuracy is found to be proportional to the size of the derivatives relative to the quantity itself.
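A small illustration of the global-versus-local trade-off described above: a global Chebyshev fit of a pressure-like curve containing a steep local feature, alongside panel-style local interpolation on the same samples (the data are invented):

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

def cp_true(x):
    # Hypothetical pressure-coefficient curve: smooth loading plus a steep
    # local feature mimicking a shock (for illustration only).
    return (-1.2 * np.sqrt(np.clip(1 - x ** 2, 0, None))
            + 0.8 / (1 + np.exp(-40 * (x - 0.3))))

x_nodes = np.linspace(-1.0, 1.0, 60)   # sampled "aerodynamic code" output
x_fine = np.linspace(-1.0, 1.0, 1001)

# Global Chebyshev fit of the sampled field.
coef = cheb.chebfit(x_nodes, cp_true(x_nodes), deg=15)
err_global = np.abs(cheb.chebval(x_fine, coef) - cp_true(x_fine)).max()

# Panel-style local representation: piecewise-linear between the nodes.
err_local = np.abs(np.interp(x_fine, x_nodes, cp_true(x_nodes))
                   - cp_true(x_fine)).max()

print(f"global fit max error: {err_global:.3e}, "
      f"local interpolation max error: {err_local:.3e}")
```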
Nestorov, I A; Aarons, L J; Rowland, M
1997-08-01
Sensitivity analysis studies the effects of the inherent variability and uncertainty in model parameters on the model outputs and may be a useful tool at all stages of the pharmacokinetic modeling process. The present study examined the sensitivity of a whole-body physiologically based pharmacokinetic (PBPK) model for the distribution kinetics of nine 5-n-alkyl-5-ethyl barbituric acids in arterial blood and 14 tissues (lung, liver, kidney, stomach, pancreas, spleen, gut, muscle, adipose, skin, bone, heart, brain, testes) after i.v. bolus administration to rats. The aims were to obtain new insights into the model used, to rank the model parameters involved according to their impact on the model outputs and to study the changes in the sensitivity induced by the increase in the lipophilicity of the homologues on ascending the series. Two approaches for sensitivity analysis have been implemented. The first, based on the Matrix Perturbation Theory, uses a sensitivity index defined as the normalized sensitivity of the 2-norm of the model compartmental matrix to perturbations in its entries. The second approach uses the traditional definition of the normalized sensitivity function as the relative change in a model state (a tissue concentration) corresponding to a relative change in a model parameter. Autosensitivity has been defined as sensitivity of a state to any of its parameters; cross-sensitivity as the sensitivity of a state to any other states' parameters. Using the two approaches, the sensitivity of representative tissue concentrations (lung, liver, kidney, stomach, gut, adipose, heart, and brain) to the following model parameters: tissue-to-unbound plasma partition coefficients, tissue blood flows, unbound renal and intrinsic hepatic clearance, permeability surface area product of the brain, have been analyzed. Both the tissues and the parameters were ranked according to their sensitivity and impact. The following general conclusions were drawn: (i) the overall sensitivity of the system to all parameters involved is small due to the weak connectivity of the system structure; (ii) the time course of both the auto- and cross-sensitivity functions for all tissues depends on the dynamics of the tissues themselves, e.g., the higher the perfusion of a tissue, the higher are both its cross-sensitivity to other tissues' parameters and the cross-sensitivities of other tissues to its parameters; and (iii) with a few exceptions, there is not a marked influence of the lipophilicity of the homologues on either the pattern or the values of the sensitivity functions. The estimates of the sensitivity and the subsequent tissue and parameter rankings may be extended to other drugs, sharing the same common structure of the whole body PBPK model, and having similar model parameters. Results show also that the computationally simple Matrix Perturbation Analysis should be used only when an initial idea about the sensitivity of a system is required. If comprehensive information regarding the sensitivity is needed, the numerically expensive Direct Sensitivity Analysis should be used.
Use of SUSA in Uncertainty and Sensitivity Analysis for INL VHTR Coupled Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard Strydom
2010-06-01
The need for a defendable and systematic Uncertainty and Sensitivity approach that conforms to the Code Scaling, Applicability, and Uncertainty (CSAU) process, and that could be used for a wide variety of software codes, was defined in 2008. The GRS (Gesellschaft für Anlagen und Reaktorsicherheit) company of Germany has developed one type of CSAU approach that is particularly well suited for legacy coupled core analysis codes, and a trial version of their commercial software product SUSA (Software for Uncertainty and Sensitivity Analyses) was acquired on May 12, 2010. This interim milestone report provides an overview of the current status of the implementation and testing of SUSA at the INL VHTR Project Office.
Eigenvalue Contributon Estimator for Sensitivity Calculations with TSUNAMI-3D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T; Williams, Mark L
2007-01-01
Since the release of the Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) codes in SCALE [1], the use of sensitivity and uncertainty analysis techniques for criticality safety applications has greatly increased within the user community. In general, sensitivity and uncertainty analysis is transitioning from a technique used only by specialists to a practical tool in routine use. With the desire to use the tool more routinely comes the need to improve the solution methodology to reduce the input and computational burden on the user. This paper reviews the current solution methodology of the Monte Carlo eigenvalue sensitivity analysis sequence TSUNAMI-3D, describes an alternative approach, and presents results from both methodologies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groen, E.A., E-mail: Evelyne.Groen@gmail.com; Heijungs, R.; Leiden University, Einsteinweg 2, Leiden 2333 CC
Life cycle assessment (LCA) is an established tool to quantify the environmental impact of a product. A good assessment of uncertainty is important for making well-informed decisions in comparative LCA, as well as for correctly prioritising data collection efforts. Under- or overestimation of output uncertainty (e.g. output variance) will lead to incorrect decisions in such matters. The presence of correlations between input parameters during uncertainty propagation can increase or decrease the output variance. However, most LCA studies that include uncertainty analysis ignore correlations between input parameters during uncertainty propagation, which may lead to incorrect conclusions. Two approaches to include correlations between input parameters during uncertainty propagation and global sensitivity analysis were studied: an analytical approach and a sampling approach. The use of both approaches is illustrated for an artificial case study of electricity production. Results demonstrate that both approaches yield approximately the same output variance and sensitivity indices for this specific case study. Furthermore, we demonstrate that the analytical approach can be used to quantify the risk of ignoring correlations between input parameters during uncertainty propagation in LCA. We demonstrate that: (1) we can predict if including correlations among input parameters in uncertainty propagation will increase or decrease output variance; (2) we can quantify the risk of ignoring correlations on the output variance and the global sensitivity indices. Moreover, this procedure requires only little data. - Highlights: • Ignoring correlation leads to under- or overestimation of the output variance. • We demonstrated that the risk of ignoring correlation can be quantified. • The procedure proposed is generally applicable in life cycle assessment. • In some cases, ignoring correlation has a minimal effect on decision-making tools.
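A minimal sketch of the analytical (first-order Taylor) propagation with and without correlations: the output variance is grad' Sigma grad, so a single off-diagonal covariance entry directly shifts the estimate (the aggregation function, standard deviations, and correlation are assumptions):

```python
import numpy as np

def g(x):
    # Illustrative LCA-like aggregation of three input parameters.
    return x[0] * x[1] + x[2]

# Gradient of g at the nominal point, by central differences.
x0 = np.array([2.0, 3.0, 1.0])
h = 1e-6
grad = np.array([(g(x0 + h * e) - g(x0 - h * e)) / (2 * h)
                 for e in np.eye(3)])

sd = np.array([0.2, 0.3, 0.1])
rho12 = 0.8                       # assumed correlation between x1 and x2
cov_indep = np.diag(sd ** 2)
cov_corr = cov_indep.copy()
cov_corr[0, 1] = cov_corr[1, 0] = rho12 * sd[0] * sd[1]

print("variance, correlations ignored :", grad @ cov_indep @ grad)
print("variance, correlations included:", grad @ cov_corr @ grad)
# A positive correlation inflates the output variance in this example.
```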
NASA Astrophysics Data System (ADS)
Hameed, M.; Demirel, M. C.; Moradkhani, H.
2015-12-01
The Global Sensitivity Analysis (GSA) approach helps identify the effectiveness of model parameters or inputs and thus provides essential information about the model performance. In this study, the effects of the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters, forcing data, and initial conditions are analysed by using two GSA methods: Sobol' and the Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three different periods: one-year, four-year, and seven-year. Four factors are considered and evaluated by using the two sensitivity analysis methods: the simulation length, parameter range, model initial conditions, and the reliability of the global sensitivity analysis methods. The reliability of the sensitivity analysis results is compared based on (1) the agreement between the two sensitivity analysis methods (Sobol' and FAST) in highlighting the same parameters or input as the most influential and (2) how the methods cohere in ranking these sensitive parameters under the same conditions (sub-basins and simulation length). The results show coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, it is found that the FAST method is sufficient to evaluate the main effects of the model parameters and inputs. Another conclusion of this study is that the smaller the parameter and initial-condition ranges, the more consistent and coherent the results of the two sensitivity analysis methods.
Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis
Adnan, Tassha Hilda
2016-01-01
Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is to determine the sample sizes that are sufficient for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians; hence, sample size calculation might not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. These tables were derived from the formulation of the sensitivity and specificity test using Power Analysis and Sample Size (PASS) software, based on desired type I error, power and effect size. Approaches to using the tables are also discussed. PMID:27891446
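Tables of this kind can be reproduced in spirit with the standard normal-approximation formula (Buderer-style): the number of diseased (or disease-free) subjects needed for a desired precision is inflated by the prevalence; a sketch:

```python
import math
from scipy.stats import norm

def n_for_sensitivity(se, d, prevalence, alpha=0.05):
    # Diseased subjects needed to estimate sensitivity se within +/- d,
    # then total sample size after accounting for disease prevalence.
    z = norm.ppf(1 - alpha / 2)
    n_cases = z ** 2 * se * (1 - se) / d ** 2
    return math.ceil(n_cases / prevalence)

def n_for_specificity(sp, d, prevalence, alpha=0.05):
    z = norm.ppf(1 - alpha / 2)
    n_controls = z ** 2 * sp * (1 - sp) / d ** 2
    return math.ceil(n_controls / (1 - prevalence))

print(n_for_sensitivity(se=0.90, d=0.05, prevalence=0.10))   # 1383
print(n_for_specificity(sp=0.85, d=0.05, prevalence=0.10))   # 218
```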
A flexible, interpretable framework for assessing sensitivity to unmeasured confounding.
Dorie, Vincent; Harada, Masataka; Carnegie, Nicole Bohme; Hill, Jennifer
2016-09-10
When estimating causal effects, unmeasured confounding and model misspecification are both potential sources of bias. We propose a method to simultaneously address both issues in the form of a semi-parametric sensitivity analysis. In particular, our approach incorporates Bayesian Additive Regression Trees into a two-parameter sensitivity analysis strategy that assesses sensitivity of posterior distributions of treatment effects to choices of sensitivity parameters. This results in an easily interpretable framework for testing for the impact of an unmeasured confounder that also limits the number of modeling assumptions. We evaluate our approach in a large-scale simulation setting and with high blood pressure data taken from the Third National Health and Nutrition Examination Survey. The model is implemented as open-source software, integrated into the treatSens package for the R statistical programming language. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Simulation-based sensitivity analysis for non-ignorably missing data.
Yin, Peng; Shi, Jian Q
2017-01-01
Sensitivity analysis is popular in dealing with missing data problems, particularly for non-ignorable missingness, where the full-likelihood method cannot be adopted. It analyses how sensitively the conclusions (output) may depend on assumptions or parameters (input) about the missing data, i.e. the missing data mechanism. We call models subject to this kind of uncertainty sensitivity models. To make conventional sensitivity analysis more useful in practice, we need to define some simple and interpretable statistical quantities to assess the sensitivity models and make evidence-based analysis. We propose a novel approach in this paper attempting to investigate the plausibility of each missing data mechanism model assumption, by comparing the simulated datasets from various MNAR models with the observed data non-parametrically, using the K-nearest-neighbour distances. Some asymptotic theory has also been provided. A key step of this method is to plug in a plausibility evaluation system for each sensitivity parameter, to select plausible values and reject unlikely values, instead of considering all proposed values of sensitivity parameters as in the conventional sensitivity analysis method. The method is generic and has been applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, analysis of incomplete longitudinal data and mean estimation with non-ignorable missing data.
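A synthetic sketch of the plausibility-evaluation idea: datasets are simulated under candidate values of the sensitivity parameter of an assumed logistic MNAR selection model, and their K-nearest-neighbour distance to the observed data ranks plausibility (the mechanism and values are invented; the paper's asymptotic results are not reproduced):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)

# "Observed" responses generated under a true MNAR selection mechanism
# with sensitivity parameter delta_true (everything here is synthetic).
delta_true = 1.0
y = rng.normal(0, 1, 4000)
observed = y[rng.random(4000) < 1 / (1 + np.exp(delta_true * y))]

def knn_distance(a, b, k=5):
    # Mean distance from each point of a to its k-th nearest neighbour in b.
    nn = NearestNeighbors(n_neighbors=k).fit(b.reshape(-1, 1))
    d, _ = nn.kneighbors(a.reshape(-1, 1))
    return d[:, -1].mean()

for delta in (0.0, 0.5, 1.0, 2.0):   # candidate sensitivity parameters
    ysim = rng.normal(0, 1, 4000)
    sim = ysim[rng.random(4000) < 1 / (1 + np.exp(delta * ysim))]
    print(delta, knn_distance(observed, sim))   # smaller -> more plausible
```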
An adjoint method of sensitivity analysis for residual vibrations of structures subject to impacts
NASA Astrophysics Data System (ADS)
Yan, Kun; Cheng, Gengdong
2018-03-01
For structures subject to impact loads, residual vibration reduction is more and more important as machines become faster and lighter. An efficient sensitivity analysis of residual vibration with respect to structural or operational parameters is indispensable for using a gradient-based optimization algorithm, which reduces the residual vibration in either an active or a passive way. In this paper, an integrated quadratic performance index is used as the measure of the residual vibration, since it globally measures the residual vibration response and its calculation can be simplified greatly with the Lyapunov equation. Several sensitivity analysis approaches for the performance index were developed based on the assumption that the initial excitations of the residual vibration were given and independent of the structural design. Since the excitations resulting from the impact load often depend on the structural design, this paper aims to propose a new, efficient sensitivity analysis method for the residual vibration of structures subject to impacts that accounts for this dependence. The new method is developed by combining two existing methods and using the adjoint variable approach. Three numerical examples are carried out and demonstrate the accuracy of the proposed method. The numerical results show that the dependence of the initial excitations on the structural design variables may strongly affect the accuracy of the sensitivities.
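A sketch of the simplification mentioned above: for a stable linear system x' = Ax, the integrated quadratic index J equals x0' P x0, where P solves the Lyapunov equation A'P + PA + Q = 0 (the damped-oscillator values are illustrative):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.integrate import solve_ivp

# Lightly damped oscillator x' = A x; the residual vibration after an
# impact is measured by J = integral of x' Q x dt = x0' P x0, with P
# solving A'P + P A + Q = 0 (values are illustrative).
wn, zeta = 2.0, 0.05
A = np.array([[0.0, 1.0],
              [-wn ** 2, -2.0 * zeta * wn]])
Q = np.diag([1.0, 0.1])

P = solve_continuous_lyapunov(A.T, -Q)
x0 = np.array([0.0, 1.0])        # impact-induced initial velocity
print("Lyapunov index J =", x0 @ P @ x0)

# Crude time-domain check of the same integral.
sol = solve_ivp(lambda t, x: A @ x, (0.0, 200.0), x0, max_step=0.01)
vals = np.array([x @ Q @ x for x in sol.y.T])
print("numerical check  ~",
      np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(sol.t)))
```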
IATA for skin sensitization potential – 1 out of 2 or 2 out of 3? ...
To meet EU regulatory requirements and to avoid or minimize animal testing, there is a need for non-animal methods to assess skin sensitization potential. Given the complexity of the skin sensitization endpoint, there is an expectation that integrated testing and assessment approaches (IATA) will need to be developed which rely on assays representing key events in the pathway. Three non-animal assays have been formally validated: the direct peptide reactivity assay (DPRA), the KeratinoSensTM assay and the h-CLAT assay. At the same time, there have been many efforts to develop IATA with the “2 out of 3” approach attracting much attention whereby a chemical is classified on the basis of the majority outcome. A set of 271 chemicals with mouse, human and non-animal sensitization test data was evaluated to compare the predictive performances of the 3 individual non-animal assays, their binary combinations and the ‘2 out of 3’ approach. The analysis revealed that the most predictive approach was to use both the DPRA and h-CLAT: 1. Perform DPRA – if positive, classify as a sensitizer; 2. If negative, perform h-CLAT – a positive outcome denotes a sensitizer, a negative, a non-sensitizer. With this approach, 83% (LLNA) and 93% (human) of the non-sensitizer predictions were correct, in contrast to the ‘2 out of 3’ approach which had 69% (LLNA) and 79% (human) of non-sensitizer predictions correct. The views expressed are those of the authors and do not ne
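The best-performing sequential strategy reduces to a few lines of logic; a minimal sketch with hypothetical boolean assay outcomes:

```python
def classify_skin_sensitizer(dpra_positive, h_clat_positive=None):
    # Sequential strategy: classify on DPRA first; only when DPRA is
    # negative is an h-CLAT result required to decide.
    if dpra_positive:
        return "sensitizer"
    if h_clat_positive is None:
        raise ValueError("DPRA negative: an h-CLAT result is required")
    return "sensitizer" if h_clat_positive else "non-sensitizer"

print(classify_skin_sensitizer(True))                           # sensitizer
print(classify_skin_sensitizer(False, h_clat_positive=False))   # non-sensitizer
```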
Assessing geomorphic sensitivity in relation to river capacity for adjustment
NASA Astrophysics Data System (ADS)
Reid, H. E.; Brierley, G. J.
2015-12-01
River sensitivity describes the nature and rate of channel adjustments. An approach to analysis of geomorphic river sensitivity outlined in this paper relates potential sensitivity based on the expected capacity of adjustment for a river type to the recent history of channel adjustment. This approach was trialled to assess low, moderate and high geomorphic sensitivity for four different types of river (10 reaches in total) along the Lower Tongariro River, North Island, New Zealand. Building upon the River Styles framework, river types were differentiated based upon valley setting (width and confinement), channel planform, geomorphic unit assemblages and bed material size. From this, the behavioural regime and potential for adjustment (type and extent) were determined. Historical maps and aerial photographs were geo-rectified and the channel planform digitised to assess channel adjustments for each reach from 1928 to 2007. Floodplain width controlled by terraces, exerted a strong influence upon reach scale sensitivity for the partly-confined, wandering, cobble-bed river. Although forced boundaries occur infrequently, the width of the active channel zone is constrained. An unconfined braided river reach directly downstream of the terrace-confined section was the most geomorphically sensitive reach. The channel in this reach adjusted recurrently to sediment inputs that were flushed through more confined, better connected upstream reaches. A meandering, sand-bed river in downstream reaches has exhibited negligible rates of channel migration. However, channel narrowing in this reach and the associated delta indicate that the system is approaching a threshold condition, beyond which channel avulsion is likely to occur. As this would trigger more rapid migration, this reach is considered to be more geomorphically sensitive than analysis of its low migration rate alone would indicate. This demonstrates how sensitivity is fashioned both by the behavioural regime of a reach and flow/sediment input from upstream. The approach to assess geomorphic river sensitivity outlined here could support 'room to move' or 'freedom space' approaches to river management by relating likely channel adjustments for the type of river under consideration to the area of land that is required to contain 'natural' patterns and rates of geomorphic functionality.
NASA Astrophysics Data System (ADS)
Yun, Wanying; Lu, Zhenzhou; Jiang, Xian
2018-06-01
To efficiently execute variance-based global sensitivity analysis, the law of total variance over successive non-overlapping intervals is first proved; on this basis, an efficient space-partition sampling-based approach is proposed in this paper. By partitioning the sample points of the output into different subsets according to different inputs, the proposed approach can efficiently evaluate all the main effects concurrently from one group of sample points. In addition, there is no need to optimize the partition scheme in the proposed approach. The maximum length of the subintervals is decreased by increasing the number of sample points of the model input variables, which guarantees the convergence condition of the space-partition approach. Furthermore, a new interpretation of the idea of partitioning is illuminated from the perspective of the variance ratio function. Finally, three test examples and one engineering application are employed to demonstrate the accuracy, efficiency and robustness of the proposed approach.
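A brute-force illustration of the space-partition idea, not the authors' algorithm: a single sample set is partitioned along each input in turn, and by the law of total variance the main effect is approximately the variance of the bin-conditional means over the total variance (the Ishigami-style test function is a standard assumption):

```python
import numpy as np

rng = np.random.default_rng(5)

def model(x):
    # Ishigami-like test function (standard GSA benchmark).
    return (np.sin(x[:, 0]) + 7 * np.sin(x[:, 1]) ** 2
            + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

n, n_bins = 200000, 50
X = rng.uniform(-np.pi, np.pi, size=(n, 3))
Y = model(X)
varY = Y.var()

# One sample set serves all inputs: partition the samples along each X_i
# and apply the law of total variance, S_i ~ Var(E[Y | X_i]) / Var(Y).
edges = np.linspace(-np.pi, np.pi, n_bins + 1)
for i in range(3):
    idx = np.digitize(X[:, i], edges[1:-1])
    means = np.array([Y[idx == b].mean() for b in range(n_bins)])
    counts = np.array([(idx == b).sum() for b in range(n_bins)])
    S_i = np.sum(counts * (means - Y.mean()) ** 2) / (n * varY)
    print(f"S{i + 1} ~ {S_i:.3f}")
```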
A framework for sensitivity analysis of decision trees.
Kamiński, Bogumił; Jakubczyk, Michał; Szufel, Przemysław
2018-01-01
In the paper, we consider sequential decision problems with uncertainty, represented as decision trees. Sensitivity analysis is always a crucial element of decision making and in decision trees it often focuses on probabilities. In the stochastic model considered, the user often has only limited information about the true values of probabilities. We develop a framework for performing sensitivity analysis of optimal strategies accounting for this distributional uncertainty. We design this robust optimization approach in an intuitive and not overly technical way, to make it simple to apply in daily managerial practice. The proposed framework allows for (1) analysis of the stability of the expected-value-maximizing strategy and (2) identification of strategies which are robust with respect to pessimistic/optimistic/mode-favoring perturbations of probabilities. We verify the properties of our approach in two cases: (a) probabilities in a tree are the primitives of the model and can be modified independently; (b) probabilities in a tree reflect some underlying, structural probabilities, and are interrelated. We provide a free software tool implementing the methods described.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Newman, James C., III; Barnwell, Richard W.
1997-01-01
A three-dimensional unstructured grid approach to aerodynamic shape sensitivity analysis and design optimization has been developed and is extended to model geometrically complex configurations. The advantage of unstructured grids (when compared with a structured-grid approach) is their inherent ability to discretize irregularly shaped domains with greater efficiency and less effort. Hence, this approach is ideally suited for geometrically complex configurations of practical interest. In this work the nonlinear Euler equations are solved using an upwind, cell-centered, finite-volume scheme. The discrete, linearized systems which result from this scheme are solved iteratively by a preconditioned conjugate-gradient-like algorithm known as GMRES for the two-dimensional geometry and a Gauss-Seidel algorithm for the three-dimensional; similar procedures are used to solve the accompanying linear aerodynamic sensitivity equations in incremental iterative form. As shown, this particular form of the sensitivity equation makes large-scale gradient-based aerodynamic optimization possible by taking advantage of memory efficient methods to construct exact Jacobian matrix-vector products. Simple parameterization techniques are utilized for demonstrative purposes. Once the surface has been deformed, the unstructured grid is adapted by considering the mesh as a system of interconnected springs. Grid sensitivities are obtained by differentiating the surface parameterization and the grid adaptation algorithms with ADIFOR (which is an advanced automatic-differentiation software tool). To demonstrate the ability of this procedure to analyze and design complex configurations of practical interest, the sensitivity analysis and shape optimization has been performed for a two-dimensional high-lift multielement airfoil and for a three-dimensional Boeing 747-200 aircraft.
NASA Technical Reports Server (NTRS)
Aires, Filipe; Rossow, William B.; Hansen, James E. (Technical Monitor)
2001-01-01
A new approach is presented for the analysis of feedback processes in a nonlinear dynamical system by observing its variations. The new methodology consists of statistical estimates of the sensitivities between all pairs of variables in the system based on a neural network model of the dynamical system. The model can then be used to estimate the instantaneous, multivariate and nonlinear sensitivities, which are shown to be essential for the analysis of the feedback processes involved in the dynamical system. The method is described and tested on synthetic data from the low-order Lorenz circulation model, where the correct sensitivities can be evaluated analytically.
Pseudotargeted MS Method for the Sensitive Analysis of Protein Phosphorylation in Protein Complexes.
Lyu, Jiawen; Wang, Yan; Mao, Jiawei; Yao, Yating; Wang, Shujuan; Zheng, Yong; Ye, Mingliang
2018-05-15
In this study, we presented an enrichment-free approach for the sensitive analysis of protein phosphorylation in minute amounts of samples, such as purified protein complexes. This method takes advantage of the high sensitivity of parallel reaction monitoring (PRM). Specifically, low confident phosphopeptides identified from the data-dependent acquisition (DDA) data set were used to build a pseudotargeted list for PRM analysis to allow the identification of additional phosphopeptides with high confidence. The development of this targeted approach is very easy as the same sample and the same LC-system were used for the discovery and the targeted analysis phases. No sample fractionation or enrichment was required for the discovery phase which allowed this method to analyze minute amount of sample. We applied this pseudotargeted MS method to quantitatively examine phosphopeptides in affinity purified endogenous Shc1 protein complexes at four temporal stages of EGF signaling and identified 82 phospho-sites. To our knowledge, this is the highest number of phospho-sites identified from the protein complexes. This pseudotargeted MS method is highly sensitive in the identification of low abundance phosphopeptides and could be a powerful tool to study phosphorylation-regulated assembly of protein complex.
Lee, Yeonok; Wu, Hulin
2012-01-01
Differential equation models are widely used for the study of natural phenomena in many fields. The study usually involves unknown factors such as initial conditions and/or parameters. It is important to investigate the impact of unknown factors (parameters and initial conditions) on model outputs in order to better understand the system the model represents. Apportioning the uncertainty (variation) of output variables of a model according to the input factors is referred to as sensitivity analysis. In this paper, we focus on the global sensitivity analysis of ordinary differential equation (ODE) models over a time period using the multivariate adaptive regression spline (MARS) as a meta-model, based on the concept of the variance of conditional expectation (VCE). We suggest evaluating the VCE analytically using the MARS model structure of univariate tensor-product functions, which is more computationally efficient. Our simulation studies show that the MARS model approach performs very well and helps to significantly reduce the computational cost. We present an application example of sensitivity analysis of ODE models for influenza infection to further illustrate the usefulness of the proposed method.
An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1991-01-01
The three-dimensional quasi-analytical sensitivity analysis and the ancillary driver programs needed to carry out the studies and perform comparisons are developed. The code is essentially contained in one unified package which includes the following: (1) a three-dimensional transonic wing analysis program (ZEBRA); (2) a quasi-analytical portion which determines the matrix elements in the quasi-analytical equations; (3) a method for computing the sensitivity coefficients from the resulting quasi-analytical equations; (4) a package to determine, for comparison purposes, sensitivity coefficients via the finite-difference approach; and (5) a graphics package.
NASA Astrophysics Data System (ADS)
Núñez, M.; Robie, T.; Vlachos, D. G.
2017-10-01
Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
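A minimal sketch of the likelihood ratio (score function) estimator on a toy stochastic model is given below; it is not the KMC implementation described in the paper, and the exponential waiting-time model and rate constant are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stochastic model: waiting time T ~ Exponential(rate=k); observable f(T) = T.
# The likelihood ratio estimator of d E[f(T)] / dk uses the score
# d log p(T; k) / dk = 1/k - T, so no re-simulation at perturbed rate constants is needed.
k = 2.0
n = 200_000
T = rng.exponential(scale=1.0 / k, size=n)

f = T
score = 1.0 / k - T
lr_estimate = np.mean(f * score)            # likelihood ratio (score function) estimate
analytical = -1.0 / k ** 2                  # d/dk E[T] = d/dk (1/k)

print(f"LR estimate: {lr_estimate:.4f}, analytical: {analytical:.4f}")
```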
An approach to measure parameter sensitivity in watershed ...
Hydrologic responses vary spatially and temporally according to watershed characteristics. In this study, the hydrologic models that we developed earlier for the Little Miami River (LMR) and Las Vegas Wash (LVW) watersheds were used for detailed sensitivity analyses. To compare the relative sensitivities of the hydrologic parameters of these two models, we used the Normalized Root Mean Square Error (NRMSE). By combining the NRMSE index with flow duration curve analysis, we derived an approach to measure parameter sensitivities under different flow regimes. Results show that the parameters related to groundwater are highly sensitive in the LMR watershed, whereas the LVW watershed is primarily sensitive to near-surface and impervious-area parameters. High and medium flows were affected by most of the parameters, while the low-flow regime was highly sensitive to groundwater-related parameters. Moreover, our approach was found to be useful in facilitating model development and calibration. This journal article describes hydrological modeling of the effects of climate change and land use change on stream hydrology, and elucidates the importance of hydrological model construction in generating valid modeling results.
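A hedged sketch of the core calculation, NRMSE evaluated over the whole record and within flow regimes defined from the flow duration curve, is shown below; the regime percentiles and the synthetic flow series are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def nrmse(obs, sim):
    """Root mean square error normalized by the range of the observations."""
    return np.sqrt(np.mean((sim - obs) ** 2)) / (obs.max() - obs.min())

def regime_nrmse(obs, sim, high_exceed=20, low_exceed=70):
    """NRMSE evaluated separately for high, medium and low flows, with regimes
    defined by exceedance percentiles of the flow duration curve."""
    high_cut = np.percentile(obs, 100 - high_exceed)   # flows exceeded < 20% of the time
    low_cut = np.percentile(obs, 100 - low_exceed)     # flows exceeded > 70% of the time
    high, low = obs >= high_cut, obs <= low_cut
    mid = ~(high | low)
    return {"all": nrmse(obs, sim), "high": nrmse(obs[high], sim[high]),
            "medium": nrmse(obs[mid], sim[mid]), "low": nrmse(obs[low], sim[low])}

# Illustrative daily flows: "observed" vs. a perturbed-parameter simulation.
rng = np.random.default_rng(0)
obs = np.exp(rng.normal(2.0, 1.0, 365))
sim = obs * rng.normal(1.0, 0.15, 365)
print(regime_nrmse(obs, sim))
```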
Partovi, Sasan; Yuh, Roger; Pirozzi, Sara; Lu, Ziang; Couturier, Spencer; Grosse, Ulrich; Schluchter, Mark D; Nelson, Aaron; Jones, Robert; O’Donnell, James K; Faulhaber, Peter
2017-01-01
The objective of this study was to assess the ability of a quantitative software-aided approach to improve the diagnostic accuracy of 18F FDG PET for Alzheimer’s dementia over visual analysis alone. Twenty normal subjects (M:F-12:8; mean age 80.6 years) and twenty mild AD subjects (M:F-12:8; mean age 70.6 years) with 18F FDG PET scans were obtained from the ADNI database. Three blinded readers interpreted these PET images first using a visual qualitative approach and then using a quantitative software-aided approach. Images were classified on two five-point scales based on normal/abnormal (1-definitely normal; 5-definitely abnormal) and presence of AD (1-definitely not AD; 5-definitely AD). Diagnostic sensitivity, specificity, and accuracy for both approaches were compared based on the aforementioned scales. The sensitivity, specificity, and accuracy for the normal vs. abnormal readings of all readers combined were higher when comparing the software-aided vs. visual approach (sensitivity 0.93 vs. 0.83 P = 0.0466; specificity 0.85 vs. 0.60 P = 0.0005; accuracy 0.89 vs. 0.72 P<0.0001). The specificity and accuracy for absence vs. presence of AD of all readers combined were higher when comparing the software-aided vs. visual approach (specificity 0.90 vs. 0.70 P = 0.0008; accuracy 0.81 vs. 0.72 P = 0.0356). Sensitivities of the software-aided and visual approaches did not differ significantly (0.72 vs. 0.73 P = 0.74). The quantitative software-aided approach appears to improve the performance of 18F FDG PET for the diagnosis of mild AD. It may be helpful for experienced 18F FDG PET readers analyzing challenging cases. PMID:28123864
Woolgar, Alexandra; Golland, Polina; Bode, Stefan
2014-09-01
Multivoxel pattern analysis (MVPA) is a sensitive and increasingly popular method for examining differences between neural activation patterns that cannot be detected using classical mass-univariate analysis. Recently, Todd et al. ("Confounds in multivariate pattern analysis: Theory and rule representation case study", 2013, NeuroImage 77: 157-165) highlighted a potential problem for these methods: high sensitivity to confounds at the level of individual participants due to the use of directionless summary statistics. Unlike traditional mass-univariate analyses where confounding activation differences in opposite directions tend to approximately average out at group level, group level MVPA results may be driven by any activation differences that can be discriminated in individual participants. In Todd et al.'s empirical data, factoring out differences in reaction time (RT) reduced a classifier's ability to distinguish patterns of activation pertaining to two task rules. This raises two significant questions for the field: to what extent have previous multivoxel discriminations in the literature been driven by RT differences, and by what methods should future studies take RT and other confounds into account? We build on the work of Todd et al. and compare two different approaches to remove the effect of RT in MVPA. We show that in our empirical data, in contrast to that of Todd et al., the effect of RT on rule decoding is negligible, and results were not affected by the specific details of RT modelling. We discuss the meaning of, and sensitivity to, confounds in traditional and multivoxel approaches to fMRI analysis. We observe that the increased sensitivity of MVPA comes at the price of reduced specificity, meaning that these methods in particular call for careful consideration of what differs between our conditions of interest. We conclude that the additional complexity of the experimental design, analysis and interpretation needed for MVPA is still not a reason to favour a less sensitive approach. Copyright © 2014 Elsevier Inc. All rights reserved.
Van Dessel, E; Fierens, K; Pattyn, P; Van Nieuwenhove, Y; Berrevoet, F; Troisi, R; Ceelen, W
2009-01-01
Approximately 5%-20% of colorectal cancer (CRC) patients present with synchronous potentially resectable liver metastatic disease. Preclinical and clinical studies suggest a benefit of the 'liver first' approach, i.e., resection of the liver metastasis followed by resection of the primary tumour. A formal decision analysis may support a rational choice between several therapy options. Survival and morbidity data were retrieved from relevant clinical studies identified by a Web of Science search. Data were entered into decision analysis software (TreeAge Pro 2009, Williamstown, MA, USA). Transition probabilities, including the risk of death from complications or disease progression associated with individual therapy options, were entered into the model. Sensitivity analysis was performed to evaluate the model's validity under a variety of assumptions. The result of the decision analysis confirms the superiority of the 'liver first' approach. Sensitivity analysis demonstrated that this conclusion is valid on condition that the mortality associated with the hepatectomy first is < 4.5%, and that the mortality of colectomy performed after hepatectomy is < 3.2%. The results of this decision analysis suggest that, in patients with synchronous resectable colorectal liver metastases, the 'liver first' approach is to be preferred. Randomized trials will be needed to confirm the results of this simulation-based analysis.
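The sketch below illustrates the general mechanics of such a decision analysis: an expected-value calculation for a staged strategy followed by a one-way sensitivity sweep that locates the mortality at which the preferred strategy changes. All probabilities are illustrative placeholders, not the values extracted from the cited literature.

```python
import numpy as np

def expected_survival(p_mort_first, p_mort_second, p_progression):
    """Expected probability of completing both resections for a staged strategy:
    survive the first operation, avoid progression, survive the second operation."""
    return (1 - p_mort_first) * (1 - p_progression) * (1 - p_mort_second)

# Illustrative placeholder probabilities (not the study values).
p_prog = 0.10                       # disease progression between stages
colectomy_first = expected_survival(p_mort_first=0.02, p_mort_second=0.04,
                                    p_progression=p_prog)

# One-way sensitivity analysis: sweep the hepatectomy-first mortality.
for p_hep in np.arange(0.0, 0.101, 0.005):
    liver_first = expected_survival(p_mort_first=p_hep, p_mort_second=0.03,
                                    p_progression=p_prog)
    if liver_first < colectomy_first:
        print(f"strategies switch once hepatectomy-first mortality exceeds {p_hep:.3f}")
        break
```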
Bennett, Katrina Eleanor; Urrego Blanco, Jorge Rolando; Jonko, Alexandra; ...
2017-11-20
The Colorado River basin is a fundamentally important river for society, ecology and energy in the United States. Streamflow estimates are often provided using modeling tools which rely on uncertain parameters; sensitivity analysis can help determine which parameters impact model results. Despite the fact that simulated flows respond to changing climate and vegetation in the basin, parameter sensitivity of the simulations under climate change has rarely been considered. In this study, we conduct a global sensitivity analysis to relate changes in runoff, evapotranspiration, snow water equivalent and soil moisture to model parameters in the Variable Infiltration Capacity (VIC) hydrologic model. Here, we combine global sensitivity analysis with a space-filling Latin Hypercube sampling of the model parameter space and statistical emulation of the VIC model to examine sensitivities to uncertainties in 46 model parameters following a variance-based approach.
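A hedged sketch of this workflow, Latin hypercube sampling, a statistical emulator, and variance-based indices evaluated on the emulator, is given below; it assumes scipy, scikit-learn and SALib are available, uses a random forest in place of the study's emulator, and the "VIC-like" function, parameter names and bounds are purely illustrative.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.ensemble import RandomForestRegressor
from SALib.sample import saltelli
from SALib.analyze import sobol

# Illustrative stand-in for a VIC run: "runoff" as a function of three soil parameters.
def vic_like(p):
    infilt, depth2, ksat = p.T
    return np.exp(-2.0 * infilt) * depth2 + 0.3 * np.log1p(ksat)

problem = {"num_vars": 3,
           "names": ["infilt", "depth2", "ksat"],
           "bounds": [[0.001, 0.5], [0.1, 3.0], [1.0, 100.0]]}

# 1) Space-filling Latin hypercube sample of the parameter space, then run the "model".
lower = [b[0] for b in problem["bounds"]]
upper = [b[1] for b in problem["bounds"]]
X_train = qmc.scale(qmc.LatinHypercube(d=3, seed=0).random(500), lower, upper)
y_train = vic_like(X_train)

# 2) Statistical emulator of the model response.
emulator = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)

# 3) Variance-based (Sobol) sensitivity indices evaluated on the cheap emulator.
X_sobol = saltelli.sample(problem, 1024)
Si = sobol.analyze(problem, emulator.predict(X_sobol))
print(dict(zip(problem["names"], np.round(Si["S1"], 2))))
```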
Gandhoke, Gurpreet S; Pease, Matthew; Smith, Kenneth J; Sekula, Raymond F
2017-09-01
To perform a cost-minimization study comparing the supraorbital and endoscopic endonasal (EEA) approach with or without craniotomy for the resection of olfactory groove meningiomas (OGMs). We built a decision tree using probabilities of gross total resection (GTR) and cerebrospinal fluid (CSF) leak rates with the supraorbital approach versus EEA with and without additional craniotomy. The cost (not charge or reimbursement) at each "stem" of this decision tree for both surgical options was obtained from our hospital's finance department. After a base case calculation, we applied plausible ranges to all parameters and carried out multiple 1-way sensitivity analyses. Probabilistic sensitivity analyses confirmed our results. The probabilities of GTR (0.8) and CSF leak (0.2) for the supraorbital craniotomy were obtained from our series of 5 patients who underwent a supraorbital approach for the resection of an OGM. The mean tumor volume was 54.6 cm3 (range, 17-94.2 cm3). Literature-reported rates of GTR (0.6) and CSF leak (0.3) with EEA were applied to our economic analysis. Supraorbital craniotomy was the preferred strategy, with an expected value of $29,423, compared with an EEA cost of $83,838. On multiple 1-way sensitivity analyses, supraorbital craniotomy remained the preferred strategy, with a minimum cost savings of $46,000 and a maximum savings of $64,000. Probabilistic sensitivity analysis found the lowest cost difference between the 2 surgical options to be $37,431. Compared with EEA, supraorbital craniotomy provides substantial cost savings in the treatment of OGMs. Given the potential differences in effectiveness between approaches, a cost-effectiveness analysis should be undertaken. Copyright © 2017 Elsevier Inc. All rights reserved.
Recent approaches for enhancing sensitivity in enantioseparations by CE.
Sánchez-Hernández, Laura; García-Ruiz, Carmen; Luisa Marina, María; Luis Crego, Antonio
2010-01-01
This article reviews the latest methodological and instrumental improvements for enhancing sensitivity in chiral analysis by CE. The review covers literature from March 2007 until May 2009, that is, the works published after the appearance of the latest review article on the same topic by Sánchez-Hernández et al. [Electrophoresis 2008, 29, 237-251]. Off-line and on-line sample treatment techniques, on-line sample preconcentration strategies based on electrophoretic and chromatographic principles, and alternative detection systems to the widely employed UV/Vis detection in CE are the most relevant approaches discussed for improving sensitivity. Microchip technologies are also included since they can open up great possibilities to achieve sensitive and fast enantiomeric separations.
Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; ...
2015-12-04
Here, effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.
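As one concrete instance of the approaches compared, the sketch below computes standardized regression coefficients (SRCs) from a linear regression of a standardized response on standardized parameter samples; the parameters and response are synthetic stand-ins for the Community Land Model quantities.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Illustrative parameter samples and a model response (e.g., a latent heat flux metric).
X = rng.uniform(size=(1000, 4))                       # 4 hydrologic parameters scaled to [0, 1]
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.2, 1000)

# Standardized regression coefficients: regress the standardized output on standardized inputs.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()
src = LinearRegression().fit(Xs, ys).coef_

print("SRCs:", np.round(src, 3))
# For (nearly) independent inputs, the sum of squared SRCs approximates the R^2 of the
# linear model, i.e. how much of the response variance the SRC measure can resolve.
print("sum of squared SRCs:", round(float(np.sum(src ** 2)), 3))
```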
Sensitivity of surface meteorological analyses to observation networks
NASA Astrophysics Data System (ADS)
Tyndall, Daniel Paul
A computationally efficient variational analysis system for two-dimensional meteorological fields is developed and described. This analysis approach is most efficient when the number of analysis grid points is much larger than the number of available observations, such as for large domain mesoscale analyses. The analysis system is developed using MATLAB software and can take advantage of multiple processors or processor cores. A version of the analysis system has been exported as a platform independent application (i.e., can be run on Windows, Linux, or Macintosh OS X desktop computers without a MATLAB license) with input/output operations handled by commonly available internet software combined with data archives at the University of Utah. The impact of observation networks on the meteorological analyses is assessed by utilizing a percentile ranking of individual observation sensitivity and impact, which is computed by using the adjoint of the variational surface assimilation system. This methodology is demonstrated using a case study of the analysis from 1400 UTC 27 October 2010 over the entire contiguous United States domain. The sensitivity of this approach to the dependence of the background error covariance on observation density is examined. Observation sensitivity and impact provide insight on the influence of observations from heterogeneous observing networks as well as serve as objective metrics for quality control procedures that may help to identify stations with significant siting, reporting, or representativeness issues.
NASA Technical Reports Server (NTRS)
Baker, John; Thorpe, Ira
2012-01-01
Thoroughly studied classic space-based gravitational-wave mission concepts such as the Laser Interferometer Space Antenna (LISA) are based on laser-interferometry techniques. Ongoing developments in atom-interferometry techniques have spurred recently proposed alternative mission concepts. These different approaches can be understood on a common footing. We present a comparative analysis of how each type of instrument responds to some of the noise sources which may limit gravitational-wave mission concepts. Sensitivity to laser frequency instability is essentially the same for either approach. Spacecraft acceleration reference stability sensitivities are different, allowing smaller spacecraft separations in the atom interferometry approach, but acceleration noise requirements are nonetheless similar. Each approach has distinct additional measurement noise issues.
Gradient-Based Aerodynamic Shape Optimization Using ADI Method for Large-Scale Problems
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Baysal, Oktay
1997-01-01
A gradient-based shape optimization methodology, intended for practical three-dimensional aerodynamic applications, has been developed. It is based on quasi-analytical sensitivities. The flow analysis is rendered by a fully implicit, finite volume formulation of the Euler equations. The aerodynamic sensitivity equation is solved using the alternating-direction-implicit (ADI) algorithm for memory efficiency. A flexible wing geometry model, based on surface parameterization and planform schedules, is utilized. The present methodology and its components have been tested via several comparisons. Initially, the flow analysis for a wing is compared with those obtained using an unfactored, preconditioned conjugate gradient approach (PCG) and an extensively validated CFD code. Then, the sensitivities computed with the present method have been compared with those obtained using the finite-difference and the PCG approaches. Effects of grid refinement and convergence tolerance on the analysis and shape optimization have been explored. Finally, the new procedure has been demonstrated in the design of a cranked-arrow wing at Mach 2.4. Despite the expected increase in computational time, the results indicate that shape optimization problems that require large numbers of grid points can be resolved with a gradient-based approach.
Sensitivity-Uncertainty Based Nuclear Criticality Safety Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
2016-09-20
These are slides from a seminar given to the University of Mexico Nuclear Engineering Department. Whisper is a statistical analysis package developed to support nuclear criticality safety validation. It uses the sensitivity profile data for an application as computed by MCNP6 along with covariance files for the nuclear data to determine a baseline upper-subcritical-limit for the application. Whisper and its associated benchmark files are developed and maintained as part of MCNP6, and will be distributed with all future releases of MCNP6. Although sensitivity-uncertainty methods for NCS validation have been under development for 20 years, continuous-energy Monte Carlo codes such as MCNP could not determine the required adjoint-weighted tallies for sensitivity profiles. The recent introduction of the iterated fission probability method into MCNP led to the rapid development of sensitivity analysis capabilities for MCNP6 and the development of Whisper. Sensitivity-uncertainty based methods represent the future for NCS validation – making full use of today’s computer power to codify past approaches based largely on expert judgment. Validation results are defensible, auditable, and repeatable as needed with different assumptions and process models. The new methods can supplement, support, and extend traditional validation approaches.
Computer simulation of the last support phase of the long jump.
Chow, John W; Hay, James G
2005-01-01
The purpose was to examine the interacting roles played by the approach velocity, the explosive strength (represented by vertical ground reaction force [VGRF]), and the change in angular momentum about a transverse axis through the jumper's center of mass (deltaHzz) during the last support phase of the long jump, using a computer simulation technique. A two-dimensional inverted-pendulum-plus-foot segment model was developed to simulate the last support phase. Using a reference jump derived from a jump performance reported in the literature, the effects of varying individual parameters were studied using sensitivity analyses. In each sensitivity analysis, the kinematic characteristics of the longest jumps with the deltaHzz considered and not considered when the parameter of interest was altered were noted. A sensitivity analysis examining the influence of altering both approach velocity and VGRF at the same time was also conducted. The major findings were that 1) the jump distance was more sensitive to changes in approach velocity (e.g., a 10% increase yielded a 10.0% increase in jump distance) than to changes in the VGRF (e.g., a 10% increase yielded a 7.2% increase in jump distance); 2) the change in jump distance was relatively large when both the approach velocity and VGRF were altered (e.g., a 10% increase in both parameters yielded a 20.4% increase in jump distance), suggesting that these two parameters are not independent factors in determining the jump distance; and 3) the jump distance was overestimated if the deltaHzz was not considered in the analysis.
NASA Astrophysics Data System (ADS)
Döpking, Sandra; Plaisance, Craig P.; Strobusch, Daniel; Reuter, Karsten; Scheurer, Christoph; Matera, Sebastian
2018-01-01
In the last decade, first-principles-based microkinetic modeling has been developed into an important tool for a mechanistic understanding of heterogeneous catalysis. A commonly known, but hitherto barely analyzed issue in this kind of modeling is the presence of sizable errors from the use of approximate Density Functional Theory (DFT). We here address the propagation of these errors to the catalytic turnover frequency (TOF) by global sensitivity and uncertainty analysis. Both analyses require the numerical quadrature of high-dimensional integrals. To achieve this efficiently, we utilize and extend an adaptive sparse grid approach and exploit the confinement of the strongly non-linear behavior of the TOF to local regions of the parameter space. We demonstrate the methodology on a model of the oxygen evolution reaction at the Co3O4 (110)-A surface, using a maximum entropy error model that imposes nothing but reasonable bounds on the errors. For this setting, the DFT errors lead to an absolute uncertainty of several orders of magnitude in the TOF. We nevertheless find that it is still possible to draw conclusions from such uncertain models about the atomistic aspects controlling the reactivity. A comparison with derivative-based local sensitivity analysis instead reveals that this more established approach provides incomplete information. Since the adaptive sparse grids allow for the evaluation of the integrals with only a modest number of function evaluations, this approach opens the way for a global sensitivity analysis of more complex models, for instance, models based on kinetic Monte Carlo simulations.
Sensitivity of wildlife habitat models to uncertainties in GIS data
NASA Technical Reports Server (NTRS)
Stoms, David M.; Davis, Frank W.; Cogan, Christopher B.
1992-01-01
Decision makers need to know the reliability of output products from GIS analysis. For many GIS applications, it is not possible to compare these products to an independent measure of 'truth'. Sensitivity analysis offers an alternative means of estimating reliability. In this paper, we present a GIS-based statistical procedure for estimating the sensitivity of wildlife habitat models to uncertainties in input data and model assumptions. The approach is demonstrated in an analysis of habitat associations derived from a GIS database for the endangered California condor. Alternative data sets were generated to compare results over a reasonable range of assumptions about several sources of uncertainty. Sensitivity analysis indicated that condor habitat associations are relatively robust, and the results have increased our confidence in our initial findings. Uncertainties and methods described in the paper have general relevance for many GIS applications.
Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation
NASA Astrophysics Data System (ADS)
Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten
2015-04-01
Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
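A minimal sketch of the bootstrap-based convergence criteria is given below: sensitivity indices are recomputed on bootstrap resamples of the model evaluations, and the stability of both the parameter ranking and the screening decision is tracked. The binned first-order estimator, sample sizes, and screening threshold are simplified placeholders rather than the estimators used in the study.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Illustrative model evaluations: 5 parameters, one scalar output.
X = rng.uniform(size=(2000, 5))
y = 4 * X[:, 0] + 2 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.3, 2000)

def indices(X, y, bins=20):
    """Simple binned first-order indices Var(E[Y|X_i]) / Var(Y) (placeholder estimator)."""
    out = []
    for i in range(X.shape[1]):
        edges = np.quantile(X[:, i], np.linspace(0, 1, bins + 1))
        which = np.clip(np.digitize(X[:, i], edges[1:-1]), 0, bins - 1)
        cond_means = np.array([y[which == b].mean() for b in range(bins)])
        out.append(np.var(cond_means) / np.var(y))
    return np.array(out)

base = indices(X, y)
threshold = 0.05                                      # screening threshold (placeholder)
rank_corrs, screen_agree = [], []
for _ in range(100):
    resample = rng.integers(0, len(y), len(y))        # bootstrap resample of the evaluations
    boot = indices(X[resample], y[resample])
    rank_corrs.append(spearmanr(base, boot)[0])       # stability of the parameter ranking
    screen_agree.append(np.mean((boot < threshold) == (base < threshold)))

print(f"ranking convergence (mean Spearman): {np.mean(rank_corrs):.2f}")
print(f"screening agreement with the full sample: {np.mean(screen_agree):.2f}")
```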
NASA Astrophysics Data System (ADS)
Šilhán, Karel; Stoffel, Markus
2015-05-01
Different approaches and thresholds have been utilized in the past to date landslides with growth ring series of disturbed trees. Past work was mostly based on conifer species because of their well-defined ring boundaries and the easy identification of compression wood after stem tilting. More recently, work has been expanded to include broad-leaved trees, which are thought to produce fewer and less evident reactions after landsliding. This contribution reviews recent progress made in dendrogeomorphic landslide analysis and introduces a new approach in which landslides are dated via ring eccentricity formed after tilting. We compare results of this new and the more conventional approaches. In addition, the paper also addresses tree sensitivity to landslide disturbance as a function of tree age and trunk diameter using 119 common beech (Fagus sylvatica L.) and 39 Crimean pine (Pinus nigra ssp. pallasiana) trees growing on two landslide bodies. The landslide events reconstructed with the classical approach (reaction wood) also appear as events in the eccentricity analysis, but the inclusion of eccentricity clearly allowed for more (162%) landslides to be detected in the tree-ring series. With respect to tree sensitivity, conifers and broad-leaved trees show the strongest reactions to landslides at ages between 40 and 60 years, with a second phase of increased sensitivity in P. nigra at ages of ca. 120-130 years. These phases of highest sensitivity correspond with trunk diameters at breast height of 6-8 and 18-22 cm, respectively (P. nigra). This study thus calls for the inclusion of eccentricity analyses in future landslide reconstructions as well as for the selection of trees belonging to different age and diameter classes to allow for a well-balanced and more complete reconstruction of past events.
Dynamic analysis of process reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shadle, L.J.; Lawson, L.O.; Noel, S.D.
1995-06-01
The approach and methodology of conducting a dynamic analysis is presented in this poster session in order to describe how this type of analysis can be used to evaluate the operation and control of process reactors. Dynamic analysis of the PyGas{trademark} gasification process is used to illustrate the utility of this approach. PyGas{trademark} is the gasifier being developed for the Gasification Product Improvement Facility (GPIF) by Jacobs-Siffine Engineering and Riley Stoker. In the first step of the analysis, process models are used to calculate the steady-state conditions and associated sensitivities for the process. For the PyGas{trademark} gasifier, the process models are non-linear mechanistic models of the jetting fluidized-bed pyrolyzer and the fixed-bed gasifier. These process sensitivities are key input, in the form of gain parameters or transfer functions, to the dynamic engineering models.
Liu, Jianhua; Jiang, Hongbo; Zhang, Hao; Guo, Chun; Wang, Lei; Yang, Jing; Nie, Shaofa
2017-06-27
In the summer of 2014, an influenza A(H3N2) outbreak occurred in Yichang city, Hubei province, China. A retrospective study was conducted to collect and interpret hospital and epidemiological data on it using social network analysis and global sensitivity and uncertainty analyses. Results for degree (χ2=17.6619, P<0.0001) and betweenness (χ2=21.4186, P<0.0001) centrality suggested that the selection of sampling objects differed between traditional epidemiological methods and newer statistical approaches. Clique and network diagrams demonstrated that the outbreak actually consisted of two independent transmission networks. Sensitivity analysis showed that the contact coefficient (k) was the most important factor in the dynamic model. Using uncertainty analysis, we were able to better understand the properties of the outbreak and its variations over space and time. We concluded that the newer approaches were significantly more efficient for managing and controlling infectious disease outbreaks, as well as saving time and public health resources, and could be widely applied to similar local outbreaks.
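A brief sketch of the centrality calculations behind this comparison is shown below, using networkx on hypothetical contact-tracing edges; the case identifiers and links are invented for illustration.

```python
import networkx as nx

# Hypothetical contact-tracing edges between cases (identifiers are invented).
edges = [("case1", "case2"), ("case1", "case3"), ("case2", "case4"),
         ("case5", "case6"), ("case6", "case7"), ("case6", "case8")]
G = nx.Graph(edges)

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)

# Connected components correspond to independent transmission networks.
clusters = list(nx.connected_components(G))

for case in sorted(G.nodes):
    print(f"{case}: degree={degree[case]:.2f}, betweenness={betweenness[case]:.2f}")
print("number of transmission clusters:", len(clusters))
```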
Dorfman, David M; LaPlante, Charlotte D; Pozdnyakova, Olga; Li, Betty
2015-11-01
In our high-sensitivity flow cytometric approach for systemic mastocytosis (SM), we identified mast cell event clustering as a new diagnostic criterion for the disease. To objectively characterize mast cell gated event distributions, we performed cluster analysis using FLOCK, a computational approach to identify cell subsets in multidimensional flow cytometry data in an unbiased, automated fashion. FLOCK identified discrete mast cell populations in most cases of SM (56/75 [75%]) but only a minority of non-SM cases (17/124 [14%]). FLOCK-identified mast cell populations accounted for 2.46% of total cells on average in SM cases and 0.09% of total cells on average in non-SM cases (P < .0001) and were predictive of SM, with a sensitivity of 75%, a specificity of 86%, a positive predictive value of 76%, and a negative predictive value of 85%. FLOCK analysis provides useful diagnostic information for evaluating patients with suspected SM, and may be useful for the analysis of other hematopoietic neoplasms. Copyright© by the American Society for Clinical Pathology.
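The reported operating characteristics follow directly from the 2x2 counts given in the abstract (56 of 75 SM cases and 17 of 124 non-SM cases with FLOCK-identified mast cell populations), as the short sketch below shows.

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table."""
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn)}

# Counts from the abstract: 56/75 SM cases and 17/124 non-SM cases were flagged.
print(diagnostic_metrics(tp=56, fn=75 - 56, fp=17, tn=124 - 17))
```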
Hahn, David W; Omenetto, Nicoló
2012-04-01
The first part of this two-part review focused on the fundamental and diagnostics aspects of laser-induced plasmas, only touching briefly upon concepts such as sensitivity and detection limits and largely omitting any discussion of the vast panorama of the practical applications of the technique. Clearly a true LIBS community has emerged, which promises to quicken the pace of LIBS developments, applications, and implementations. With this second part, a more applied flavor is taken, and its intended goal is summarizing the current state-of-the-art of analytical LIBS, providing a contemporary snapshot of LIBS applications, and highlighting new directions in laser-induced breakdown spectroscopy, such as novel approaches, instrumental developments, and advanced use of chemometric tools. More specifically, we discuss instrumental and analytical approaches (e.g., double- and multi-pulse LIBS to improve the sensitivity), calibration-free approaches, hyphenated approaches in which techniques such as Raman and fluorescence are coupled with LIBS to increase sensitivity and information power, resonantly enhanced LIBS approaches, signal processing and optimization (e.g., signal-to-noise analysis), and finally applications. An attempt is made to provide an updated view of the role played by LIBS in the various fields, with emphasis on applications considered to be unique. We finally try to assess where LIBS is going as an analytical field, where in our opinion it should go, and what should still be done for consolidating the technique as a mature method of chemical analysis. © 2012 Society for Applied Spectroscopy
Quantitative methods to direct exploration based on hydrogeologic information
Graettinger, A.J.; Lee, J.; Reeves, H.W.; Dethan, D.
2006-01-01
Quantitatively Directed Exploration (QDE) approaches based on information such as model sensitivity, input data covariance and model output covariance are presented. Seven approaches for directing exploration are developed, applied, and evaluated on a synthetic hydrogeologic site. The QDE approaches evaluate input information uncertainty, subsurface model sensitivity and, most importantly, output covariance to identify the next location to sample. Spatial input parameter values and covariances are calculated with the multivariate conditional probability calculation from a limited number of samples. A variogram structure is used during data extrapolation to describe the spatial continuity, or correlation, of subsurface information. Model sensitivity can be determined by perturbing input data and evaluating output response or, as in this work, sensitivities can be programmed directly into an analysis model. Output covariance is calculated by the First-Order Second Moment (FOSM) method, which combines the covariance of input information with model sensitivity. A groundwater flow example, modeled in MODFLOW-2000, is chosen to demonstrate the seven QDE approaches. MODFLOW-2000 is used to obtain the piezometric head and the model sensitivity simultaneously. The seven QDE approaches are evaluated based on the accuracy of the modeled piezometric head after information from a QDE sample is added. For the synthetic site used in this study, the QDE approach that identifies the location of hydraulic conductivity that contributes the most to the overall piezometric head variance proved to be the best method to quantitatively direct exploration. © IWA Publishing 2006.
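A minimal sketch of the FOSM step, combining model sensitivities with the input covariance to obtain the output variance and per-parameter contributions, is shown below; the sensitivity and covariance values are illustrative, not taken from the MODFLOW-2000 example.

```python
import numpy as np

# Sensitivities of the piezometric head at one location to three hydraulic
# conductivity zones (illustrative values, standing in for MODFLOW-2000 output).
s = np.array([0.8, 0.3, 0.05])                 # dh / dK_i

# Covariance of the conductivity inputs from the conditional probability calculation
# (illustrative values).
cov_K = np.array([[0.25, 0.05, 0.00],
                  [0.05, 0.16, 0.02],
                  [0.00, 0.02, 0.09]])

# First-Order Second Moment: combine model sensitivity with input covariance.
var_head = s @ cov_K @ s
print("FOSM head variance:", round(float(var_head), 4))

# Contribution of each input to the output variance (these sum to var_head and
# indicate where additional sampling would reduce prediction uncertainty most).
contributions = s * (cov_K @ s)
print("per-parameter contributions:", np.round(contributions, 4))
```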
Mukherjee, Shalini; Yadav, Rajeev; Yung, Iris; Zajdel, Daniel P.; Oken, Barry S.
2011-01-01
Objectives: To determine 1) whether heart rate variability (HRV) was a sensitive and reliable measure in mental effort tasks carried out by healthy seniors and 2) whether non-linear approaches to HRV analysis, in addition to traditional time and frequency domain approaches, were useful to study such effects. Methods: Forty healthy seniors performed two visual working memory tasks requiring different levels of mental effort, while ECG was recorded. They underwent the same tasks and recordings two weeks later. Traditional and 13 non-linear indices of HRV including Poincaré, entropy and detrended fluctuation analysis (DFA) were determined. Results: Time domain (especially mean R-R interval/RRI), frequency domain and, among nonlinear parameters, Poincaré and DFA were the most reliable indices. Mean RRI, time domain and Poincaré were also the most sensitive to different mental effort task loads and had the largest effect size. Conclusions: Overall, linear measures were the most sensitive and reliable indices of mental effort. Among non-linear measures, Poincaré was the most reliable and sensitive, suggesting possible usefulness as an independent marker in cognitive function tasks in healthy seniors. Significance: A large number of HRV parameters were both reliable and sensitive indices of mental effort, although the simple linear methods were the most sensitive. PMID:21459665
A two-step sensitivity analysis for hydrological signatures in Jinhua River Basin, East China
NASA Astrophysics Data System (ADS)
Pan, S.; Fu, G.; Chiang, Y. M.; Xu, Y. P.
2016-12-01
Owing to model complexity and the large number of parameters, calibration and sensitivity analysis are difficult processes for distributed hydrological models. In this study, a two-step sensitivity analysis approach is proposed for analyzing the hydrological signatures in Jinhua River Basin, East China, using the Distributed Hydrology-Soil-Vegetation Model (DHSVM). A rough sensitivity analysis is first conducted to obtain preliminary influential parameters via analysis of variance, which greatly reduced the number of parameters from eighty-three to sixteen. Afterwards, the sixteen parameters are further analyzed with a variance-based global sensitivity analysis, i.e., Sobol's method, to achieve robust sensitivity rankings and parameter contributions. Parallel computing is applied to reduce the computational burden of the variance-based sensitivity analysis. The results reveal that only a few model parameters are significantly sensitive, including the rain LAI multiplier, lateral conductivity, porosity, field capacity, wilting point of clay loam, understory monthly LAI, understory minimum resistance, and root zone depths of croplands. Finally, several hydrological signatures are used to investigate the performance of DHSVM. Results show that high values of the efficiency criteria did not necessarily indicate good reproduction of the hydrological signatures. For most samples from Sobol's sensitivity analysis, water yield was simulated very well; however, the lowest and maximum annual daily runoffs were underestimated, and most of the seven-day minimum runoffs were overestimated. Nevertheless, these three signatures were still reproduced well in a number of samples. Analysis of peak flows shows that small and medium floods are simulated well, while large floods are slightly underestimated. The work in this study supports further multi-objective calibration of the DHSVM model and indicates where to improve the reliability and credibility of the model simulation.
Unger, Jakob; Schuster, Maria; Hecker, Dietmar J; Schick, Bernhard; Lohscheller, Jörg
2016-01-01
This work presents a computer-based approach to analyze the two-dimensional vocal fold dynamics of endoscopic high-speed videos, and constitutes an extension and generalization of a previously proposed wavelet-based procedure. While most approaches aim for analyzing sustained phonation conditions, the proposed method allows for a clinically adequate analysis of both dynamic as well as sustained phonation paradigms. The analysis procedure is based on a spatio-temporal visualization technique, the phonovibrogram, that facilitates the documentation of the visible laryngeal dynamics. From the phonovibrogram, a low-dimensional set of features is computed using a principal component analysis strategy that quantifies the type of vibration patterns, irregularity, lateral symmetry and synchronicity, as a function of time. Two different test bench data sets are used to validate the approach: (I) 150 healthy and pathologic subjects examined during sustained phonation. (II) 20 healthy and pathologic subjects that were examined twice: during sustained phonation and a glissando from a low to a higher fundamental frequency. In order to assess the discriminative power of the extracted features, a Support Vector Machine is trained to distinguish between physiologic and pathologic vibrations. The results for sustained phonation sequences are compared to the previous approach. Finally, the classification performance of the stationary analyzing procedure is compared to the transient analysis of the glissando maneuver. For the first test bench the proposed procedure outperformed the previous approach (proposed feature set: accuracy: 91.3%, sensitivity: 80%, specificity: 97%, previous approach: accuracy: 89.3%, sensitivity: 76%, specificity: 96%). Comparing the classification performance of the second test bench further corroborates that analyzing transient paradigms provides clear additional diagnostic value (glissando maneuver: accuracy: 90%, sensitivity: 100%, specificity: 80%, sustained phonation: accuracy: 75%, sensitivity: 80%, specificity: 70%). The incorporation of parameters describing the temporal evolution of vocal fold vibration clearly improves the automatic identification of pathologic vibration patterns. Furthermore, incorporating a dynamic phonation paradigm provides additional valuable information about the underlying laryngeal dynamics that cannot be derived from sustained conditions. The proposed generalized approach provides a better overall classification performance than the previous approach, and hence constitutes a new advantageous tool for an improved clinical diagnosis of voice disorders. Copyright © 2015 Elsevier B.V. All rights reserved.
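The feature-reduction and classification chain (principal component analysis followed by a Support Vector Machine) can be sketched as below on synthetic feature vectors standing in for phonovibrogram descriptors; this is a schematic of the general workflow, not the authors' implementation or parameter choices.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for per-recording phonovibrogram feature vectors.
n, d = 150, 60
X_healthy = rng.normal(0.0, 1.0, (n // 2, d))
X_path = rng.normal(0.4, 1.2, (n // 2, d))
X = np.vstack([X_healthy, X_path])
y = np.array([0] * (n // 2) + [1] * (n // 2))       # 0 = physiologic, 1 = pathologic

# PCA for a low-dimensional description of the patterns, then an SVM classifier.
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```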
Uncertainty quantification and sensitivity analysis with CASL Core Simulator VERA-CS
Brown, C. S.; Zhang, Hongbin
2016-05-24
Uncertainty quantification and sensitivity analysis are important for nuclear reactor safety design and analysis. A 2x2 fuel assembly core design was developed and simulated by the Virtual Environment for Reactor Applications, Core Simulator (VERA-CS) coupled neutronics and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed, and a new toolkit was created to perform uncertainty quantification and sensitivity analysis with fourteen uncertain input parameters. Furthermore, the minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the selected figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in the sensitivity analysis, and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.
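A short sketch of the three correlation-based sensitivity measures, computed between sampled inputs and a figure of merit, is given below; partial correlation is obtained here by residualizing out the other inputs, and the input names and response are illustrative rather than the VERA-CS quantities.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Illustrative samples: 3 uncertain inputs and a figure of merit (e.g., MDNBR).
X = rng.normal(size=(500, 3))                       # inlet temperature, power, flow rate
y = -0.8 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(0, 0.2, 500)

def partial_corr(X, y, j):
    """Correlation between input j and y after removing the linear effect of the other inputs."""
    others = np.delete(X, j, axis=1)
    rx = X[:, j] - LinearRegression().fit(others, X[:, j]).predict(others)
    ry = y - LinearRegression().fit(others, y).predict(others)
    return pearsonr(rx, ry)[0]

for j, name in enumerate(["inlet temperature", "power", "flow rate"]):
    print(f"{name}: Pearson={pearsonr(X[:, j], y)[0]:.2f}, "
          f"Spearman={spearmanr(X[:, j], y)[0]:.2f}, "
          f"partial={partial_corr(X, y, j):.2f}")
```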
Analysis of proteins using DIGE and MALDI mass spectrometry
In this work, the sensitivity of the quantitative proteomics approach 2D-DIGE/MS (two-Dimensional Difference Gel Electrophoresis / Mass Spectrometry) was tested by detecting decreasing amounts of a specific protein at the low-picomole and sub-picomole range. Sensitivity of the 2D-D...
Mukhtar, Hussnain; Lin, Yu-Pin; Shipin, Oleg V; Petway, Joy R
2017-07-12
This study presents an approach for obtaining realization sets of parameters for nitrogen removal in a pilot-scale waste stabilization pond (WSP) system. The proposed approach was designed for optimal parameterization, local sensitivity analysis, and global uncertainty analysis of a dynamic simulation model for the WSP by using the R software package Flexible Modeling Environment (R-FME) with the Markov chain Monte Carlo (MCMC) method. Additionally, generalized likelihood uncertainty estimation (GLUE) was integrated into the FME to evaluate the major parameters that affect the simulation outputs in the study WSP. Comprehensive modeling analysis was used to simulate and assess nine parameters and the concentrations of ON-N, NH₃-N and NO₃-N. Results indicate that the integrated FME-GLUE-based model, with good Nash-Sutcliffe coefficients (0.53-0.69) and correlation coefficients (0.76-0.83), successfully simulates the concentrations of ON-N, NH₃-N and NO₃-N. Moreover, the Arrhenius constant was the only parameter to which the simulated ON-N and NH₃-N concentrations were sensitive, whereas the Nitrosomonas growth rate, the denitrification constant, and the maximum growth rate at 20 °C were influential for the ON-N and NO₃-N simulations, as measured by the global sensitivity analysis.
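A minimal sketch of the GLUE step with a Nash-Sutcliffe behavioural criterion is shown below: parameter sets are sampled, each simulation is scored, and behavioural sets are retained and likelihood-weighted. The toy first-order decay model and the 0.5 threshold are illustrative stand-ins for the WSP nitrogen model and the R-FME implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def nse(obs, sim):
    """Nash-Sutcliffe efficiency."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Toy first-order decay model standing in for the WSP nitrogen simulation.
t = np.linspace(0, 30, 31)
def simulate(k, c0=25.0):
    return c0 * np.exp(-k * t)

obs = simulate(0.12) + rng.normal(0, 1.0, t.size)    # synthetic "observations"

# GLUE: Monte Carlo sampling, behavioural threshold, likelihood-weighted predictions.
k_samples = rng.uniform(0.01, 0.5, 5000)
scores = np.array([nse(obs, simulate(k)) for k in k_samples])
behavioural = scores > 0.5                            # behavioural threshold (illustrative)
weights = scores[behavioural] / scores[behavioural].sum()

sims = np.array([simulate(k) for k in k_samples[behavioural]])
mean_pred = weights @ sims
print(f"{behavioural.sum()} behavioural parameter sets")
print("weighted prediction at day 10:", round(float(mean_pred[t == 10][0]), 2))
```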
MUSiC—An Automated Scan for Deviations between Data and Monte Carlo Simulation
NASA Astrophysics Data System (ADS)
Meyer, Arnd
2010-02-01
A model independent analysis approach is presented, systematically scanning the data for deviations from the standard model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of event generators. The approach is sensitive to a variety of models of new physics, including those not yet thought of.
Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.
2009-01-01
We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall‐runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error‐based weighting of observation and prior information data, local sensitivity analysis, and single‐objective function nonlinear regression provides quantitative evaluation of sensitivity of the 35 model parameters to the data, identification of data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest that the predictive ability of the calibrated model is typical of hydrologic models.
Meta-analysis of diagnostic test data: a bivariate Bayesian modeling approach.
Verde, Pablo E
2010-12-30
In the last decades, the amount of published results on clinical diagnostic tests has expanded very rapidly. The counterpart to this development has been the formal evaluation and synthesis of diagnostic results. However, published results present substantial heterogeneity, and they are so far removed from the classical domain of meta-analysis that they provide a rather severe test of classical statistical methods. Recently, bivariate random effects meta-analytic methods, which model the pairs of sensitivities and specificities, have been presented from the classical point of view. In this work a bivariate Bayesian modeling approach is presented. This approach substantially extends the scope of classical bivariate methods by allowing the structural distribution of the random effects to depend on multiple sources of variability. The meta-analysis is summarized by the predictive posterior distributions for sensitivity and specificity. This new approach also allows for substantial model checking, model diagnostics, and model selection. Statistical computations are implemented in the public domain statistical software (WinBUGS and R) and illustrated with real data examples. Copyright © 2010 John Wiley & Sons, Ltd.
Behavioral Dimensions in One-Year-Olds and Dimensional Stability in Infancy.
ERIC Educational Resources Information Center
Hagekull, Berit; And Others
1980-01-01
The dimensional structure of infants' behavioral repertoire was shown to be highly stable over 3 to 15 months of age. Factor analysis of parent questionnaire data produced seven factors named Intensity/Activity, Regularity, Approach-Withdrawal, Sensory Sensitivity, Attentiveness, Manageability and Sensitivity to New Food. An eighth factor,…
Fiero, Mallorie H; Hsu, Chiu-Hsieh; Bell, Melanie L
2017-11-20
We extend the pattern-mixture approach to handle missing continuous outcome data in longitudinal cluster randomized trials, which randomize groups of individuals to treatment arms, rather than the individuals themselves. Individuals who drop out at the same time point are grouped into the same dropout pattern. We approach extrapolation of the pattern-mixture model by applying multilevel multiple imputation, which imputes missing values while appropriately accounting for the hierarchical data structure found in cluster randomized trials. To assess parameters of interest under various missing data assumptions, imputed values are multiplied by a sensitivity parameter, k, which increases or decreases imputed values. Using simulated data, we show that estimates of parameters of interest can vary widely under differing missing data assumptions. We conduct a sensitivity analysis using real data from a cluster randomized trial by increasing k until the treatment effect inference changes. By performing a sensitivity analysis for missing data, researchers can assess whether certain missing data assumptions are reasonable for their cluster randomized trial. Copyright © 2017 John Wiley & Sons, Ltd.
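A hedged sketch of the tipping-point logic is given below: missing outcomes are imputed under a MAR-style model, the imputed values are multiplied by the sensitivity parameter k, and the treatment effect is re-estimated across a grid of k. Single-level single imputation on simulated data is used here for brevity, whereas the paper applies multilevel multiple imputation to cluster randomized trial data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated trial: continuous outcome y, two arms, ~30% dropout (missing outcomes).
n = 400
arm = rng.integers(0, 2, n)
y = 1.0 + 0.5 * arm + rng.normal(0, 1.5, n)
missing = rng.random(n) < 0.3

# MAR-style single imputation: draw missing outcomes from the observed
# arm-specific distribution (the paper uses multilevel multiple imputation).
y_imp = y.copy()
for a in (0, 1):
    obs = y[(arm == a) & ~missing]
    idx = (arm == a) & missing
    y_imp[idx] = rng.normal(obs.mean(), obs.std(), idx.sum())

# Sensitivity analysis: scale the imputed values by k and re-estimate the effect.
for k in (1.0, 0.75, 0.5, 0.25, 0.0):
    y_k = y_imp.copy()
    y_k[missing] = k * y_imp[missing]
    diff = y_k[arm == 1].mean() - y_k[arm == 0].mean()
    p = stats.ttest_ind(y_k[arm == 1], y_k[arm == 0]).pvalue
    print(f"k = {k:4.2f}: estimated effect = {diff:.2f}, p = {p:.3f}")
# The analyst reports the value of k (if any) at which the inference changes.
```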
Finite Element Model Calibration Approach for Ares I-X
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Gaspar, James L.; Lazor, Daniel R.; Parks, Russell A.; Bartolotta, Paul A.
2010-01-01
Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of non-conventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pretest predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.
Alternatives for discounting in the analysis of noninferiority trials.
Snapinn, Steven M
2004-05-01
Determining the efficacy of an experimental therapy relative to placebo on the basis of an active-control noninferiority trial requires reference to historical placebo-controlled trials. The validity of the resulting comparison depends on two key assumptions: assay sensitivity and constancy. Since the truth of these assumptions cannot be verified, it seems logical to raise the standard of evidence required to declare efficacy; this concept is referred to as discounting. It is not often recognized that two common design and analysis approaches, setting a noninferiority margin and requiring preservation of a fraction of the standard therapy's effect, are forms of discounting. The noninferiority margin is a particularly poor approach, since its degree of discounting depends on an irrelevant factor. Preservation of effect is more reasonable, but it addresses only the constancy assumption, not the issue of assay sensitivity. Gaining consensus on the most appropriate approach to the design and analysis of noninferiority trials will require a common understanding of the concept of discounting.
Design sensitivity analysis of boundary element substructures
NASA Technical Reports Server (NTRS)
Kane, James H.; Saigal, Sunil; Gallagher, Richard H.
1989-01-01
The ability to reduce or condense a three-dimensional model exactly, and then iterate on this reduced-size model representing the parts of the design that are allowed to change in an optimization loop is discussed. The discussion presents the results obtained from an ongoing research effort to exploit the concept of substructuring within the structural shape optimization context using a Boundary Element Analysis (BEA) formulation. The first part contains a formulation for the exact condensation of portions of the overall boundary element model designated as substructures. The use of reduced boundary element models in shape optimization requires that structural sensitivity analysis can be performed. A reduced sensitivity analysis formulation is then presented that allows for the calculation of structural response sensitivities of both the substructured (reduced) and unsubstructured parts of the model. It is shown that this approach produces significant computational economy in the design sensitivity analysis and reanalysis process by facilitating the block triangular factorization and forward reduction and backward substitution of smaller matrices. The implementation of this formulation is discussed, and timings and accuracies of representative test cases are presented.
NASA Astrophysics Data System (ADS)
Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.
2017-05-01
Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.
NASA Astrophysics Data System (ADS)
Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.
2017-12-01
Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multi-layer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed input variables.
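The variance-decomposition machinery behind the indices described above can be illustrated with a plain Monte Carlo estimator. The sketch below is a minimal, generic implementation of first-order Sobol' indices (a Saltelli-style pick-and-freeze estimator), not the authors' hierarchical or geostatistical implementation; the model f and the toy additive example are placeholders.

    import numpy as np

    def sobol_first_order(f, d, n=100_000, seed=0):
        """Estimate first-order Sobol' indices of f on [0,1]^d."""
        rng = np.random.default_rng(seed)
        A = rng.random((n, d))
        B = rng.random((n, d))
        yA, yB = f(A), f(B)
        var = np.var(np.concatenate([yA, yB]), ddof=1)
        S = np.empty(d)
        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                        # swap in column i only
            S[i] = np.mean(yB * (f(ABi) - yA)) / var   # pick-and-freeze estimator
        return S

    # Toy additive model: variance shares should be roughly 1 : 4 : 0.25.
    f = lambda X: X[:, 0] + 2 * X[:, 1] + 0.5 * X[:, 2]
    print(sobol_first_order(f, d=3))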
NASA Astrophysics Data System (ADS)
Bashkirtseva, Irina; Ryashko, Lev; Ryazanova, Tatyana
2017-09-01
The problem of analyzing noise-induced extinction in multidimensional population systems is considered. To investigate the conditions of extinction caused by random disturbances, a new approach based on the stochastic sensitivity function technique and confidence domains is suggested and applied to a tritrophic population model of interacting prey, predator, and top predator. This approach allows us to analyze constructively the probabilistic mechanisms of the transition to noise-induced extinction from both equilibrium and oscillatory regimes of coexistence. In this analysis, a method of principal directions for reducing the dimension of confidence domains is suggested. In the dispersion of random states, the principal subspace is defined by the ratio of eigenvalues of the stochastic sensitivity matrix. A detailed analysis of two scenarios of noise-induced extinction, in dependence on the parameters of the considered tritrophic system, is carried out.
Automatic detection of DNA double strand breaks after irradiation using a γH2AX assay.
Hohmann, Tim; Kessler, Jacqueline; Grabiec, Urszula; Bache, Matthias; Vordermark, Dyrk; Dehghani, Faramarz
2018-05-01
Radiation therapy is among the most common approaches for cancer therapy, leading amongst others to DNA damage such as double strand breaks (DSB). DSB can be used as a marker for the effect of radiation on cells. For visualization and assessment of the extent of DNA damage, the γH2AX foci assay is frequently used. The analysis of the γH2AX foci assay remains complicated, as the number of γH2AX foci has to be counted. The quantification is mostly done manually, which is time consuming and leads to person-dependent variations. Therefore, we present a method to automatically analyze the number of foci inside nuclei, facilitating and quickening the analysis of DSBs in fluorescent images with high reliability. First, nuclei were detected in fluorescent images. Afterwards, the nuclei were analyzed independently from each other with a local thresholding algorithm. This approach allowed accounting for different levels of noise and detection of the foci inside the respective nucleus, using a Hough transformation to search for circles. The presented algorithm was able to correctly classify most foci in cases of "high" and "average" image quality (sensitivity >0.8) with a low rate of false positive detections (positive predictive value (PPV) >0.98). In cases of "low" image quality the approach had a decreased sensitivity (0.7-0.9), depending on the manual control counter. The PPV remained high (PPV >0.91). Compared to other automatic approaches, the presented algorithm had a higher sensitivity and PPV. The automatic foci detection algorithm was capable of detecting foci with high sensitivity and PPV, and can thus be used for automatic analysis of images of varying quality.
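The per-nucleus local thresholding step lends itself to a compact sketch. The following is a simplified stand-in for the published pipeline, assuming the nuclei have already been segmented into boolean masks; it replaces the Hough circle search with plain connected-component counting, so it only approximates the authors' method.

    import numpy as np
    from scipy import ndimage

    def count_foci(image, nucleus_masks, k=2.0):
        """Count bright foci per nucleus using a local, noise-adaptive threshold."""
        counts = []
        for mask in nucleus_masks:
            pixels = image[mask]
            thr = pixels.mean() + k * pixels.std()  # threshold set per nucleus
            foci = (image > thr) & mask             # candidate foci in this nucleus
            _, n = ndimage.label(foci)              # count connected bright spots
            counts.append(n)
        return counts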
Mutel, Christopher L; de Baan, Laura; Hellweg, Stefanie
2013-06-04
Comprehensive sensitivity analysis is a significant tool to interpret and improve life cycle assessment (LCA) models, but is rarely performed. Sensitivity analysis will increase in importance as inventory databases become regionalized, increasing the number of system parameters, and parametrized, adding complexity through variables and nonlinear formulas. We propose and implement a new two-step approach to sensitivity analysis. First, we identify parameters with high global sensitivities for further examination and analysis with a screening step, the method of elementary effects. Second, the more computationally intensive contribution to variance test is used to quantify the relative importance of these parameters. The two-step sensitivity test is illustrated on a regionalized, nonlinear case study of the biodiversity impacts from land use of cocoa production, including a worldwide cocoa products trade model. Our simplified trade model can be used for transformable commodities where one is assessing market shares that vary over time. In the case study, the highly uncertain characterization factors for the Ivory Coast and Ghana contributed more than 50% of variance for almost all countries and years examined. The two-step sensitivity test allows for the interpretation, understanding, and improvement of large, complex, and nonlinear LCA systems.
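The screening step named above, the method of elementary effects, is easy to sketch. Below is a minimal generic version on the unit hypercube (not the authors' LCA implementation); mu* (the mean absolute elementary effect) is the usual screening statistic, and f is any scalar model taking a parameter vector.

    import numpy as np

    def morris_mu_star(f, d, r=50, levels=8, seed=0):
        """Method of elementary effects: mu* per input from r random trajectories."""
        rng = np.random.default_rng(seed)
        delta = levels / (2 * (levels - 1))
        ee = np.zeros((r, d))
        for t in range(r):
            x = rng.integers(0, levels // 2, d) / (levels - 1)  # grid start point
            y0 = f(x)
            for i in rng.permutation(d):      # move one input at a time
                x[i] += delta
                y1 = f(x)
                ee[t, i] = (y1 - y0) / delta  # elementary effect of input i
                y0 = y1
        return np.abs(ee).mean(axis=0)        # large mu*: keep for variance step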
Fast computation of derivative based sensitivities of PSHA models via algorithmic differentiation
NASA Astrophysics Data System (ADS)
Leövey, Hernan; Molkenthin, Christian; Scherbaum, Frank; Griewank, Andreas; Kuehn, Nicolas; Stafford, Peter
2015-04-01
Probabilistic seismic hazard analysis (PSHA) is the preferred tool for estimating the potential ground-shaking hazard due to future earthquakes at a site of interest. A modern PSHA represents a complex framework that combines different models with possibly many inputs. Sensitivity analysis is a valuable tool for quantifying changes of a model output as inputs are perturbed, identifying critical input parameters, and obtaining insight into the model behavior. Differential sensitivity analysis relies on calculating first-order partial derivatives of the model output with respect to its inputs. Moreover, derivative-based global sensitivity measures (Sobol' & Kucherenko '09) can be used in practice to detect non-essential inputs of the models, thus restricting the focus of attention to a possibly much smaller set of inputs. Nevertheless, obtaining first-order partial derivatives of complex models with traditional approaches can be very challenging and usually increases the computational complexity linearly with the number of inputs appearing in the models. In this study we show how Algorithmic Differentiation (AD) tools can be used in a complex framework such as PSHA to successfully estimate derivative-based sensitivities, as is done in various other domains such as meteorology or aerodynamics, without significant increase in the computational complexity required for the original computations. First, we demonstrate the feasibility of the AD methodology by comparing AD-derived sensitivities to analytically derived sensitivities for a basic case of PSHA using a simple ground-motion prediction equation. In a second step, we derive sensitivities via AD for a more complex PSHA study using a ground motion attenuation relation based on a stochastic method to simulate strong motion. The presented approach is general enough to accommodate more advanced PSHA studies of higher complexity.
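The core idea of algorithmic differentiation is mechanical: propagate a derivative alongside every value. A minimal forward-mode sketch with dual numbers is shown below; the two coefficients in the toy relation are hypothetical, not taken from any published ground-motion prediction equation, and real AD tools (operator overloading or source transformation) generalize the same rules to whole codes.

    class Dual:
        """Forward-mode AD: each number carries (value, derivative)."""
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.dot + o.dot)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val,
                        self.dot * o.val + self.val * o.dot)  # product rule
        __rmul__ = __mul__

    M = Dual(6.5, 1.0)              # seed the derivative d/dM = 1
    ln_pga = 0.3 + 1.2 * M          # hypothetical linear scaling with magnitude
    print(ln_pga.val, ln_pga.dot)   # value and exact derivative (1.2)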
Computer aided analysis and optimization of mechanical system dynamics
NASA Technical Reports Server (NTRS)
Haug, E. J.
1984-01-01
The purpose is to outline a computational approach to spatial dynamics of mechanical systems that substantially enlarges the scope of consideration to include flexible bodies, feedback control, hydraulics, and related interdisciplinary effects. Design sensitivity analysis and optimization is the ultimate goal. The approach to computer generation and solution of the system dynamic equations and graphical methods for creating animations as output is outlined.
Computational methods for efficient structural reliability and reliability sensitivity analysis
NASA Technical Reports Server (NTRS)
Wu, Y.-T.
1993-01-01
This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
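The flavor of importance sampling for reliability can be conveyed with a fixed (non-adaptive) sampling density; the adaptive, incremental relocation described in the abstract is the part this sketch omits. The limit state g and the shift vector below are illustrative assumptions chosen so the exact answer is known.

    import numpy as np
    from scipy import stats

    def g(x):                        # failure when g(x) < 0; here x ~ N(0, I)
        return 4.0 - x.sum(axis=1)   # hypothetical 2-D limit state

    rng = np.random.default_rng(1)
    n, shift = 200_000, np.array([2.0, 2.0])      # sample near the failure region
    x = rng.normal(0.0, 1.0, (n, 2)) + shift
    w = np.exp(-x @ shift + 0.5 * shift @ shift)  # density ratio N(0,I)/N(shift,I)
    pf = np.mean((g(x) < 0) * w)                  # unbiased failure probability
    print(pf, stats.norm.sf(4 / np.sqrt(2)))      # compare with the exact value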
Fujarewicz, Krzysztof; Lakomiec, Krzysztof
2016-12-01
We investigate a spatial model of growth of a tumor and its sensitivity to radiotherapy. It is assumed that the radiation dose may vary in time and space, as in intensity modulated radiotherapy (IMRT). The change of the final state of the tumor depends on local differences in the radiation dose and varies with the time and the place of these local changes. This leads to the concept of a tumor's spatiotemporal sensitivity to radiation, which is a function of time and space. We show how adjoint sensitivity analysis may be applied to calculate the spatiotemporal sensitivity of the finite difference scheme resulting from the partial differential equation describing the tumor growth. We demonstrate the results of applying this approach to the tumor proliferation, invasion and response to radiotherapy (PIRT) model, and we compare the accuracy and the computational effort of the method to simple forward finite difference sensitivity analysis. Furthermore, we use the spatiotemporal sensitivity during the gradient-based optimization of the spatiotemporal radiation protocol and present results for different parameters of the model.
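The payoff of the adjoint route, one backward sweep instead of one forward solve per parameter, can be seen on a scalar stand-in for the finite difference scheme. The growth recurrence below is a toy, not the PIRT model; the analytic derivative is printed alongside as a check.

    import numpy as np

    # Forward model: x_{k+1} = x_k + h*p*x_k, objective J = x_N.
    h, N, p, x0 = 0.1, 50, 0.3, 1.0
    x = np.empty(N + 1); x[0] = x0
    for k in range(N):
        x[k + 1] = x[k] + h * p * x[k]

    # Adjoint sweep: lambda_k = lambda_{k+1} * dx_{k+1}/dx_k, run backwards.
    lam, dJdp = 1.0, 0.0                 # lambda_N = dJ/dx_N = 1
    for k in reversed(range(N)):
        dJdp += lam * h * x[k]           # explicit dependence of step k on p
        lam *= 1 + h * p                 # transpose of the step Jacobian
    print(dJdp, N * h * x0 * (1 + h * p) ** (N - 1))   # adjoint vs analytic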
Efficient sensitivity analysis method for chaotic dynamical systems
NASA Astrophysics Data System (ADS)
Liao, Haitao
2016-05-01
The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time-averaged quantities for chaotic dynamical systems. The key idea is to recast the time-averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, which forms the augmented equations of motion, is proposed to calculate the time-averaged integration variable, and the sensitivity coefficients are obtained by solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient that depends on the final state of the Lagrange multipliers. The LU factorization technique used to calculate the Lagrange multipliers leads to better performance with respect to both convergence and computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed when using the direct differentiation sensitivity analysis method.
Mukherjee, Shalini; Yadav, Rajeev; Yung, Iris; Zajdel, Daniel P; Oken, Barry S
2011-10-01
To determine (1) whether heart rate variability (HRV) was a sensitive and reliable measure in mental effort tasks carried out by healthy seniors and (2) whether non-linear approaches to HRV analysis, in addition to traditional time and frequency domain approaches, were useful to study such effects. Forty healthy seniors performed two visual working memory tasks requiring different levels of mental effort, while ECG was recorded. They underwent the same tasks and recordings 2 weeks later. Traditional and 13 non-linear indices of HRV, including Poincaré, entropy and detrended fluctuation analysis (DFA), were determined. Time domain, especially mean R-R interval (RRI), frequency domain and, among non-linear parameters, Poincaré and DFA were the most reliable indices. Mean RRI, time domain and Poincaré were also the most sensitive to different mental effort task loads and had the largest effect size. Overall, linear measures were the most sensitive and reliable indices of mental effort. Among non-linear measures, Poincaré was the most reliable and sensitive, suggesting possible usefulness as an independent marker in cognitive function tasks in healthy seniors. A large number of HRV parameters were both reliable and sensitive indices of mental effort, although the simple linear methods were the most sensitive.
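Of the nonlinear indices named above, the Poincaré descriptors are the simplest to compute: SD1 and SD2 are the dispersions of successive R-R interval pairs across and along the identity line. A minimal sketch using the standard formulas and synthetic data:

    import numpy as np

    def poincare_sd(rr):
        """SD1/SD2 of the Poincaré plot from an array of R-R intervals (ms)."""
        x, y = rr[:-1], rr[1:]                       # consecutive-beat pairs
        sd1 = np.std((y - x) / np.sqrt(2), ddof=1)   # short-term variability
        sd2 = np.std((y + x) / np.sqrt(2), ddof=1)   # long-term variability
        return sd1, sd2

    rr = 800 + 50 * np.random.default_rng(0).standard_normal(300)  # synthetic
    print(poincare_sd(rr))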
Hierarchical Nanogold Labels to Improve the Sensitivity of Lateral Flow Immunoassay
NASA Astrophysics Data System (ADS)
Serebrennikova, Kseniya; Samsonova, Jeanne; Osipov, Alexander
2018-06-01
Lateral flow immunoassay (LFIA) is a widely used express method and offers advantages such as a short analysis time, simplicity of testing and result evaluation. However, an LFIA based on gold nanospheres lacks the desired sensitivity, thereby limiting its wide application. In this study, spherical nanogold labels along with new types of nanogold labels such as gold nanopopcorns and nanostars were prepared, characterized, and applied for LFIA of the model protein antigen procalcitonin. It was found that the label with a structure close to spherical provided a more uniform distribution of specific antibodies on its surface, indicative of its suitability for this type of analysis. LFIA using gold nanopopcorns as a label allowed procalcitonin detection over a linear range of 0.5-10 ng mL-1 with a limit of detection of 0.1 ng mL-1, fivefold better than the sensitivity of the assay with gold nanospheres. Another approach to improving the sensitivity of the assay was the silver enhancement method, which was used to compare the amplification of LFIA for procalcitonin detection. The sensitivity of procalcitonin determination by this method was 10 times better than that of the conventional LFIA with a gold nanosphere label. The proposed approach of LFIA based on gold nanopopcorns improved the detection sensitivity without additional steps and avoided increased consumption of specific reagents (antibodies).
NPV Sensitivity Analysis: A Dynamic Excel Approach
ERIC Educational Resources Information Center
Mangiero, George A.; Kraten, Michael
2017-01-01
Financial analysts generally create static formulas for the computation of NPV. When they do so, however, it is not readily apparent how sensitive the value of NPV is to changes in multiple interdependent and interrelated variables. It is the aim of this paper to analyze this variability by employing a dynamic, visually graphic presentation using…
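The same one-way sensitivity idea can be reproduced outside a spreadsheet in a few lines; the cash flows below are hypothetical. Re-running the NPV formula over a grid of discount rates makes the sensitivity visible immediately, which is what the dynamic worksheet automates.

    import numpy as np

    def npv(rate, cashflows):
        """NPV with cashflows[0] at t=0 (typically the negative outlay)."""
        t = np.arange(len(cashflows))
        return float(np.sum(cashflows / (1 + rate) ** t))

    cf = np.array([-1000.0, 300, 350, 400, 450])  # hypothetical project
    for r in (0.05, 0.08, 0.11, 0.14):            # one-way sweep on the rate
        print(f"rate={r:.2f}  NPV={npv(r, cf):8.1f}")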
Kalra, Tarandeep S.; Aretxabaleta, Alfredo; Seshadri, Pranay; Ganju, Neil K.; Beudin, Alexis
2017-01-01
Coastal hydrodynamics can be greatly affected by the presence of submerged aquatic vegetation. The effect of vegetation has been incorporated into the Coupled-Ocean-Atmosphere-Wave-Sediment Transport (COAWST) Modeling System. The vegetation implementation includes the plant-induced three-dimensional drag, in-canopy wave-induced streaming, and the production of turbulent kinetic energy by the presence of vegetation. In this study, we evaluate the sensitivity of the flow and wave dynamics to vegetation parameters using Sobol' indices and a least squares polynomial approach referred to as the Effective Quadratures method. This method reduces the number of simulations needed for evaluating Sobol' indices and provides a robust, practical, and efficient approach for parameter sensitivity analysis. The evaluation of Sobol' indices shows that kinetic energy, turbulent kinetic energy, and water level changes are affected by plant density, height, and to a certain degree, diameter. Wave dissipation is mostly dependent on the variation in plant density. Performing sensitivity analyses for the vegetation module in COAWST provides guidance for future observational and modeling work to optimize efforts and reduce exploration of parameter space.
Aircraft optimization by a system approach: Achievements and trends
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1992-01-01
Recently emerging methodology for optimal design of aircraft treated as a system of interacting physical phenomena and parts is examined. The methodology is found to coalesce into methods for hierarchic, non-hierarchic, and hybrid systems all dependent on sensitivity analysis. A separate category of methods has also evolved independent of sensitivity analysis, hence suitable for discrete problems. References and numerical applications are cited. Massively parallel computer processing is seen as enabling technology for practical implementation of the methodology.
Development and application of optimum sensitivity analysis of structures
NASA Technical Reports Server (NTRS)
Barthelemy, J. F. M.; Hallauer, W. L., Jr.
1984-01-01
The research focused on developing an algorithm applying optimum sensitivity analysis for multilevel optimization. The research efforts have been devoted to assisting NASA Langley's Interdisciplinary Research Office (IRO) in the development of a mature methodology for a multilevel approach to the design of complex (large and multidisciplinary) engineering systems. An effort was undertaken to identify promising multilevel optimization algorithms. In the current reporting period, the computer program generating baseline single level solutions was completed and tested out.
Moment-based metrics for global sensitivity analysis of hydrological systems
NASA Astrophysics Data System (ADS)
Dell'Oca, Aronne; Riva, Monica; Guadagnini, Alberto
2017-12-01
We propose new metrics to assist global sensitivity analysis, GSA, of hydrological and Earth systems. Our approach allows assessing the impact of uncertain parameters on main features of the probability density function, pdf, of a target model output, y. These include the expected value of y, the spread around the mean and the degree of symmetry and tailedness of the pdf of y. Since reliable assessment of higher-order statistical moments can be computationally demanding, we couple our GSA approach with a surrogate model, approximating the full model response at a reduced computational cost. Here, we consider the generalized polynomial chaos expansion (gPCE), other model reduction techniques being fully compatible with our theoretical framework. We demonstrate our approach through three test cases, including an analytical benchmark, a simplified scenario mimicking pumping in a coastal aquifer and a laboratory-scale conservative transport experiment. Our results allow ascertaining which parameters can impact some moments of the model output pdf while being uninfluential to others. We also investigate the error associated with the evaluation of our sensitivity metrics by replacing the original system model through a gPCE. Our results indicate that the construction of a surrogate model with increasing level of accuracy might be required depending on the statistical moment considered in the GSA. The approach is fully compatible with (and can assist the development of) analysis techniques employed in the context of reduction of model complexity, model calibration, design of experiment, uncertainty quantification and risk assessment.
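The metrics sketched above condition the output pdf on one fixed input at a time and track how its moments move. A brute-force double-loop version (without the gPCE surrogate that makes the paper's approach affordable) looks like this; the normalizations assume a nonzero mean and are illustrative.

    import numpy as np

    def moment_shift_indices(f, d, n_outer=64, n_inner=2000, seed=0):
        """Average shift of the output mean/variance when each input is fixed."""
        rng = np.random.default_rng(seed)
        y0 = f(rng.random((100_000, d)))
        mu, var = y0.mean(), y0.var()
        s_mean, s_var = np.zeros(d), np.zeros(d)
        for i in range(d):
            for xi in rng.random(n_outer):        # condition on x_i = xi
                X = rng.random((n_inner, d))
                X[:, i] = xi
                y = f(X)
                s_mean[i] += abs(y.mean() - mu)
                s_var[i] += abs(y.var() - var)
        return s_mean / (n_outer * abs(mu)), s_var / (n_outer * var)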
Model-based POD study of manual ultrasound inspection and sensitivity analysis using metamodel
NASA Astrophysics Data System (ADS)
Ribay, Guillemette; Artusi, Xavier; Jenson, Frédéric; Reece, Christopher; Lhuillier, Pierre-Emile
2016-02-01
The reliability of NDE can be quantified by using the Probability of Detection (POD) approach. Former studies have shown the potential of the model-assisted POD (MAPOD) approach to replace expensive experimental determination of POD curves. In this paper, we make use of CIVA software to determine POD curves for a manual ultrasonic inspection of a heavy component, for which a whole experimental POD campaign was not available. The influential parameters were determined by expert analysis. The semi-analytical models used in CIVA for wave propagation and beam-defect interaction have been validated in the range of variation of the influential parameters by comparison with finite element modelling (Athena). The POD curves are computed for "hit/miss" and "â versus a" analyses. The validity of the Berens hypotheses is evaluated with statistical tools. A sensitivity study is performed to measure the relative influence of parameters on the defect response amplitude variance, using the Sobol sensitivity index. A metamodel is also built to reduce computing cost and enhance the precision of the estimated index.
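The "â versus a" analysis mentioned above reduces, in the standard Berens formulation, to a linear fit of log signal amplitude against log defect size plus Gaussian scatter; the POD curve is then the probability that the predicted response exceeds the decision threshold. A minimal sketch under those assumptions:

    import numpy as np
    from scipy import stats

    def pod_curve(a, ahat, a_th, a_grid):
        """Berens 'â versus a' POD from flaw sizes a and responses ahat."""
        X = np.column_stack([np.ones_like(a), np.log(a)])
        beta, *_ = np.linalg.lstsq(X, np.log(ahat), rcond=None)
        tau = (np.log(ahat) - X @ beta).std(ddof=2)      # residual scatter
        mu = beta[0] + beta[1] * np.log(a_grid)          # mean response vs size
        return stats.norm.sf((np.log(a_th) - mu) / tau)  # P(response > threshold)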
Schlattmann, Peter; Verba, Maryna; Dewey, Marc; Walther, Mario
2015-01-01
Bivariate linear and generalized linear random effects are frequently used to perform a diagnostic meta-analysis. The objective of this article was to apply a finite mixture model of bivariate normal distributions that can be used for the construction of componentwise summary receiver operating characteristic (sROC) curves. Bivariate linear random effects and a bivariate finite mixture model are used. The latter model is developed as an extension of a univariate finite mixture model. Two examples, computed tomography (CT) angiography for ruling out coronary artery disease and procalcitonin as a diagnostic marker for sepsis, are used to estimate mean sensitivity and mean specificity and to construct sROC curves. The suggested approach of a bivariate finite mixture model identifies two latent classes of diagnostic accuracy for the CT angiography example. Both classes show high sensitivity but mainly two different levels of specificity. For the procalcitonin example, this approach identifies three latent classes of diagnostic accuracy. Here, sensitivities and specificities differ such that sensitivity increases with decreasing specificity. Additionally, the model is used to construct componentwise sROC curves and to classify individual studies. The proposed method offers an alternative approach to modeling between-study heterogeneity in a diagnostic meta-analysis. Furthermore, it is possible to construct sROC curves even if a positive correlation between sensitivity and specificity is present.
Pagès, Pierre-Benoit; Delpy, Jean-Philippe; Orsini, Bastien; Gossot, Dominique; Baste, Jean-Marc; Thomas, Pascal; Dahan, Marcel; Bernard, Alain
2016-04-01
Video-assisted thoracoscopic surgery (VATS) lobectomy has recently become the recommended approach for stage I non-small cell lung cancer. However, these guidelines are not based on any large randomized controlled trial. Our study used propensity scores and a sensitivity analysis to compare VATS lobectomy with open thoracotomy. From 2005 to 2012, 24,811 patients (95.1%) were operated on by open thoracotomy and 1,278 (4.9%) by VATS. The end points were 30-day postoperative death, postoperative complications, hospital stay, overall survival, and disease-free survival. Two propensity score analyses were performed, matching and inverse probability of treatment weighting, along with one sensitivity analysis to unmask potential hidden bias. A subgroup analysis was performed to compare "high-risk" with "low-risk" patients. Results are reported as odds ratios or hazard ratios with their 95% confidence intervals. Postoperative death was not significantly reduced by VATS in any of the analyses. Concerning postoperative complications, VATS significantly decreased the occurrence of atelectasis and pneumopathy with both analysis methods, but there were no differences in the occurrence of other postoperative complications. VATS did not provide a benefit for high-risk patients. The VATS approach decreased hospital length of stay by between 2.4 days (95% confidence interval, -3 to -1.7 days) and 4.68 days (95% confidence interval, -8.5 to 0.9 days), depending on the analysis method. Overall survival and disease-free survival were not influenced by the surgical approach. The sensitivity analysis showed potential biases. The results must be interpreted carefully because of the differences observed according to the propensity score method used. A multicenter randomized controlled trial is necessary to limit the biases.
The Effects of Variability and Risk in Selection Utility Analysis: An Empirical Comparison.
ERIC Educational Resources Information Center
Rich, Joseph R.; Boudreau, John W.
1987-01-01
Investigated utility estimate variability for the selection utility of using the Programmer Aptitude Test to select computer programmers. Comparison of Monte Carlo results to other risk assessment approaches (sensitivity analysis, break-even analysis, algebraic derivation of the distribution) suggests that distribution information provided by Monte…
Hoyt, Michael A; McCann, Connor; Savone, Mirko; Saigal, Christopher S; Stanton, Annette L
2015-12-01
Interpersonal sensitivity is characterized by the predisposition to perceive and elicit criticism, rejection, and negative social evaluation. It may be linked to poorer physical or functional health outcomes, particularly in the interpersonal context (cancer-related sexual dysfunction). This study tested the association of interpersonal sensitivity with sexual functioning following testicular cancer in young men and whether this association is moderated by coping processes. Men ages 18 to 29 (N = 171; M age = 25.2, SD = 3.32) with a history of testicular cancer were recruited via the California State Cancer Registry and completed questionnaire measures including assessments of interpersonal sensitivity, sexual functioning, and approach and avoidance coping. Regression analysis controlling for education, age, partner status, ethnic status, and time since diagnosis revealed that higher interpersonal sensitivity was significantly related to lower sexual functioning (β = -0.18, p < 0.05). Cancer-related approach-oriented coping was associated with better sexual functioning (β = 0.19, p < 0.05). No significant association was observed for avoidance coping (β = -0.08, ns). Approach-oriented coping, but not avoidance, moderated the relationship with sexual functioning (β = 0.19, p < 0.05), such that higher interpersonal sensitivity was more strongly associated with lower functioning among men with relatively low use of approach coping. Interpersonal sensitivity may be an important individual difference in vulnerability to sexual dysfunction after testicular cancer. Enhancement of coping skills may be a useful direction for intervention development for interpersonally sensitive young men with cancer.
Mukhtar, Hussnain; Lin, Yu-Pin; Shipin, Oleg V.; Petway, Joy R.
2017-01-01
This study presents an approach for obtaining realization sets of parameters for nitrogen removal in a pilot-scale waste stabilization pond (WSP) system. The proposed approach was designed for optimal parameterization, local sensitivity analysis, and global uncertainty analysis of a dynamic simulation model for the WSP by using the R software package Flexible Modeling Environment (R-FME) with the Markov chain Monte Carlo (MCMC) method. Additionally, generalized likelihood uncertainty estimation (GLUE) was integrated into the FME to evaluate the major parameters that affect the simulation outputs in the study WSP. Comprehensive modeling analysis was used to simulate and assess nine parameters and the concentrations of ON-N, NH3-N and NO3-N. Results indicate that the integrated FME-GLUE-based model, with good Nash–Sutcliffe coefficients (0.53–0.69) and correlation coefficients (0.76–0.83), successfully simulates the concentrations of ON-N, NH3-N and NO3-N. Moreover, the Arrhenius constant was the only parameter sensitive to the model performance of the ON-N and NH3-N simulations. However, the Nitrosomonas growth rate, the denitrification constant, and the maximum growth rate at 20 °C were sensitive to the ON-N and NO3-N simulations, as measured by global sensitivity analysis. PMID:28704958
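The GLUE step named above is conceptually simple: sample parameter sets from the prior, score each simulation against observations (here with the Nash-Sutcliffe efficiency), and keep the "behavioral" sets above a cutoff. The sketch below is generic; simulate and prior_sampler are user-supplied placeholders, not the R-FME interface.

    import numpy as np

    def glue(simulate, obs, prior_sampler, n=5000, cutoff=0.5, seed=0):
        """Keep parameter sets whose Nash-Sutcliffe efficiency exceeds a cutoff."""
        rng = np.random.default_rng(seed)
        kept, scores = [], []
        denom = np.sum((obs - obs.mean()) ** 2)
        for _ in range(n):
            theta = prior_sampler(rng)
            ns = 1 - np.sum((obs - simulate(theta)) ** 2) / denom
            if ns > cutoff:                    # 'behavioral' parameter set
                kept.append(theta)
                scores.append(ns)
        return np.array(kept), np.array(scores)   # scores weight the predictions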
Decision analysis in clinical cardiology: When is coronary angiography required in aortic stenosis?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Georgeson, S.; Meyer, K.B.; Pauker, S.G.
1990-03-15
Decision analysis offers a reproducible, explicit approach to complex clinical decisions. It consists of developing a model, typically a decision tree, that separates choices from chances and that specifies and assigns relative values to outcomes. Sensitivity analysis allows exploration of alternative assumptions. Cost-effectiveness analysis shows the relation between dollars spent and improved health outcomes achieved. In a tutorial format, this approach is applied to the decision whether to perform coronary angiography in a patient who requires aortic valve replacement for critical aortic stenosis.
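The tutorial's core mechanics, a tree whose chance nodes average utilities and whose decision node picks the larger expectation, fit in a few lines. All probabilities and utilities below are illustrative placeholders, not values from the article.

    # Expected utility of two strategies under hypothetical numbers.
    p_cad = 0.30                                  # chance of coexisting CAD
    u = {"angiography":    {"cad": 0.80, "no_cad": 0.92},
         "no_angiography": {"cad": 0.65, "no_cad": 0.95}}

    for strategy, util in u.items():
        eu = p_cad * util["cad"] + (1 - p_cad) * util["no_cad"]
        print(strategy, round(eu, 3))
    # Sensitivity analysis: sweep p_cad over a range and find where the
    # preferred strategy switches; that crossing is the decision threshold.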
Approaching the Limit in Atomic Spectrochemical Analysis.
ERIC Educational Resources Information Center
Hieftje, Gary M.
1982-01-01
To assess the ability of current analytical methods to approach the single-atom detection level, theoretical and experimentally determined detection levels are presented for several chemical elements. A comparison of these methods shows that the most sensitive atomic spectrochemical technique currently available is based on emission from…
NASA Astrophysics Data System (ADS)
Safeeq, M.; Grant, G. E.; Lewis, S. L.; Kramer, M. G.; Staab, B.
2014-09-01
Summer streamflows in the Pacific Northwest are largely derived from melting snow and groundwater discharge. As the climate warms, diminishing snowpack and earlier snowmelt will cause reductions in summer streamflow. Most regional-scale assessments of climate change impacts on streamflow use downscaled temperature and precipitation projections from general circulation models (GCMs) coupled with large-scale hydrologic models. Here we develop and apply an analytical hydrogeologic framework for characterizing summer streamflow sensitivity to a change in the timing and magnitude of recharge in a spatially explicit fashion. In particular, we incorporate the role of deep groundwater, which large-scale hydrologic models generally fail to capture, into streamflow sensitivity assessments. We validate our analytical streamflow sensitivities against two empirical measures of sensitivity derived using historical observations of temperature, precipitation, and streamflow from 217 watersheds. In general, empirically and analytically derived streamflow sensitivity values correspond. Although the selected watersheds cover a range of hydrologic regimes (e.g., rain-dominated, mixture of rain and snow, and snow-dominated), sensitivity validation was primarily driven by the snow-dominated watersheds, which are subjected to a wider range of change in recharge timing and magnitude as a result of increased temperature. Overall, two patterns emerge from this analysis: first, areas with high streamflow sensitivity also have higher summer streamflows as compared to low-sensitivity areas. Second, the level of sensitivity and spatial extent of highly sensitive areas diminishes over time as the summer progresses. Results of this analysis point to a robust, practical, and scalable approach that can help assess risk at the landscape scale, complement the downscaling approach, be applied to any climate scenario of interest, and provide a framework to assist land and water managers in adapting to an uncertain and potentially challenging future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Heng; Chen, Xingyuan; Ye, Ming
Sensitivity analysis is an important tool for quantifying uncertainty in the outputs of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a hierarchical sensitivity analysis method that (1) constructs an uncertainty hierarchy by analyzing the input uncertainty sources, and (2) accounts for the spatial correlation among parameters at each level of the hierarchy using geostatistical tools. The contribution of the uncertainty source at each hierarchy level is measured by sensitivity indices calculated using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally as driven by the dynamic interaction between groundwater and river water at the site. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed parameters.
Hatem, S M; Hu, L; Ragé, M; Gierasimowicz, A; Plaghki, L; Bouhassira, D; Attal, N; Iannetti, G D; Mouraux, A
2012-12-01
To assess the clinical usefulness of an automated analysis of event-related potentials (ERPs). Nociceptive laser-evoked potentials (LEPs) and non-nociceptive somatosensory electrically-evoked potentials (SEPs) were recorded in 37 patients with syringomyelia and 21 controls. LEP and SEP peak amplitudes and latencies were estimated using a single-trial automated approach based on time-frequency wavelet filtering and multiple linear regression, as well as a conventional approach based on visual inspection. The amplitudes and latencies of normal and abnormal LEP and SEP peaks were identified reliably using both approaches, with similar sensitivity and specificity. Because the automated approach provided an unbiased solution to account for average waveforms where no ERP could be identified visually, it revealed significant differences between patients and controls that were not revealed using the visual approach. The automated analysis reliably and objectively characterized LEP and SEP waveforms in patients. The automated single-trial analysis can be used to characterize normal and abnormal ERPs with a sensitivity and specificity similar to visual inspection. While this does not justify its use in a routine clinical setting, the technique could be useful to avoid observer-dependent biases in clinical research.
Sensitivity and Uncertainty Analysis of the GFR MOX Fuel Subassembly
NASA Astrophysics Data System (ADS)
Lüley, J.; Vrban, B.; Čerba, Š.; Haščík, J.; Nečas, V.; Pelloni, S.
2014-04-01
We performed sensitivity and uncertainty analysis as well as benchmark similarity assessment of the MOX fuel subassembly designed for the Gas-Cooled Fast Reactor (GFR) as a representative material of the core. Material composition was defined for each assembly ring separately allowing us to decompose the sensitivities not only for isotopes and reactions but also for spatial regions. This approach was confirmed by direct perturbation calculations for chosen materials and isotopes. Similarity assessment identified only ten partly comparable benchmark experiments that can be utilized in the field of GFR development. Based on the determined uncertainties, we also identified main contributors to the calculation bias.
Comparing methods for analysis of biomedical hyperspectral image data
NASA Astrophysics Data System (ADS)
Leavesley, Silas J.; Sweat, Brenner; Abbott, Caitlyn; Favreau, Peter F.; Annamdevula, Naga S.; Rich, Thomas C.
2017-02-01
Over the past 2 decades, hyperspectral imaging technologies have been adapted to address the need for molecule-specific identification in the biomedical imaging field. Applications have ranged from single-cell microscopy to whole-animal in vivo imaging and from basic research to clinical systems. Enabling this growth has been the availability of faster, more effective hyperspectral filtering technologies and more sensitive detectors. Hence, the potential for growth of biomedical hyperspectral imaging is high, and many hyperspectral imaging options are already commercially available. However, despite the growth in hyperspectral technologies for biomedical imaging, little work has been done to aid users of hyperspectral imaging instruments in selecting appropriate analysis algorithms. Here, we present an approach for comparing the effectiveness of spectral analysis algorithms by combining experimental image data with a theoretical "what if" scenario. This approach allows us to quantify several key outcomes that characterize a hyperspectral imaging study: linearity of sensitivity, positive detection cut-off slope, dynamic range, and false positive events. We present results of using this approach for comparing the effectiveness of several common spectral analysis algorithms for detecting weak fluorescent protein emission in the midst of strong tissue autofluorescence. Results indicate that this approach should be applicable to a very wide range of applications, allowing a quantitative assessment of the effectiveness of the combined biology, hardware, and computational analysis for detecting a specific molecular signature.
A novel bi-level meta-analysis approach: applied to biological pathway analysis.
Nguyen, Tin; Tagett, Rebecca; Donato, Michele; Mitrea, Cristina; Draghici, Sorin
2016-02-01
The accumulation of high-throughput data in public repositories creates a pressing need for integrative analysis of multiple datasets from independent experiments. However, study heterogeneity, study bias, outliers and the lack of power of available methods present a real challenge in integrating genomic data. One practical drawback of many P-value-based meta-analysis methods, including Fisher's, Stouffer's, minP and maxP, is that they are sensitive to outliers. Another drawback is that, because they perform just one statistical test for each individual experiment, they may not fully exploit the potentially large number of samples within each study. We propose a novel bi-level meta-analysis approach that employs the additive method and the Central Limit Theorem within each individual experiment and also across multiple experiments. We prove that the bi-level framework is robust against bias, less sensitive to outliers than other methods, and more sensitive to small changes in signal. For comparative analysis, we demonstrate that the intra-experiment analysis has more power than the equivalent statistical test performed on a single large experiment. For pathway analysis, we compare the proposed framework against classical meta-analysis approaches (Fisher's, Stouffer's and the additive method) as well as against a dedicated pathway meta-analysis package (MetaPath), using 1252 samples from 21 datasets related to three human diseases: acute myeloid leukemia (9 datasets), type II diabetes (5 datasets) and Alzheimer's disease (7 datasets). Our framework outperforms its competitors in correctly identifying pathways relevant to the phenotypes. The framework is sufficiently general to be applied to any type of statistical meta-analysis. The R scripts are available on demand from the authors (sorin@wayne.edu). Supplementary data are available at Bioinformatics online.
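The additive method plus Central Limit Theorem combination the authors build on can be sketched directly: under the null, independent P-values are uniform on (0,1), so their sum is approximately normal with mean n/2 and variance n/12. A single extreme P-value moves the sum far less than it moves Fisher's product, which is the robustness the abstract refers to.

    import numpy as np
    from scipy import stats

    def additive_combine(pvals):
        """Combine independent P-values via their sum (CLT approximation)."""
        p = np.asarray(pvals, dtype=float)
        n = p.size
        z = (p.sum() - n / 2) / np.sqrt(n / 12)   # standardized sum
        return stats.norm.cdf(z)                  # small sums give small combined P

    print(additive_combine([0.01, 0.04, 0.03, 0.20]))  # consistent weak signal
    print(additive_combine([1e-12, 0.5, 0.6, 0.55]))   # lone outlier is damped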
Probabilistic methods for sensitivity analysis and calibration in the NASA challenge problem
Safta, Cosmin; Sargsyan, Khachik; Najm, Habib N.; ...
2015-01-01
In this study, a series of algorithms are proposed to address the problems in the NASA Langley Research Center Multidisciplinary Uncertainty Quantification Challenge. A Bayesian approach is employed to characterize and calibrate the epistemic parameters based on the available data, whereas a variance-based global sensitivity analysis is used to rank the epistemic and aleatory model parameters. A nested sampling of the aleatory–epistemic space is proposed to propagate uncertainties from model parameters to output quantities of interest.
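The nested aleatory-epistemic propagation can be sketched as a double loop: each outer (epistemic) draw fixes the poorly known parameters, and the inner (aleatory) loop produces one output distribution for that draw. The model and both distributions below are placeholders, not the challenge-problem specification.

    import numpy as np

    def nested_propagation(model, n_epi=200, n_alea=1000, seed=0):
        """One output CDF per epistemic draw; the family shows both uncertainties."""
        rng = np.random.default_rng(seed)
        curves = []
        for _ in range(n_epi):
            e = rng.uniform(0.5, 1.5)             # epistemic: interval-valued
            a = rng.normal(0.0, 1.0, n_alea)      # aleatory: inherent randomness
            curves.append(np.sort(model(e, a)))   # empirical CDF for this draw
        return np.array(curves)

    out = nested_propagation(lambda e, a: e * a + e ** 2)
    print(out.shape)   # (200, 1000): 200 candidate output distributions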
NASA Astrophysics Data System (ADS)
Yu, Maolin; Du, R.
2005-08-01
Sheet metal stamping is one of the most commonly used manufacturing processes, and hence much research has been carried out on it for economic gain. A search of the literature shows, however, that many problems remain unsolved. For example, it is well known that for the same press, the same workpiece material, and the same set of dies, product quality may vary owing to a number of factors, such as inhomogeneity of the workpiece material, loading error, and lubrication. At present, few methods can predict this quality variation, let alone identify what contributes to it. As a result, trial-and-error is still needed on the shop floor, causing additional cost and time delay. This paper introduces a new approach to predict product quality variation and identify the sensitive design/process parameters. The new approach is based on a combination of inverse Finite Element Modeling (FEM) and Monte Carlo simulation (more specifically, the Latin Hypercube Sampling (LHS) approach). With acceptable accuracy, the inverse FEM (also called one-step FEM) requires a much smaller computational load than the usual incremental FEM and hence can be used to predict the quality variations under various conditions. LHS is a statistical method through which the sensitivity analysis can be carried out. The result of the sensitivity analysis has a clear physical meaning and can be used to optimize the die design and/or the process design. Two simulation examples are presented, including drawing a rectangular box and drawing a two-step rectangular box.
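Latin Hypercube Sampling, the statistical engine referred to above, stratifies each input into n equal-probability bins, draws once per bin, and shuffles the columns so the inputs are paired at random. A minimal generic sketch:

    import numpy as np

    def latin_hypercube(n, d, seed=0):
        """n samples in [0,1]^d, exactly one per stratum in every dimension."""
        rng = np.random.default_rng(seed)
        u = (np.arange(n)[:, None] + rng.random((n, d))) / n  # jitter in strata
        for j in range(d):
            u[:, j] = u[rng.permutation(n), j]   # decouple the columns
        return u

    X = latin_hypercube(100, 3)   # e.g., one inverse-FEM run per row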
Xu, Chonggang; Gertner, George
2013-01-01
Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, FAST analysis has mainly been confined to the estimation of partial variances contributed by the main effects of model parameters, and does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037
Xu, Chonggang; Gertner, George
2011-01-01
Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, FAST analysis has mainly been confined to the estimation of partial variances contributed by the main effects of model parameters, and does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements.
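The FAST machinery described in these two abstracts, a periodic search curve plus Fourier decomposition, can be condensed into a short generic sketch. The driver frequencies must be chosen interference-free up to the retained harmonics; the toy additive model at the end has variance shares 1/17 and 16/17.

    import numpy as np

    def fast_first_order(f, freqs, n=10001, harmonics=4):
        """Classic FAST: partial variance of input i read at w_i and harmonics."""
        s = np.linspace(-np.pi, np.pi, n, endpoint=False)
        X = 0.5 + np.arcsin(np.sin(np.outer(s, freqs))) / np.pi  # search curve
        y = f(X)
        V, S = np.var(y), []
        for w in freqs:
            Vi = 0.0
            for h in range(1, harmonics + 1):      # spectrum at the harmonics
                A = 2 * np.mean(y * np.cos(h * w * s))
                B = 2 * np.mean(y * np.sin(h * w * s))
                Vi += (A ** 2 + B ** 2) / 2
            S.append(Vi / V)
        return np.array(S)

    f = lambda X: X[:, 0] + 4 * X[:, 1]           # toy additive model
    print(fast_first_order(f, freqs=[11, 35]))    # roughly 0.059 and 0.941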
Efficient sensitivity analysis and optimization of a helicopter rotor
NASA Technical Reports Server (NTRS)
Lim, Joon W.; Chopra, Inderjit
1989-01-01
Aeroelastic optimization of a system essentially consists of the determination of the optimum values of design variables that minimize the objective function and satisfy certain aeroelastic and geometric constraints. The process of aeroelastic optimization analysis is illustrated. To carry out aeroelastic optimization effectively, one needs a reliable analysis procedure to determine steady response and stability of a rotor system in forward flight. The rotor dynamic analysis used in the present study, developed in-house at the University of Maryland, is based on finite elements in space and time. The analysis consists of two major phases: vehicle trim and rotor steady response (coupled trim analysis), and aeroelastic stability of the blade. For a reduction of helicopter vibration, the optimization process requires the sensitivity derivatives of the objective function and aeroelastic stability constraints. For this, the derivatives of steady response, hub loads and blade stability roots are calculated using a direct analytical approach. An automated optimization procedure is developed by coupling the rotor dynamic analysis, design sensitivity analysis and the constrained optimization code CONMIN.
NASA Astrophysics Data System (ADS)
Jatnieks, Janis; De Lucia, Marco; Sips, Mike; Dransch, Doris
2015-04-01
Many geoscience applications can benefit from testing many combinations of input parameters for geochemical simulation models. It is, however, a challenge to screen the input and output data from the model to identify the significant relationships between input parameters and output variables. To address this problem, we propose a Visual Analytics approach that has been developed in an ongoing collaboration between computer science and geoscience researchers. Our Visual Analytics approach uses visualization methods of hierarchical horizontal axes, multi-factor stacked bar charts and interactive semi-automated filtering for input and output data, together with automatic sensitivity analysis. This guides the users towards significant relationships. We implement our approach as an interactive data exploration tool. It is designed with flexibility in mind, so that a diverse set of tasks such as inverse modeling, sensitivity analysis and model parameter refinement can be supported. Here we demonstrate the capabilities of our approach with two examples from gas storage applications. In the first example, our Visual Analytics approach enabled the analyst to observe how the element concentrations change around previously established baselines in response to thousands of different combinations of mineral phases. This supported combinatorial inverse modeling for interpreting observations about the chemical composition of the formation fluids at the Ketzin pilot site for CO2 storage. The results indicate that, within the experimental error range, the formation fluid cannot be considered in local thermodynamic equilibrium with the mineral assemblage of the reservoir rock. This is a valuable insight from the predictive geochemical modeling for the Ketzin site. In the second example, our approach supports sensitivity analysis for a reaction involving the reductive dissolution of pyrite with formation of pyrrhotite in the presence of gaseous hydrogen. We determine that this reaction is thermodynamically favorable under a broad range of conditions, including low temperatures and the absence of microbial catalysts. Our approach has potential for use in other applications that involve exploration of relationships in geochemical simulation model data.
Edwards, D. L.; Saleh, A. A.; Greenspan, S. L.
2015-01-01
Summary: We performed a systematic review and meta-analysis of the performance of clinical risk assessment instruments for screening for DXA-determined osteoporosis or low bone density. Commonly evaluated risk instruments showed high sensitivity approaching or exceeding 90% at particular thresholds within various populations but low specificity at thresholds required for high sensitivity. Simpler instruments, such as OST, generally performed as well as or better than more complex instruments. Introduction: The purpose of the study is to systematically review the performance of clinical risk assessment instruments for screening for dual-energy X-ray absorptiometry (DXA)-determined osteoporosis or low bone density. Methods: Systematic review and meta-analysis were performed. Multiple literature sources were searched, and data extracted and analyzed from included references. Results: One hundred eight references met inclusion criteria. Studies assessed many instruments in 34 countries, most commonly the Osteoporosis Self-Assessment Tool (OST), the Simple Calculated Osteoporosis Risk Estimation (SCORE) instrument, the Osteoporosis Self-Assessment Tool for Asians (OSTA), the Osteoporosis Risk Assessment Instrument (ORAI), and body weight criteria. Meta-analyses of studies evaluating OST using a cutoff threshold of <1 to identify US postmenopausal women with osteoporosis at the femoral neck provided summary sensitivity and specificity estimates of 89% (95% CI 82–96%) and 41% (95% CI 23–59%), respectively. Meta-analyses of studies evaluating OST using a cutoff threshold of 3 to identify US men with osteoporosis at the femoral neck, total hip, or lumbar spine provided summary sensitivity and specificity estimates of 88% (95% CI 79–97%) and 55% (95% CI 42–68%), respectively. Frequently evaluated instruments each had thresholds and populations for which sensitivity for osteoporosis or low bone mass detection approached or exceeded 90% but always with a trade-off of relatively low specificity. Conclusions: Commonly evaluated clinical risk assessment instruments each showed high sensitivity approaching or exceeding 90% for identifying individuals with DXA-determined osteoporosis or low BMD at certain thresholds in different populations but low specificity at thresholds required for high sensitivity. Simpler instruments, such as OST, generally performed as well as or better than more complex instruments. PMID:25644147
Time to angiographic reperfusion in acute ischemic stroke: decision analysis.
Vagal, Achala S; Khatri, Pooja; Broderick, Joseph P; Tomsick, Thomas A; Yeatts, Sharon D; Eckman, Mark H
2014-12-01
Our objective was to use decision analytic modeling to compare 2 treatment strategies, intravenous recombinant tissue-type plasminogen activator (r-tPA) alone versus combined intravenous r-tPA/endovascular therapy, in a subgroup of patients with large vessel (internal carotid artery terminus, M1, and M2) occlusion based on varying times to angiographic reperfusion and varying rates of reperfusion. We developed a decision model using Interventional Management of Stroke (IMS) III trial data and a comprehensive literature review. We performed 1-way sensitivity analyses for time to reperfusion and 2-way sensitivity analyses for time to reperfusion and rate of reperfusion success. We also performed probabilistic sensitivity analyses to address uncertainty in total time to reperfusion for the endovascular approach. In the base case, the endovascular approach yielded a higher expected utility (6.38 quality-adjusted life years) than the intravenous-only arm (5.42 quality-adjusted life years). One-way sensitivity analyses demonstrated superiority of endovascular treatment over the intravenous-only arm unless time to reperfusion exceeded 347 minutes. Two-way sensitivity analysis demonstrated that endovascular treatment was preferred when the probability of reperfusion is high and the time to reperfusion is small. Probabilistic sensitivity results demonstrated an average gain for endovascular therapy of 0.76 quality-adjusted life years (SD 0.82) compared with the intravenous-only approach. In our post hoc model with its underlying limitations, endovascular therapy after intravenous r-tPA is the preferred treatment as compared with intravenous r-tPA alone. However, if time to reperfusion exceeds 347 minutes, intravenous r-tPA alone is the recommended strategy. This warrants validation in a randomized, prospective trial among patients with large vessel occlusions. © 2014 American Heart Association, Inc.
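To make the mechanics of such a one-way sensitivity analysis concrete, the sketch below sweeps time to reperfusion in a two-strategy expected-utility model and finds the crossover threshold. All probabilities, utilities, and the decay function are hypothetical placeholders for illustration, not IMS III estimates or the authors' model.

```python
# Minimal sketch of a one-way sensitivity analysis for a two-strategy
# decision model; all numbers are illustrative assumptions.
import numpy as np

def expected_qaly(p_reperfusion, time_to_reperfusion_min,
                  qaly_good=10.0, qaly_poor=2.0, decay_per_min=0.002):
    """Expected QALYs: reperfusion benefit decays with time to reperfusion."""
    p_good = p_reperfusion * max(0.0, 1.0 - decay_per_min * time_to_reperfusion_min)
    return p_good * qaly_good + (1.0 - p_good) * qaly_poor

iv_only = expected_qaly(p_reperfusion=0.35, time_to_reperfusion_min=120)

# One-way sensitivity: sweep endovascular time to reperfusion and report
# the threshold beyond which the IV-only strategy becomes preferred.
for t in np.arange(60, 480, 5):
    endo = expected_qaly(p_reperfusion=0.65, time_to_reperfusion_min=t)
    if endo < iv_only:
        print(f"IV-only preferred once time to reperfusion exceeds ~{t} min")
        break
```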
Hutsell, Blake A; Negus, S Stevens; Banks, Matthew L
2015-01-01
We have previously demonstrated reductions in cocaine choice produced by either continuous 14-day phendimetrazine and d-amphetamine treatment or removing cocaine availability under a cocaine vs. food choice procedure in rhesus monkeys. The aim of the present investigation was to apply the concatenated generalized matching law (GML) to cocaine vs. food choice dose-effect functions incorporating sensitivity to both the relative magnitude and price of each reinforcer. Our goal was to determine potential behavioral mechanisms underlying pharmacological treatment efficacy to decrease cocaine choice. A multi-model comparison approach was used to characterize dose- and time-course effects of both pharmacological and environmental manipulations on sensitivity to reinforcement. GML models provided an excellent fit of the cocaine choice dose-effect functions in individual monkeys. Reductions in cocaine choice by both pharmacological and environmental manipulations were principally produced by systematic decreases in sensitivity to reinforcer price and non-systematic changes in sensitivity to reinforcer magnitude. The modeling approach used provides a theoretical link between the experimental analysis of choice and pharmacological treatments being evaluated as candidate 'agonist-based' medications for cocaine addiction. The analysis suggests that monoamine releaser treatment efficacy to decrease cocaine choice was mediated by selectively increasing the relative price of cocaine. Overall, the net behavioral effect of these pharmacological treatments was to increase substitutability of food pellets, a nondrug reinforcer, for cocaine. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Fu, Rongxin; Li, Qi; Zhang, Junqi; Wang, Ruliang; Lin, Xue; Xue, Ning; Su, Ya; Jiang, Kai; Huang, Guoliang
2016-10-01
Label-free point mutation detection is particularly important in biomedical research and clinical diagnosis, since naturally occurring gene mutations can cause highly fatal diseases. In this paper, a label-free and highly sensitive approach is proposed for point mutation detection based on hyperspectral interferometry. A hybridization strategy is designed to discriminate a single-base substitution with sequence-specific DNA ligase. Double-strand structures form only if the added oligonucleotides are perfectly paired to the probe sequence. The proposed approach makes full use of the inherent conformation of double-strand DNA molecules on the substrate, and a spectrum analysis method is established to resolve the sub-nanoscale thickness variation, which enables highly sensitive mutation detection. The limit of detection reaches 4 pg/mm2 according to the experimental results. Detection of a lung cancer gene point mutation was demonstrated, proving the high selectivity and multiplex analysis capability of the proposed biosensor.
Digital imaging biomarkers feed machine learning for melanoma screening.
Gareau, Daniel S; Correa da Rosa, Joel; Yagerman, Sarah; Carucci, John A; Gulati, Nicholas; Hueto, Ferran; DeFazio, Jennifer L; Suárez-Fariñas, Mayte; Marghoob, Ashfaq; Krueger, James G
2017-07-01
We developed an automated approach for generating quantitative image analysis metrics (imaging biomarkers) that are then analysed with a set of 13 machine learning algorithms to generate an overall risk score that is called a Q-score. These methods were applied to a set of 120 "difficult" dermoscopy images of dysplastic nevi and melanomas that were subsequently excised/classified. This approach yielded 98% sensitivity and 36% specificity for melanoma detection, approaching sensitivity/specificity of expert lesion evaluation. Importantly, we found strong spectral dependence of many imaging biomarkers in blue or red colour channels, suggesting the need to optimize spectral evaluation of pigmented lesions. © 2016 The Authors. Experimental Dermatology Published by John Wiley & Sons Ltd.
On Distributed PV Hosting Capacity Estimation, Sensitivity Study, and Improvement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Fei; Mather, Barry
This paper first studies the estimated distributed PV hosting capacities of seventeen utility distribution feeders using Monte Carlo simulation-based stochastic analysis, and then analyzes the sensitivity of PV hosting capacity to both feeder and photovoltaic system characteristics. Furthermore, an active distribution network management approach is proposed to maximize PV hosting capacity by optimally switching capacitors, adjusting voltage regulator taps, managing controllable branch switches and controlling smart PV inverters. The approach is formulated as a mixed-integer nonlinear optimization problem and a genetic algorithm is developed to obtain the solution. Multiple simulation cases are studied and the effectiveness of the proposed approach on increasing PV hosting capacity is demonstrated.
Giustacchini, Alice; Thongjuea, Supat; Barkas, Nikolaos; Woll, Petter S; Povinelli, Benjamin J; Booth, Christopher A G; Sopp, Paul; Norfo, Ruggiero; Rodriguez-Meira, Alba; Ashley, Neil; Jamieson, Lauren; Vyas, Paresh; Anderson, Kristina; Segerstolpe, Åsa; Qian, Hong; Olsson-Strömberg, Ulla; Mustjoki, Satu; Sandberg, Rickard; Jacobsen, Sten Eirik W; Mead, Adam J
2017-06-01
Recent advances in single-cell transcriptomics are ideally placed to unravel intratumoral heterogeneity and selective resistance of cancer stem cell (SC) subpopulations to molecularly targeted cancer therapies. However, current single-cell RNA-sequencing approaches lack the sensitivity required to reliably detect somatic mutations. We developed a method that combines high-sensitivity mutation detection with whole-transcriptome analysis of the same single cell. We applied this technique to analyze more than 2,000 SCs from patients with chronic myeloid leukemia (CML) throughout the disease course, revealing heterogeneity of CML-SCs, including the identification of a subgroup of CML-SCs with a distinct molecular signature that selectively persisted during prolonged therapy. Analysis of nonleukemic SCs from patients with CML also provided new insights into cell-extrinsic disruption of hematopoiesis in CML associated with clinical outcome. Furthermore, we used this single-cell approach to identify a blast-crisis-specific SC population, which was also present in a subclone of CML-SCs during the chronic phase in a patient who subsequently developed blast crisis. This approach, which might be broadly applied to any malignancy, illustrates how single-cell analysis can identify subpopulations of therapy-resistant SCs that are not apparent through cell-population analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, C. S.; Zhang, Hongbin
Uncertainty quantification and sensitivity analysis are important for nuclear reactor safety design and analysis. A 2x2 fuel assembly core design was developed and simulated by the Virtual Environment for Reactor Applications, Core Simulator (VERA-CS) coupled neutronics and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed, and a new toolkit was created to perform uncertainty quantification and sensitivity analysis with fourteen uncertain input parameters. Furthermore, the minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in the sensitivity analysis, and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.
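The correlation-based sensitivity measures named above (Pearson, Spearman, and partial correlation) are straightforward to compute from sampled input/output pairs. A minimal sketch on synthetic data follows; the input names are illustrative stand-ins, not the actual VERA-CS parameters.

```python
# Sketch: Pearson, Spearman, and partial correlation coefficients as
# sampling-based sensitivity measures, on a synthetic input/output set.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500
X = rng.uniform(size=(n, 3))                  # e.g. inlet temp, power, flow
y = 3.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=n)

for j, name in enumerate(["inlet_temp", "power", "flow"]):
    pearson = stats.pearsonr(X[:, j], y)[0]
    spearman = stats.spearmanr(X[:, j], y)[0]
    # Partial correlation: correlate the residuals after regressing the
    # remaining inputs out of both X_j and y.
    others = np.delete(X, j, axis=1)
    A = np.column_stack([others, np.ones(n)])
    rx = X[:, j] - A @ np.linalg.lstsq(A, X[:, j], rcond=None)[0]
    ry = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    partial = stats.pearsonr(rx, ry)[0]
    print(f"{name}: pearson={pearson:+.2f} spearman={spearman:+.2f} "
          f"partial={partial:+.2f}")
```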
Probabilistic Aeroelastic Analysis of Turbomachinery Components
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Mital, S. K.; Stefko, G. L.
2004-01-01
A probabilistic approach is described for aeroelastic analysis of turbomachinery blade rows. Blade rows with subsonic flow and blade rows with supersonic flow with a subsonic leading edge are considered. To demonstrate the probabilistic approach, the flutter frequency, damping, and forced response of a blade row representing a compressor geometry are considered. The analysis accounts for uncertainties in structural and aerodynamic design variables. The results are presented in the form of probability density functions (PDFs) and sensitivity factors. For the subsonic flow cascade, comparisons are also made with different probabilistic distributions, probabilistic methods, and Monte Carlo simulation. The results show that the probabilistic approach provides a more realistic and systematic way to assess the effect of uncertainties in design variables on aeroelastic instabilities and response.
Bartsch, Sarah M; Umscheid, Craig A; Nachamkin, Irving; Hamilton, Keith; Lee, Bruce Y
2015-01-01
Accurate diagnosis of Clostridium difficile infection (CDI) is essential to effectively managing patients and preventing transmission. Despite the availability of several diagnostic tests, the optimal strategy is debatable and their economic values are unknown. We modified our previously existing C. difficile simulation model to determine the economic value of different CDI diagnostic approaches from the hospital perspective. We evaluated four diagnostic methods for a patient suspected of having CDI: 1) toxin A/B enzyme immunoassay, 2) glutamate dehydrogenase (GDH) antigen/toxin AB combined in one test, 3) nucleic acid amplification test (NAAT), and 4) GDH antigen/toxin AB combination test with NAAT confirmation of indeterminate results. Sensitivity analysis varied the proportion of those tested with clinically significant diarrhoea, the probability of CDI, NAAT cost, the CDI treatment delay resulting from a false-negative test, length of stay, and diagnostic sensitivity and specificity. The GDH/toxin AB plus NAAT approach led to the timeliest treatment with the fewest unnecessary treatments given, resulted in the best bed management, and generated the lowest cost. The NAAT-alone approach also led to timely treatment. The GDH/toxin AB diagnostic approach (without NAAT confirmation) resulted in a large number of delayed treatments but the fewest secondary colonisations. Results were robust to the sensitivity analysis. Choosing the right diagnostic approach is a matter of cost and test accuracy. GDH/toxin AB plus NAAT diagnosis led to the timeliest treatment and was the least costly. Copyright © 2014 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
2018-01-01
Effect-directed analysis (EDA) is a commonly used approach for effect-based identification of endocrine disruptive chemicals in complex (environmental) mixtures. However, for routine toxicity assessment of, for example, water samples, current EDA approaches are considered time-consuming and laborious. We achieved faster EDA and identification by downscaling of sensitive cell-based hormone reporter gene assays and increasing fractionation resolution to allow testing of smaller fractions with reduced complexity. The high-resolution EDA approach is demonstrated by analysis of four environmental passive sampler extracts. Downscaling of the assays to a 384-well format allowed analysis of 64 fractions in triplicate (or 192 fractions without technical replicates) without affecting sensitivity compared to the standard 96-well format. Through a parallel exposure method, agonistic and antagonistic androgen and estrogen receptor activity could be measured in a single experiment following a single fractionation. From 16 selected candidate compounds, identified through nontargeted analysis, 13 could be confirmed chemically and 10 were found to be biologically active, of which the most potent nonsteroidal estrogens were identified as oxybenzone and piperine. The increased fractionation resolution and the higher throughput that downscaling provides allow for future application in routine high-resolution screening of large numbers of samples in order to accelerate identification of (emerging) endocrine disruptors. PMID:29547277
Eigenvalue sensitivity analysis of planar frames with variable joint and support locations
NASA Technical Reports Server (NTRS)
Chuang, Ching H.; Hou, Gene J. W.
1991-01-01
Two sensitivity equations are derived in this study based upon the continuum approach for eigenvalue sensitivity analysis of planar frame structures with variable joint and support locations. A variational form of an eigenvalue equation is first derived in which all of the quantities are expressed in the local coordinate system attached to each member. The material derivative of this variational equation is then sought to account for changes in member length and orientation resulting from the perturbation of joint and support locations. Finally, eigenvalue sensitivity equations are formulated in either domain quantities (by the domain method) or boundary quantities (by the boundary method). It is concluded that the sensitivity equation derived by the boundary method is more efficient in computation but less accurate than that of the domain method. Nevertheless, both of them are superior in computational efficiency to the conventional direct differentiation method and the finite difference method.
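For orientation, the standard analytic eigenvalue sensitivity that methods like these compete with follows from the generalized eigenproblem K(p)φ = λM(p)φ: for an M-normalized mode, dλ/dp = φᵀ(dK/dp − λ dM/dp)φ. The sketch below checks this against a finite difference on a 2-DOF spring-mass system; the system and the parameter dependence are illustrative assumptions.

```python
# Sketch: analytic eigenvalue sensitivity vs. finite difference for a
# generalized eigenproblem K(p) phi = lam M phi (illustrative 2-DOF system).
import numpy as np
from scipy.linalg import eigh

def assemble(p):
    K = np.array([[2.0 * p, -p], [-p, p]])      # stiffness depends on p
    M = np.diag([1.0, 2.0])                     # mass, independent of p
    return K, M

p0 = 10.0
K, M = assemble(p0)
lam, phi = eigh(K, M)                           # generalized eigenproblem
v = phi[:, 0]                                   # first mode, M-normalized by eigh

dK = np.array([[2.0, -1.0], [-1.0, 1.0]])       # dK/dp (exact for this K)
dM = np.zeros((2, 2))                           # mass does not vary with p
analytic = v @ (dK - lam[0] * dM) @ v

h = 1e-6                                        # finite-difference check
lam_p, _ = eigh(*assemble(p0 + h))
fd = (lam_p[0] - lam[0]) / h
print(analytic, fd)                             # should agree closely
```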
2014-01-01
Background Recent innovations in sequencing technologies have provided researchers with the ability to rapidly characterize the microbial content of an environmental or clinical sample with unprecedented resolution. These approaches are producing a wealth of information that is providing novel insights into the microbial ecology of the environment and human health. However, these sequencing-based approaches produce large and complex datasets that require efficient and sensitive computational analysis workflows. Many recent tools for analyzing metagenomic sequencing data have emerged; however, these approaches often suffer from issues of specificity and efficiency, and typically do not include a complete metagenomic analysis framework. Results We present PathoScope 2.0, a complete bioinformatics framework for rapidly and accurately quantifying the proportions of reads from individual microbial strains present in metagenomic sequencing data from environmental or clinical samples. The pipeline performs all necessary computational analysis steps, including reference genome library extraction and indexing, read quality control and alignment, strain identification, and summarization and annotation of results. We rigorously evaluated PathoScope 2.0 using simulated data and data from the 2011 outbreak of Shiga-toxigenic Escherichia coli O104:H4. Conclusions The results show that PathoScope 2.0 is a complete, highly sensitive, and efficient approach for metagenomic analysis that outperforms alternative approaches in scope, speed, and accuracy. The PathoScope 2.0 pipeline software is freely available for download at: http://sourceforge.net/projects/pathoscope/. PMID:25225611
Leavesley, Silas J; Sweat, Brenner; Abbott, Caitlyn; Favreau, Peter; Rich, Thomas C
2018-01-01
Spectral imaging technologies have been used for many years by the remote sensing community. More recently, these approaches have been applied to biomedical problems, where they have shown great promise. However, biomedical spectral imaging has been complicated by the high variance of biological data and the reduced ability to construct test scenarios with fixed ground truths. Hence, it has been difficult to objectively assess and compare biomedical spectral imaging assays and technologies. Here, we present a standardized methodology that allows assessment of the performance of biomedical spectral imaging equipment, assays, and analysis algorithms. This methodology incorporates real experimental data and a theoretical sensitivity analysis, preserving the variability present in biomedical image data. We demonstrate that this approach can be applied in several ways: to compare the effectiveness of spectral analysis algorithms, to compare the response of different imaging platforms, and to assess the level of target signature required to achieve a desired performance. Results indicate that it is possible to compare even very different hardware platforms using this methodology. Future applications could include a range of optimization tasks, such as maximizing detection sensitivity or acquisition speed, providing high utility for investigators ranging from design engineers to biomedical scientists. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Ladstätter, Felix; Garrosa, Eva; Moreno-Jiménez, Bernardo; Ponsoda, Vicente; Reales Aviles, José Manuel; Dai, Junming
2016-01-01
Artificial neural networks are sophisticated modelling and prediction tools capable of extracting complex, non-linear relationships between predictor (input) and predicted (output) variables. This study explores this capacity by modelling non-linearities in the hardiness-modulated burnout process with a neural network. Specifically, two multi-layer feed-forward artificial neural networks are concatenated in an attempt to model the composite non-linear burnout process. Sensitivity analysis, a Monte Carlo-based global simulation technique, is then utilised to examine the first-order effects of the predictor variables on the burnout sub-dimensions and consequences. Results show that (1) this concatenated artificial neural network approach is feasible to model the burnout process, (2) sensitivity analysis is a prolific method to study the relative importance of predictor variables and (3) the relationships among variables involved in the development of burnout and its consequences are to different degrees non-linear. Many relationships among variables (e.g., stressors and strains) are not linear, yet researchers use linear methods such as Pearson correlation or linear regression to analyse these relationships. Artificial neural network analysis is an innovative method to analyse non-linear relationships and in combination with sensitivity analysis superior to linear methods.
Silvano, Amy; Guyer, Craig; Steury, Todd; Grand, James B.
2017-01-01
Most imperiled species are rare or elusive and difficult to detect, which makes gathering data to estimate their response to habitat restoration a challenge. We used a repeatable, systematic method for selecting focal species using relative sensitivities derived from occupancy analysis. Our objective was to select suites of focal species that would be useful as surrogates when predicting effects of restoration of habitat characteristics preferred by imperiled species. We developed 27 habitat profiles that represent general habitat relationships for 118 imperiled species. We identified 23 regularly encountered species that were sensitive to important aspects of those profiles. We validated our approach by examining the correlation between estimated probabilities of occupancy for species of concern and focal species selected using our method. Occupancy rates of focal species were more related to occupancy rates of imperiled species when they were sensitive to more of the parameters appearing in profiles of imperiled species. We suggest that this approach can be an effective means of predicting responses by imperiled species to proposed management actions. However, adequate monitoring will be required to determine the effectiveness of using focal species to guide management actions.
NASA Astrophysics Data System (ADS)
Kalra, Tarandeep S.; Aretxabaleta, Alfredo; Seshadri, Pranay; Ganju, Neil K.; Beudin, Alexis
2017-12-01
Coastal hydrodynamics can be greatly affected by the presence of submerged aquatic vegetation. The effect of vegetation has been incorporated into the Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) modeling system. The vegetation implementation includes the plant-induced three-dimensional drag, in-canopy wave-induced streaming, and the production of turbulent kinetic energy by the presence of vegetation. In this study, we evaluate the sensitivity of the flow and wave dynamics to vegetation parameters using Sobol' indices and a least squares polynomial approach referred to as the Effective Quadratures method. This method reduces the number of simulations needed for evaluating Sobol' indices and provides a robust, practical, and efficient approach for the parameter sensitivity analysis. The evaluation of Sobol' indices shows that kinetic energy, turbulent kinetic energy, and water level changes are affected by plant stem density, height, and, to a lesser degree, diameter. Wave dissipation is mostly dependent on the variation in plant stem density. Performing sensitivity analyses for the vegetation module in COAWST provides guidance to optimize efforts and reduce exploration of parameter space for future observational and modeling work.
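Sobol' indices of the kind evaluated above can also be estimated with a plain sampling scheme. The sketch below uses SALib's Saltelli sampler rather than the Effective Quadratures method named in the abstract; the vegetation parameter names, bounds, and toy response are illustrative assumptions, not the COAWST model.

```python
# Sketch: first-order and total Sobol' indices via Saltelli sampling
# (SALib), on an illustrative vegetation-style toy response.
from SALib.sample import saltelli
from SALib.analyze import sobol
import numpy as np

problem = {
    "num_vars": 3,
    "names": ["stem_density", "plant_height", "stem_diameter"],
    "bounds": [[100, 1000], [0.1, 1.0], [0.001, 0.01]],
}

X = saltelli.sample(problem, 1024)
# Toy stand-in for wave dissipation: proportional to density*height*diameter.
Y = X[:, 0] * X[:, 1] * X[:, 2] + 0.01 * np.random.default_rng(1).normal(size=len(X))

Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: first-order={s1:.2f} total={st:.2f}")
```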
Sensitivity Equation Derivation for Transient Heat Transfer Problems
NASA Technical Reports Server (NTRS)
Hou, Gene; Chien, Ta-Cheng; Sheen, Jeenson
2004-01-01
The focus of the paper is on the derivation of sensitivity equations for transient heat transfer problems modeled by different discretization processes. Two examples will be used in this study to facilitate the discussion. The first example is a coupled, transient heat transfer problem that simulates the press molding process in fabrication of composite laminates. These state equations are discretized into standard h-version finite elements and solved by a multiple step, predictor-corrector scheme. The sensitivity analysis results based upon the direct and adjoint variable approaches will be presented. The second example is a nonlinear transient heat transfer problem solved by a p-version time-discontinuous Galerkin's Method. The resulting matrix equation of the state equation is simply in the form of Ax = b, representing a single step, time marching scheme. A direct differentiation approach will be used to compute the thermal sensitivities of a sample 2D problem.
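The direct differentiation idea for the single-step scheme Ax = b is compact enough to show in full: differentiating gives A(dx/dp) = db/dp − (dA/dp)x, so each parameter sensitivity costs one extra solve with the already-factored A. The small system below is an illustrative stand-in, not the paper's heat transfer model.

```python
# Sketch of direct differentiation for a single-step scheme A x = b:
# A (dx/dp) = db/dp - (dA/dp) x. Matrices are illustrative.
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.linalg.solve(A, b)              # state solve

dA_dp = np.array([[1.0, 0.0], [0.0, 0.0]])   # assumed dependence on p
db_dp = np.array([0.5, 0.0])

dx_dp = np.linalg.solve(A, db_dp - dA_dp @ x)  # one extra solve per parameter
print(dx_dp)
```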
Sensitivity analyses for sparse-data problems using weakly informative Bayesian priors.
Hamra, Ghassan B; MacLehose, Richard F; Cole, Stephen R
2013-03-01
Sparse-data problems are common, and approaches are needed to evaluate the sensitivity of parameter estimates based on sparse data. We propose a Bayesian approach that uses weakly informative priors to quantify sensitivity of parameters to sparse data. The weakly informative prior is based on accumulated evidence regarding the expected magnitude of relationships using relative measures of disease association. We illustrate the use of weakly informative priors with an example of the association of lifetime alcohol consumption and head and neck cancer. When data are sparse and the observed information is weak, a weakly informative prior will shrink parameter estimates toward the prior mean. Additionally, the example shows that when data are not sparse and the observed information is not weak, a weakly informative prior is not influential. Advancements in implementation of Markov Chain Monte Carlo simulation make this sensitivity analysis easily accessible to the practicing epidemiologist.
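The shrinkage behavior described here can be illustrated without full MCMC by a conjugate-normal approximation on the log odds-ratio scale: the posterior mean is a precision-weighted average of the data estimate and the prior mean. The numbers below are illustrative, not the alcohol/head-and-neck estimates.

```python
# Sketch: weakly informative prior shrinkage under a normal approximation
# on the log-OR scale (illustrative values, not the paper's data).
import numpy as np

def posterior_log_or(est, se, prior_mean=0.0, prior_sd=1.5):
    """Conjugate-normal posterior for a log-OR under a weakly informative prior."""
    w_data, w_prior = 1.0 / se**2, 1.0 / prior_sd**2
    mean = (w_data * est + w_prior * prior_mean) / (w_data + w_prior)
    sd = (w_data + w_prior) ** -0.5
    return mean, sd

# Sparse data (large SE): the estimate is pulled strongly toward the prior mean.
print(posterior_log_or(est=2.0, se=1.0))
# Rich data (small SE): the prior is barely influential.
print(posterior_log_or(est=2.0, se=0.1))
```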
Selection of optimal sensors for predicting performance of polymer electrolyte membrane fuel cell
NASA Astrophysics Data System (ADS)
Mao, Lei; Jackson, Lisa
2016-10-01
In this paper, sensor selection algorithms are investigated based on a sensitivity analysis, and the capability of the optimal sensors to predict PEM fuel cell performance is also studied using test data. A fuel cell model is developed for generating the sensitivity matrix relating sensor measurements and fuel cell health parameters. From the sensitivity matrix, two sensor selection approaches, the largest-gap method and an exhaustive brute-force search, are applied to find the optimal sensors providing reliable predictions. Based on the results, a sensor selection approach considering both sensor sensitivity and noise resistance is proposed to find the optimal sensor set with minimum size. Furthermore, the performance of the optimal sensor set is studied to predict fuel cell performance using test data from a PEM fuel cell system. Results demonstrate that with the optimal sensors, the performance of the PEM fuel cell can be predicted with good accuracy.
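An exhaustive search of the kind mentioned above can be phrased as: pick the sensor subset whose rows of the sensitivity matrix yield the best-conditioned estimation problem. The sketch below scores subsets by the smallest singular value of the row submatrix; the random sensitivity matrix and the scoring criterion are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: brute-force sensor subset selection over a sensitivity matrix,
# scoring each subset by its smallest singular value (illustrative).
import itertools
import numpy as np

rng = np.random.default_rng(0)
S = rng.normal(size=(8, 3))       # 8 candidate sensors x 3 health parameters

best = max(
    itertools.combinations(range(S.shape[0]), 4),        # subsets of 4 sensors
    key=lambda idx: np.linalg.svd(S[list(idx)], compute_uv=False)[-1],
)
print("selected sensors:", best)
```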
Laser ablation surface-enhanced Raman microspectroscopy.
Londero, Pablo S; Lombardi, John R; Leona, Marco
2013-06-04
Improved identification of trace organic compounds in complex matrixes is critical for a variety of fields such as material science, heritage science, and forensics. Surface-enhanced Raman scattering (SERS) is a vibrational spectroscopy technique that can attain single-molecule sensitivity and has been shown to complement mass spectrometry, but lacks widespread application without a robust method that utilizes the effect. We demonstrate a new, highly sensitive, and widely applicable approach to SERS analysis based on laser ablation in the presence of a tailored plasmonic substrate. We analyze several challenging compounds, including non-water-soluble pigments and dyed leather from an ancient Egyptian chariot, achieving sensitivity as high as 120 amol for a 1:1 signal-to-noise ratio and 5 μm spatial resolution. This represents orders of magnitude improvement in spatial resolution and sensitivity compared to those of other SERS approaches intended for widespread application, greatly increasing the applicability of SERS.
Mallinckrodt, C H; Lin, Q; Molenberghs, M
2013-01-01
The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data via a re-analysis of data from a confirmatory clinical trial in depression. A likelihood-based approach that assumed missing at random (MAR) was the primary analysis. Robustness to departure from MAR was assessed by comparing the primary result to those from a series of analyses that employed varying missing not at random (MNAR) assumptions (selection models, pattern mixture models and shared parameter models) and to MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation assuming that after dropout the trajectory of drug-treated patients was that of placebo-treated patients with a similar outcome history (placebo multiple imputation). This result was used as the worst reasonable case to define the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was -2.79 (p = .013). In placebo multiple imputation, the result was -2.17. Results from the other sensitivity analyses ranged from -2.21 to -3.87 and were symmetrically distributed around the primary result. Hence, no clear evidence of bias from missing not at random data was found. In the worst reasonable case scenario, the treatment effect was 80% of the magnitude of the primary result. Therefore, it was concluded that a treatment effect existed. The structured sensitivity framework, in which a worst reasonable case result based on a controlled imputation approach with transparent and debatable assumptions was supplemented by a series of plausible alternative models under varying assumptions, was useful in this specific situation and holds promise as a generally useful framework. Copyright © 2012 John Wiley & Sons, Ltd.
Prevailing methodologies in the analysis of gene expression data often neglect to incorporate full concentration and time response due to limitations in throughput and sensitivity with traditional microarray approaches. We have developed a high throughput assay suite using primar...
Hoyer, Annika; Kuss, Oliver
2018-05-01
Meta-analysis of diagnostic studies is still a rapidly developing area of biostatistical research. Especially, there is an increasing interest in methods to compare different diagnostic tests to a common gold standard. Restricting to the case of two diagnostic tests, in these meta-analyses the parameters of interest are the differences of sensitivities and specificities (with their corresponding confidence intervals) between the two diagnostic tests while accounting for the various associations across single studies and between the two tests. We propose statistical models with a quadrivariate response (where sensitivity of test 1, specificity of test 1, sensitivity of test 2, and specificity of test 2 are the four responses) as a sensible approach to this task. Using a quadrivariate generalized linear mixed model naturally generalizes the common standard bivariate model of meta-analysis for a single diagnostic test. If information on several thresholds of the tests is available, the quadrivariate model can be further generalized to yield a comparison of full receiver operating characteristic (ROC) curves. We illustrate our model by an example where two screening methods for the diagnosis of type 2 diabetes are compared.
Exploratory Modeling and the use of Simulation for Policy Analysis
1992-01-01
Bankes, Steven C., Exploratory Modeling and the Use of Simulation for Policy Analysis, RAND, prepared for the United States Army; approved for public release, distribution unlimited. [Only OCR fragments of the report's front matter and reference list survive in this record.]
Evaluation of various modelling approaches in flood routing simulation and flood area mapping
NASA Astrophysics Data System (ADS)
Papaioannou, George; Loukas, Athanasios; Vasiliades, Lampros; Aronica, Giuseppe
2016-04-01
An essential process of flood hazard analysis and mapping is floodplain modelling. The selection of the modelling approach, especially in complex riverine topographies such as urban and suburban areas and in ungauged watersheds, may affect the accuracy of the outcomes in terms of flood depths and flood inundation area. In this study, a sensitivity analysis was implemented using several hydraulic-hydrodynamic modelling approaches (1D, 2D, 1D/2D), and the effect of the modelling approach on flood modelling and flood mapping was investigated. The digital terrain model (DTM) used in this study was generated from Terrestrial Laser Scanning (TLS) point cloud data. The modelling approaches included 1-dimensional hydraulic-hydrodynamic models (1D), 2-dimensional hydraulic-hydrodynamic models (2D) and coupled 1D/2D models. The 1D hydraulic-hydrodynamic models used were: HECRAS, MIKE11, LISFLOOD, XPSTORM. The 2D hydraulic-hydrodynamic models used were: MIKE21, MIKE21FM, HECRAS (2D), XPSTORM, LISFLOOD and FLO2d. The coupled 1D/2D models employed were: HECRAS(1D/2D), MIKE11/MIKE21 (MIKE FLOOD platform), MIKE11/MIKE21 FM (MIKE FLOOD platform), XPSTORM(1D/2D). The validation of flood extent was achieved with the use of 2x2 contingency tables between simulated and observed flooded areas for an extreme historical flash flood event. The Critical Success Index skill score was used in the validation process. The modelling approaches have also been evaluated for simulation time and required computing power. The methodology has been implemented in a suburban ungauged watershed of the Xerias river at Volos, Greece. The results of the analysis indicate the necessity of applying sensitivity analysis with different hydraulic-hydrodynamic modelling approaches, especially for areas with complex terrain.
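The validation score used above is simple to compute from a 2x2 contingency table of simulated versus observed flooded cells: CSI = hits / (hits + misses + false alarms). A minimal sketch, with toy masks standing in for rasterized flood maps:

```python
# Sketch: Critical Success Index from simulated vs. observed flood masks.
import numpy as np

def critical_success_index(simulated, observed):
    sim = np.asarray(simulated, dtype=bool)
    obs = np.asarray(observed, dtype=bool)
    hits = np.sum(sim & obs)              # flooded in both
    misses = np.sum(~sim & obs)           # observed but not simulated
    false_alarms = np.sum(sim & ~obs)     # simulated but not observed
    return hits / (hits + misses + false_alarms)

# Illustrative 1D masks standing in for rasterized flood maps.
sim = [1, 1, 0, 1, 0, 0, 1]
obs = [1, 0, 0, 1, 1, 0, 1]
print(critical_success_index(sim, obs))   # 3 / (3 + 1 + 1) = 0.6
```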
Relevant Feature Set Estimation with a Knock-out Strategy and Random Forests
Ganz, Melanie; Greve, Douglas N.; Fischl, Bruce; Konukoglu, Ender
2015-01-01
Group analysis of neuroimaging data is a vital tool for identifying anatomical and functional variations related to diseases as well as normal biological processes. The analyses are often performed on a large number of highly correlated measurements using a relatively smaller number of samples. Despite the correlation structure, the most widely used approach is to analyze the data using univariate methods followed by post-hoc corrections that try to account for the data’s multivariate nature. Although widely used, this approach may fail to recover from the adverse effects of the initial analysis when local effects are not strong. Multivariate pattern analysis (MVPA) is a powerful alternative to the univariate approach for identifying relevant variations. Jointly analyzing all the measures, MVPA techniques can detect global effects even when individual local effects are too weak to detect with univariate analysis. Current approaches are successful in identifying variations that yield highly predictive and compact models. However, they suffer from lessened sensitivity and instabilities in identification of relevant variations. Furthermore, current methods’ user-defined parameters are often unintuitive and difficult to determine. In this article, we propose a novel MVPA method for group analysis of high-dimensional data that overcomes the drawbacks of the current techniques. Our approach explicitly aims to identify all relevant variations using a “knock-out” strategy and the Random Forest algorithm. In evaluations with synthetic datasets the proposed method achieved substantially higher sensitivity and accuracy than the state-of-the-art MVPA methods, and outperformed the univariate approach when the effect size is low. In experiments with real datasets the proposed method identified regions beyond the univariate approach, while other MVPA methods failed to replicate the univariate results. More importantly, in a reproducibility study with the well-known ADNI dataset the proposed method yielded higher stability and power than the univariate approach. PMID:26272728
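A much-simplified version of the knock-out idea can be sketched with a Random Forest: iteratively remove ("knock out") the currently most important features so that weaker but still relevant features get a chance to surface in later rounds. This is an illustrative reading of the strategy, not the authors' exact algorithm; the data, round count, and top-k choice are assumptions.

```python
# Simplified sketch of a knock-out relevance search with a Random Forest.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)
remaining = list(range(X.shape[1]))
relevant = []

for _ in range(3):                       # three knock-out rounds (illustrative)
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    rf.fit(X[:, remaining], y)
    imp = rf.feature_importances_
    top = [remaining[i] for i in np.argsort(imp)[::-1][:2]]  # top-2 features
    relevant.extend(top)
    remaining = [f for f in remaining if f not in top]       # knock them out

print("features flagged as relevant:", sorted(relevant))
```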
Wei, Binnian; McGuffey, James E; Blount, Benjamin C; Wang, Lanqing
2016-01-01
Maternal exposure to marijuana during the lactation period, whether active or passive, has prompted concerns about transmission of cannabinoids to breastfed infants and possible subsequent adverse health consequences. Assessing these health risks requires a sensitive analytical approach that is able to quantitatively measure trace-level cannabinoids in breast milk. Here, we describe a saponification-solid phase extraction approach combined with ultra-high-pressure liquid chromatography-tandem mass spectrometry for simultaneously quantifying Δ9-tetrahydrocannabinol (THC), cannabidiol (CBD), and cannabinol (CBN) in breast milk. We demonstrate for the first time that constraints on sensitivity can be overcome by utilizing alkaline saponification of the milk samples. After extensive optimization of the saponification procedure, the validated method exhibited limits of detection of 13, 4, and 66 pg/mL for THC, CBN, and CBD, respectively. Notably, the sensitivity achieved was significantly improved; for instance, the limit of detection for THC is at least 100-fold lower than that previously reported in the literature. This is essential for monitoring cannabinoids in breast milk resulting from passive or nonrecent active maternal exposure. Furthermore, we simultaneously acquired multiple reaction monitoring transitions for 12C- and 13C-analyte isotopes. This combined analysis largely facilitated data acquisition by reducing the repeat-analysis rate for samples exceeding the linear limits of the 12C-analytes. In addition to high sensitivity and a broad quantitation range, this method delivers excellent accuracy (relative error within ±10%), precision (relative standard deviation <10%), and efficient analysis. In future studies, we expect this method to play a critical role in assessing infant exposure to cannabinoids through breastfeeding.
Cifuentes, Ricardo A; Murillo-Rojas, Juan; Avella-Vargas, Esperanza
2016-03-03
In the search to prevent hemorrhages associated with anticoagulant therapy, a major goal is to validate predictors of sensitivity to warfarin. However, previous studies in Colombia that included polymorphisms in the VKORC1 and CYP2C9 genes as predictors reported different algorithm performances to explain dose variations, and did not evaluate the prediction of sensitivity to warfarin. To determine the accuracy of the pharmacogenetic analysis, which includes the CYP2C9 *2 and *3 and VKORC1 1639G>A polymorphisms in predicting patients' sensitivity to warfarin at the Hospital Militar Central, a reference center for patients born in different parts of Colombia. Demographic and clinical data were obtained from 130 patients with stable doses of warfarin for more than two months. Next, their genotypes were obtained through a melting curve analysis. After verifying the Hardy-Weinberg equilibrium of the genotypes from the polymorphisms, a statistical analysis was done, which included multivariate and predictive approaches. A pharmacogenetic model that explained 52.8% of dose variation (p<0.001) was built, which was only 4% above the performance resulting from the same data using the International Warfarin Pharmacogenetics Consortium algorithm. The model predicting the sensitivity achieved an accuracy of 77.8% and included age (p=0.003), polymorphisms *2 and *3 (p=0.002) and polymorphism 1639G>A (p<0.001) as predictors. These results in a mixed population support the prediction of sensitivity to warfarin based on polymorphisms in VKORC1 and CYP2C9 as a valid approach in Colombian patients.
Graves, Gabrielle S; Adam, Murtaza K; Stepien, Kimberly E; Han, Dennis P
2014-08-01
To evaluate the sensitivity, specificity and reproducibility of colour difference plot analysis (CDPA) of the 103-hexagon multifocal electroretinogram (mfERG) in detecting established hydroxychloroquine (HCQ) retinal toxicity. Twenty-three patients taking HCQ were divided into those with and without retinal toxicity and were compared with a control group without retinal disease and not taking HCQ. CDPA with two masked examiners was performed using age-corrected mfERG responses in the central ring (Rc; 0–5.5 degrees from fixation) and paracentral ring (Rp; 5.5–11 degrees from fixation). An abnormal ring was defined as containing any hexagons with a difference of two or more standard deviations from normal (colour blue or black). Categorical analysis (ring involvement or not) showed Rc had 83% sensitivity and 93% specificity. Rp had 89% sensitivity and 82% specificity. Requiring abnormal hexagons in both Rc and Rp yielded sensitivity and specificity of 83% and 95%, respectively. If required in only one ring, they were 89% and 80%, respectively. In this population, there was complete agreement in identifying toxicity when comparing CDPA using Rp with ring ratio analysis using R5/R4 P1 ring responses (89% sensitivity and 95% specificity). Continuous analysis of CDPA with receiver operating characteristic analysis showed optimized detection (83% sensitivity and 96% specificity) when ≥4 abnormal hexagons were present anywhere within the Rp ring outline. Intergrader agreement and reproducibility were good. Colour difference plot analysis had sensitivity and specificity that approached those of ring ratio analysis of R5/R4 P1 responses. Ease of implementation and reproducibility are notable advantages of CDPA. © 2014 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Picard, Richard Roy; Bhat, Kabekode Ghanasham
2017-07-18
We examine sensitivity analysis and uncertainty quantification for molecular dynamics simulation. Extreme (large or small) output values for the LAMMPS code often occur at the boundaries of input regions, and uncertainties in those boundary values are overlooked by common SA methods. Similarly, input values for which code outputs are consistent with calibration data can also occur near boundaries. Upon applying approaches in the literature for imprecise probabilities (IPs), much more realistic results are obtained than for the complacent application of standard SA and code calibration.
NASA Technical Reports Server (NTRS)
McGhee, David S.; Peck, Jeff A.; McDonald, Emmett J.
2012-01-01
This paper examines Probabilistic Sensitivity Analysis (PSA) methods and tools in an effort to understand their utility in vehicle loads and dynamic analysis. Specifically, this study addresses how these methods may be used to establish limits on payload mass and cg location and requirements on adaptor stiffnesses while maintaining vehicle loads and frequencies within established bounds. To this end, PSA methods and tools are applied to a realistic, but manageable, integrated launch vehicle analysis where payload and payload adaptor parameters are modeled as random variables. This analysis is used to study both Regional Response PSA (RRPSA) and Global Response PSA (GRPSA) methods, with a primary focus on sampling-based techniques. For contrast, some MPP-based approaches are also examined.
General methods for sensitivity analysis of equilibrium dynamics in patch occupancy models
Miller, David A.W.
2012-01-01
Sensitivity analysis is a useful tool for the study of ecological models that has many potential applications for patch occupancy modeling. Drawing from the rich foundation of existing methods for Markov chain models, I demonstrate new methods for sensitivity analysis of the equilibrium state dynamics of occupancy models. Estimates from three previous studies are used to illustrate the utility of the sensitivity calculations: a joint occupancy model for a prey species, its predators, and habitat used by both; occurrence dynamics from a well-known metapopulation study of three butterfly species; and Golden Eagle occupancy and reproductive dynamics. I show how to deal efficiently with multistate models and how to calculate sensitivities involving derived state variables and lower-level parameters. In addition, I extend methods to incorporate environmental variation by allowing for spatial and temporal variability in transition probabilities. The approach used here is concise and general and can fully account for environmental variability in transition parameters. The methods can be used to improve inferences in occupancy studies by quantifying the effects of underlying parameters, aiding prediction of future system states, and identifying priorities for sampling effort.
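For the simplest single-species case underlying such models, the equilibrium sensitivities have a closed form: with colonization rate γ and extinction rate ε, equilibrium occupancy is ψ* = γ/(γ+ε), so ∂ψ*/∂γ = ε/(γ+ε)² and ∂ψ*/∂ε = −γ/(γ+ε)². The sketch below evaluates these with illustrative rates; the multistate and environmentally varying cases treated in the paper generalize this idea.

```python
# Sketch: equilibrium occupancy and its sensitivities for the simplest
# one-species patch occupancy model (illustrative rates).
gamma, eps = 0.3, 0.1          # colonization and extinction probabilities

psi_star = gamma / (gamma + eps)
d_gamma = eps / (gamma + eps) ** 2     # d psi*/d gamma
d_eps = -gamma / (gamma + eps) ** 2    # d psi*/d eps

print(f"psi* = {psi_star:.3f}")
print(f"d psi*/d gamma = {d_gamma:.3f}, d psi*/d eps = {d_eps:.3f}")
```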
Sensitivity of control-augmented structure obtained by a system decomposition method
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Bloebaum, Christina L.; Hajela, Prabhat
1988-01-01
The verification of a method for computing sensitivity derivatives of a coupled system is presented. The method deals with a system whose analysis can be partitioned into subsets that correspond to disciplines and/or physical subsystems that exchange input-output data with each other. The method uses the partial sensitivity derivatives of the output with respect to input, obtained for each subset separately, to assemble a set of linear, simultaneous, algebraic equations that are solved for the derivatives of the coupled system response. This sensitivity analysis is verified using an example of a cantilever beam augmented with an active control system to limit the beam's dynamic displacements under an excitation force. The verification shows good agreement of the method with reference data obtained by a finite difference technique involving entire-system analysis. The usefulness of the system sensitivity method in optimization applications is demonstrated by employing a piecewise-linear approach on the same numerical example. The method's principal merits are its intrinsically superior accuracy in comparison with the finite difference technique, and its compatibility with the traditional division of work in complex engineering tasks among specialty groups.
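The assembly step described above reduces to a small linear solve. For two scalar subsystems y1 = f1(x, y2) and y2 = f2(x, y1), the total derivatives satisfy (I − J) dy/dx = ∂f/∂x, where J holds the cross-coupling partials. A minimal sketch with illustrative partial-derivative values:

```python
# Sketch: coupled-system sensitivity assembly from subsystem partials,
# (I - J) dy/dx = dpartial/dx, for two scalar subsystems (illustrative).
import numpy as np

# Partial derivatives at the converged coupled solution (assumed values)
dy1_dy2, dy1_dx = 0.3, 1.0     # subsystem 1: y1 = f1(x, y2)
dy2_dy1, dy2_dx = 0.5, 2.0     # subsystem 2: y2 = f2(x, y1)

J = np.array([[0.0, dy1_dy2],
              [dy2_dy1, 0.0]])          # cross-coupling partials
rhs = np.array([dy1_dx, dy2_dx])        # direct partials w.r.t. x

dy_dx = np.linalg.solve(np.eye(2) - J, rhs)
print(dy_dx)    # total derivatives dy1/dx, dy2/dx of the coupled system
```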
Examining the accuracy of the infinite order sudden approximation using sensitivity analysis
NASA Astrophysics Data System (ADS)
Eno, Larry; Rabitz, Herschel
1981-08-01
A method is developed for assessing the accuracy of scattering observables calculated within the framework of the infinite order sudden (IOS) approximation. In particular, we focus on the energy sudden assumption of the IOS method, and our approach involves the determination of the sensitivity of the IOS scattering matrix S(IOS) with respect to a parameter which reintroduces the internal energy operator h0 into the IOS Hamiltonian. This procedure is an example of sensitivity analysis of missing model components (h0 in this case) in the reference Hamiltonian. In contrast to simple first-order perturbation theory, a finite result is obtained for the effect of h0 on S(IOS). As an illustration, our method of analysis is applied to integral state-to-state cross sections for the scattering of an atom and a rigid rotor. Results are generated within the He+H2 system and a comparison is made between IOS and coupled states cross sections and the corresponding IOS sensitivities. It is found that the sensitivity coefficients are very useful indicators of the accuracy of the IOS results. Finally, further developments and applications are discussed.
Uncertainty and Sensitivity Analysis of Afterbody Radiative Heating Predictions for Earth Entry
NASA Technical Reports Server (NTRS)
West, Thomas K., IV; Johnston, Christopher O.; Hosder, Serhat
2016-01-01
The objective of this work was to perform sensitivity analysis and uncertainty quantification for afterbody radiative heating predictions of the Stardust capsule during Earth entry at peak afterbody radiation conditions. The radiation environment in the afterbody region poses significant challenges for accurate uncertainty quantification and sensitivity analysis due to the complexity of the flow physics, computational cost, and large number of uncertain variables. In this study, first a sparse-collocation non-intrusive polynomial chaos approach along with global non-linear sensitivity analysis was used to identify the most significant uncertain variables and reduce the dimensions of the stochastic problem. Then, a total order stochastic expansion was constructed over only the important parameters for an efficient and accurate estimate of the uncertainty in radiation. Based on previous work, 388 uncertain parameters were considered in the radiation model, which came from the thermodynamics, flow field chemistry, and radiation modeling. The sensitivity analysis showed that only four of these variables contributed significantly to afterbody radiation uncertainty, accounting for almost 95% of the uncertainty. These included the electronic-impact excitation rate for N between levels 2 and 5 and the rates of three chemical reactions influencing the N, N(+), O, and O(+) number densities in the flow field.
Zur, RM; Roy, LM; Ito, S; Beyene, J; Carew, C; Ungar, WJ
2016-01-01
Thiopurine S-methyltransferase (TPMT) deficiency increases the risk of serious adverse events in persons receiving thiopurines. The objective was to synthesize reported sensitivity and specificity of TPMT phenotyping and genotyping using a latent class hierarchical summary receiver operating characteristic meta-analysis. In 27 studies, pooled sensitivity and specificity of phenotyping for deficient individuals was 75.9% (95% credible interval (CrI), 58.3–87.0%) and 98.9% (96.3–100%), respectively. For genotype tests evaluating TPMT*2 and TPMT*3, sensitivity and specificity was 90.4% (79.1–99.4%) and 100.0% (99.9–100%), respectively. For individuals with deficient or intermediate activity, phenotype sensitivity and specificity was 91.3% (86.4–95.5%) and 92.6% (86.5–96.6%), respectively. For genotype tests evaluating TPMT*2 and TPMT*3, sensitivity and specificity was 88.9% (81.6–97.5%) and 99.2% (98.4–99.9%), respectively. Genotyping has higher sensitivity as long as TPMT*2 and TPMT*3 are tested. Both approaches display high specificity. Latent class meta-analysis is a useful method for synthesizing diagnostic test performance data for clinical practice guidelines. PMID:27217052
Reliability Coupled Sensitivity Based Design Approach for Gravity Retaining Walls
NASA Astrophysics Data System (ADS)
Guha Ray, A.; Baidya, D. K.
2012-09-01
Sensitivity analysis involving different random variables and different potential failure modes of a gravity retaining wall focuses on the fact that high sensitivity of a particular variable on a particular mode of failure does not necessarily imply a remarkable contribution to the overall failure probability. The present paper aims at identifying a probabilistic risk factor (Rf) for each random variable based on the combined effects of the failure probability (Pf) of each mode of failure of a gravity retaining wall and the sensitivity of each of the random variables on these failure modes. Pf is calculated by Monte Carlo simulation and sensitivity analysis of each random variable is carried out by F-test analysis. The structure, redesigned by modifying the original random variables with the risk factors, is safe against all the variations of random variables. It is observed that Rf for the friction angle of the backfill soil (φ1) increases and that for the cohesion of the foundation soil (c2) decreases with an increase in the variation of φ1, while Rf for the unit weights of both soils (γ1 and γ2) and the friction angle of the foundation soil (φ2) remains almost constant for variations in soil properties. The results compared well with some of the existing deterministic and probabilistic methods and were found to be cost-effective. It is seen that if the variation of φ1 remains within 5%, a significant reduction in cross-sectional area can be achieved. But if the variation is more than 7–8%, the structure needs to be modified. Finally, design guidelines for different wall dimensions, based on the present approach, are proposed.
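The Monte Carlo step for Pf can be sketched for a single failure mode: sample the random variables, evaluate a limit state g, and count the fraction of samples with g < 0. The sliding limit state, distributions, and parameter values below are assumptions chosen for illustration, not the paper's full multi-mode formulation.

```python
# Sketch: Monte Carlo failure probability for one illustrative limit state
# (sliding of a gravity wall): g = W*tan(phi2) - Pa, failure when g < 0.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

phi2 = np.radians(rng.normal(30.0, 2.0, n))    # foundation friction angle (deg)
W = rng.normal(500.0, 25.0, n)                 # wall weight (kN/m)
Pa = rng.normal(180.0, 20.0, n)                # active earth thrust (kN/m)

g = W * np.tan(phi2) - Pa                      # sliding limit state
pf = np.mean(g < 0.0)
print(f"estimated P_f (sliding) = {pf:.4f}")
```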
Simoneau, Gabrielle; Levis, Brooke; Cuijpers, Pim; Ioannidis, John P A; Patten, Scott B; Shrier, Ian; Bombardier, Charles H; de Lima Osório, Flavia; Fann, Jesse R; Gjerdingen, Dwenda; Lamers, Femke; Lotrakul, Manote; Löwe, Bernd; Shaaban, Juwita; Stafford, Lesley; van Weert, Henk C P M; Whooley, Mary A; Wittkampf, Karin A; Yeung, Albert S; Thombs, Brett D; Benedetti, Andrea
2017-11-01
Individual patient data (IPD) meta-analyses are increasingly common in the literature. In the context of estimating the diagnostic accuracy of ordinal or semi-continuous scale tests, sensitivity and specificity are often reported for a given threshold or a small set of thresholds, and a meta-analysis is conducted via a bivariate approach to account for their correlation. When IPD are available, sensitivity and specificity can be pooled for every possible threshold. Our objective was to compare the bivariate approach, which can be applied separately at every threshold, to two multivariate methods: the ordinal multivariate random-effects model and the Poisson correlated gamma-frailty model. Our comparison was empirical, using IPD from 13 studies that evaluated the diagnostic accuracy of the 9-item Patient Health Questionnaire depression screening tool, and included simulations. The empirical comparison showed that the implementation of the two multivariate methods is more laborious in terms of computational time and sensitivity to user-supplied values compared to the bivariate approach. Simulations showed that ignoring the within-study correlation of sensitivity and specificity across thresholds did not worsen inferences with the bivariate approach compared to the Poisson model. The ordinal approach was not suitable for simulations because the model was highly sensitive to user-supplied starting values. We tentatively recommend the bivariate approach rather than more complex multivariate methods for IPD diagnostic accuracy meta-analyses of ordinal scale tests, although the limited type of diagnostic data considered in the simulation study restricts the generalization of our findings. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Skeletal Mechanism Generation of Surrogate Jet Fuels for Aeropropulsion Modeling
NASA Astrophysics Data System (ADS)
Sung, Chih-Jen; Niemeyer, Kyle E.
2010-05-01
A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with skeletal reductions of two important hydrocarbon components, n-heptane and n-decane, relevant to surrogate jet fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species prior to sensitivity analysis to further remove unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each previous method, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal.
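The error-propagation step at the heart of DRGEP can be sketched compactly: direct interaction coefficients r_AB on a species graph are combined along paths by multiplication, and a species is retained if its best path coefficient from any target exceeds a threshold. The tiny graph and coefficient values below are illustrative, not a real mechanism.

```python
# Sketch of DRGEP-style error propagation on a species graph: path
# coefficients are products of direct interaction coefficients, and
# species below the threshold are removed (illustrative graph).
def drgep_keep(direct, targets, threshold):
    """direct[(a, b)] = r_AB; returns species whose best path coefficient
    from any target species meets the threshold."""
    species = {s for pair in direct for s in pair}
    best = {s: 0.0 for s in species}
    stack = [(t, 1.0) for t in targets]
    while stack:
        node, coeff = stack.pop()
        if coeff <= best.get(node, 0.0):
            continue                           # no stronger path found
        best[node] = coeff
        for (a, b), r in direct.items():
            if a == node:
                stack.append((b, coeff * r))   # propagate along the path
    return {s for s, c in best.items() if c >= threshold}

direct = {("fuel", "A"): 0.9, ("A", "B"): 0.5, ("B", "C"): 0.01,
          ("fuel", "C"): 0.002}
print(drgep_keep(direct, targets=["fuel"], threshold=0.1))
# keeps {'fuel', 'A', 'B'}; C's best path coefficient is far below 0.1
```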
Quantitative mass spectrometry methods for pharmaceutical analysis
Loos, Glenn; Van Schepdael, Ann
2016-01-01
Quantitative pharmaceutical analysis is nowadays frequently executed using mass spectrometry. Electrospray ionization coupled to a (hybrid) triple quadrupole mass spectrometer is generally used in combination with solid-phase extraction and liquid chromatography. Furthermore, isotopically labelled standards are often used to correct for ion suppression. The challenges in producing sensitive but reliable quantitative data depend on the instrumentation, sample preparation and hyphenated techniques. In this contribution, different approaches to enhance the ionization efficiency using modified source geometries and improved ion guidance are provided. Furthermore, possibilities to minimize, assess and correct for matrix interferences caused by co-eluting substances are described. With the focus on pharmaceuticals in the environment and bioanalysis, different separation techniques, trends in liquid chromatography and sample preparation methods to minimize matrix effects and increase sensitivity are discussed. Although highly sensitive methods providing automated multi-residue analysis are generally the goal, (less sensitive) miniaturized set-ups have great potential owing to their suitability for in-field use. This article is part of the themed issue ‘Quantitative mass spectrometry’. PMID:27644982
Briggs, Andrew H; Ades, A E; Price, Martin J
2003-01-01
In structuring decision models of medical interventions, it is commonly recommended that only 2 branches be used for each chance node to avoid logical inconsistencies that can arise during sensitivity analyses if the branching probabilities do not sum to 1. However, information may be naturally available in an unconditional form, and structuring a tree in conditional form may complicate rather than simplify the sensitivity analysis of the unconditional probabilities. Current guidance emphasizes using probabilistic sensitivity analysis, and a method is required to assign probability distributions over multiple branches that appropriately represent uncertainty while satisfying the requirement that mutually exclusive event probabilities sum to 1. The authors argue that the Dirichlet distribution, the multivariate equivalent of the beta distribution, is appropriate for this purpose and illustrate its use for generating a fully probabilistic transition matrix for a Markov model. Furthermore, they demonstrate that by adopting a Bayesian approach, the problem of observing zero counts for transitions of interest can be overcome.
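The mechanics of the suggestion are easy to demonstrate. Below is a minimal sketch of one probabilistic draw of a Markov transition matrix, with an invented count matrix (not the paper's data) and a uniform Dirichlet prior that keeps zero-count transition probabilities positive, in the spirit of the Bayesian approach the authors describe.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical observed transition counts for a 3-state Markov model.
counts = np.array([[85, 10, 5],
                   [0, 60, 12],     # a zero count: no 2->1 transitions seen
                   [3, 9, 100]])

prior = 1.0  # uniform Dirichlet prior over each row

# One probabilistic-sensitivity-analysis draw of the full transition matrix:
# each row is sampled from Dirichlet(counts + prior), so every sampled row
# sums to 1 exactly and mutually exclusive probabilities stay consistent.
P = np.vstack([rng.dirichlet(row + prior) for row in counts])
print(P.sum(axis=1))  # -> [1. 1. 1.]
```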
NASA Astrophysics Data System (ADS)
Khawli, Toufik Al; Gebhardt, Sascha; Eppelt, Urs; Hermanns, Torsten; Kuhlen, Torsten; Schulz, Wolfgang
2016-06-01
In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.
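The screening measure used in the third part, the Elementary Effect, reduces to a short one-at-a-time sampling loop. A minimal sketch follows, assuming a scalar-valued metamodel and uniform parameter bounds; it returns the mean absolute elementary effect (the usual mu* ranking statistic) and is not the authors' implementation.

```python
import numpy as np

def mu_star(model, lo, hi, r=20, delta=0.1, seed=0):
    """Morris-style elementary-effects screening on the unit hypercube.

    model:  callable mapping a k-vector of physical inputs to a scalar
            (here, the cheap metamodel standing in for the simulation)
    lo, hi: arrays of lower/upper parameter bounds
    """
    rng = np.random.default_rng(seed)
    k = len(lo)
    effects = np.zeros((r, k))
    for i in range(r):
        x = rng.uniform(0.0, 1.0 - delta, k)        # random base point
        fx = model(lo + x * (hi - lo))
        for j in range(k):                          # perturb one input at a time
            xp = x.copy()
            xp[j] += delta
            effects[i, j] = (model(lo + xp * (hi - lo)) - fx) / delta
    return np.abs(effects).mean(axis=0)             # mu*: screening measure

# e.g. mu_star(lambda z: z[0]**2 + 0.1 * z[1], np.zeros(3), np.ones(3))
```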
NASA Astrophysics Data System (ADS)
Newman, James Charles, III
1997-10-01
The first two steps in the development of an integrated multidisciplinary design optimization procedure capable of analyzing the nonlinear fluid flow about geometrically complex aeroelastic configurations have been accomplished in the present work. For the first step, a three-dimensional unstructured grid approach to aerodynamic shape sensitivity analysis and design optimization has been developed. The advantage of unstructured grids, when compared with a structured-grid approach, is their inherent ability to discretize irregularly shaped domains with greater efficiency and less effort. Hence, this approach is ideally suited for geometrically complex configurations of practical interest. In this work the time-dependent, nonlinear Euler equations are solved using an upwind, cell-centered, finite-volume scheme. The discrete, linearized systems which result from this scheme are solved iteratively by a preconditioned conjugate-gradient-like algorithm known as GMRES for the two-dimensional cases and by a Gauss-Seidel algorithm for the three-dimensional cases; at steady state, similar procedures are used to solve the accompanying linear aerodynamic sensitivity equations in incremental iterative form. As shown, this particular form of the sensitivity equation makes large-scale gradient-based aerodynamic optimization possible by taking advantage of memory-efficient methods to construct exact Jacobian matrix-vector products. Various surface parameterization techniques have been employed in the current study to control the shape of the design surface. Once this surface has been deformed, the interior volume of the unstructured grid is adapted by considering the mesh as a system of interconnected tension springs. Grid sensitivities are obtained by differentiating the surface parameterization and the grid adaptation algorithms with ADIFOR, an advanced automatic-differentiation software tool. To demonstrate the ability of this procedure to analyze and design complex configurations of practical interest, sensitivity analysis and shape optimization have been performed for several two- and three-dimensional cases. In two dimensions, an initially symmetric NACA-0012 airfoil and a high-lift multielement airfoil were examined. For the three-dimensional configurations, an initially rectangular wing with uniform NACA-0012 cross-sections was optimized; in addition, a complete Boeing 747-200 aircraft was studied. Furthermore, the current study also examines the effect of inconsistency in the order of spatial accuracy between the nonlinear fluid and linear shape sensitivity equations. The second step was to develop a computationally efficient, high-fidelity, integrated static aeroelastic analysis procedure. To accomplish this, a structural analysis code was coupled with the aforementioned unstructured grid aerodynamic analysis solver. The use of an unstructured grid scheme for the aerodynamic analysis enhances the interaction compatibility with the wing structure. The structural analysis utilizes finite elements to model the wing so that accurate structural deflections may be obtained. In the current work, parameters have been introduced to control the interaction of the computational fluid dynamics and structural analyses; these control parameters permit extremely efficient static aeroelastic computations. To demonstrate and evaluate this procedure, static aeroelastic analysis results for a flexible wing in low subsonic, high subsonic (subcritical), transonic (supercritical), and supersonic flow conditions are presented.
Improvement of the cost-benefit analysis algorithm for high-rise construction projects
NASA Astrophysics Data System (ADS)
Gafurov, Andrey; Skotarenko, Oksana; Plotnikov, Vladimir
2018-03-01
The specific nature of high-rise investment projects, which entail long-term construction and high risks, implies a need to improve the standard algorithm of cost-benefit analysis. An improved algorithm is described in the article. For development of the improved algorithm of cost-benefit analysis for high-rise construction projects, the following methods were used: weighted average cost of capital, dynamic cost-benefit analysis of investment projects, risk mapping, scenario analysis, sensitivity analysis of critical ratios, etc. This comprehensive approach helped to adapt the original algorithm to feasibility objectives in high-rise construction. The authors put together the algorithm of cost-benefit analysis for high-rise construction projects on the basis of risk mapping and sensitivity analysis of critical ratios. The suggested project risk management algorithms greatly expand the standard algorithm of cost-benefit analysis in investment projects, namely: the "Project analysis scenario" flowchart, improving the quality and reliability of forecasting reports in investment projects; the main stages of cash flow adjustment based on risk mapping, for better cost-benefit project analysis given the broad range of risks in high-rise construction; and analysis of dynamic cost-benefit values considering project sensitivity to crucial variables, improving flexibility in the implementation of high-rise projects.
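The dynamic cost-benefit core of such an algorithm comes down to discounting project cash flows at the weighted average cost of capital and stress-testing the crucial variables. A minimal sketch with entirely hypothetical cash flows and WACC:

```python
import numpy as np

def npv(cash_flows, wacc):
    """Net present value of yearly cash flows discounted at the WACC."""
    t = np.arange(len(cash_flows))
    return float(np.sum(np.asarray(cash_flows) / (1.0 + wacc) ** t))

# Hypothetical high-rise project: large up-front outlay, staged sales revenue.
base = [-120e6, 10e6, 25e6, 45e6, 60e6, 55e6]
wacc = 0.12
print(f"base-case NPV: {npv(base, wacc) / 1e6:.1f} M")

# One-way sensitivity analysis: shock each year's revenue by +/-15% and
# record the resulting NPV swing to identify the critical periods.
for year in range(1, len(base)):
    for shock in (-0.15, 0.15):
        cf = list(base)
        cf[year] *= 1.0 + shock
        print(f"year {year}, revenue {shock:+.0%}: NPV {npv(cf, wacc)/1e6:.1f} M")
```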
Chwialkowska, Karolina; Korotko, Urszula; Kosinska, Joanna; Szarejko, Iwona; Kwasniewski, Miroslaw
2017-01-01
Epigenetic mechanisms, including histone modifications and DNA methylation, mutually regulate chromatin structure, maintain genome integrity, and affect gene expression and transposon mobility. Variations in DNA methylation within plant populations, as well as methylation in response to internal and external factors, are of increasing interest, especially in the crop research field. Methylation Sensitive Amplification Polymorphism (MSAP) is one of the most commonly used methods for assessing DNA methylation changes in plants. This method involves gel-based visualization of PCR fragments from selectively amplified DNA that are cleaved using methylation-sensitive restriction enzymes. In this study, we developed and validated a new method based on the conventional MSAP approach called Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq). We improved the MSAP-based approach by replacing the conventional separation of amplicons on polyacrylamide gels with direct, high-throughput sequencing using Next Generation Sequencing (NGS) and automated data analysis. MSAP-Seq allows for global sequence-based identification of changes in DNA methylation. This technique was validated in Hordeum vulgare. However, MSAP-Seq can be straightforwardly implemented in different plant species, including crops with large, complex and highly repetitive genomes. The incorporation of high-throughput sequencing into MSAP-Seq enables parallel and direct analysis of DNA methylation in hundreds of thousands of sites across the genome. MSAP-Seq provides direct genomic localization of changes and enables quantitative evaluation. We have shown that the MSAP-Seq method specifically targets gene-containing regions and that a single analysis can cover three-quarters of all genes in large genomes. Moreover, MSAP-Seq's simplicity, cost effectiveness, and high-multiplexing capability make this method highly affordable. Therefore, MSAP-Seq can be used for DNA methylation analysis in crop plants with large and complex genomes.
Vandeweghe, Laura; Vervoort, Leentje; Verbeken, Sandra; Moens, Ellen; Braet, Caroline
2016-01-01
It has recently been suggested that individual differences in Reward Sensitivity and Punishment Sensitivity may determine how children respond to food. These temperamental traits reflect activity in two basic brain systems that respond to rewarding and punishing stimuli, respectively, with approach and avoidance. Via parent-report questionnaires, we investigate the associations of the general motivational temperamental traits Reward Sensitivity and Punishment Sensitivity with Food Approach and Food Avoidance in 98 preschool children. Consistent with the conceptualization of Reward Sensitivity in terms of approach behavior and Punishment Sensitivity in terms of avoidance behavior, Reward Sensitivity was positively related to Food Approach, while Punishment Sensitivity was positively related to Food Avoidance. Future research should integrate these perspectives (i.e., general temperamental traits Reward Sensitivity and Punishment Sensitivity, and Food Approach and Avoidance) to get a better understanding of eating behavior and related body weight. PMID:27445898
Spatially resolved δ13C analysis using laser ablation isotope ratio mass spectrometry
NASA Astrophysics Data System (ADS)
Moran, J.; Riha, K. M.; Nims, M. K.; Linley, T. J.; Hess, N. J.; Nico, P. S.
2014-12-01
Inherent geochemical, organic matter, and microbial heterogeneity over small spatial scales can complicate studies of carbon dynamics through soils. Stable isotope analysis has a strong history of helping to track substrate turnover, delineate rhizosphere activity zones, and identify transitions in vegetation cover, but most traditional isotope approaches are limited in spatial resolution by a combination of physical separation techniques (manual dissection) and IRMS instrument sensitivity. We coupled laser ablation sampling with isotope measurement via IRMS to enable spatially resolved analysis over solid surfaces. Once a targeted sample region is ablated, the resulting particulates are entrained in a helium carrier gas and passed through a combustion reactor where carbon is converted to CO2. Cryotrapping of the resulting CO2 enables a reduction in carrier gas flow, which improves overall measurement sensitivity versus traditional, high-flow sample introduction. Currently we perform sample analysis at 50 μm resolution, require 65 ng C per analysis, and achieve measurement precision consistent with other continuous flow techniques. We will discuss applications of the laser ablation IRMS (LA-IRMS) system to microbial communities and fish ecology studies to demonstrate the merits of this technique and how similar analytical approaches can be transitioned to soil systems. Preliminary efforts at analyzing soil samples will be used to highlight strengths and limitations of the LA-IRMS approach, paying particular attention to sample preparation requirements, spatial resolution, sample analysis time, and the types of questions most conducive to analysis via LA-IRMS.
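For reference, δ13C values reported by such systems use the standard per-mil delta notation relative to the VPDB standard. A minimal sketch (the VPDB ratio is the conventional literature value; the sample ratio is illustrative):

```python
VPDB = 0.0111802  # conventional 13C/12C ratio of Vienna Pee Dee Belemnite

def delta13C(r_sample, r_standard=VPDB):
    """Per-mil delta notation: (R_sample / R_standard - 1) * 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

print(delta13C(0.0110950))  # about -7.6 per mil, near atmospheric CO2
```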
NASA Astrophysics Data System (ADS)
Önal, Orkun; Ozmenci, Cemre; Canadinc, Demircan
2014-09-01
A multi-scale modeling approach was applied to predict the impact response of a strain rate sensitive high-manganese austenitic steel. The roles of texture, geometry and strain rate sensitivity were successfully taken into account all at once by coupling crystal plasticity and finite element (FE) analysis. Specifically, crystal plasticity was utilized to obtain the multi-axial flow rule at different strain rates based on the experimental deformation response under uniaxial tensile loading. The equivalent stress - equivalent strain response was then incorporated into the FE model for the sake of a more representative hardening rule under impact loading. The current results demonstrate that reliable predictions can be obtained by proper coupling of crystal plasticity and FE analysis even if the experimental flow rule of the material is acquired under uniaxial loading and at moderate strain rates that are significantly slower than those attained during impact loading. Furthermore, the current findings also demonstrate the need for an experiment-based multi-scale modeling approach for the sake of reliable predictions of the impact response.
Automated diagnosis of Alzheimer's disease with multi-atlas based whole brain segmentations
NASA Astrophysics Data System (ADS)
Luo, Yuan; Tang, Xiaoying
2017-03-01
Voxel-based analysis is widely used in quantitative analysis of structural brain magnetic resonance imaging (MRI) and automated disease detection, such as for Alzheimer's disease (AD). However, noise at the voxel level may cause low sensitivity to AD-induced structural abnormalities. This can be addressed with the use of a whole brain structural segmentation approach, which greatly reduces the dimension of the features (the number of voxels). In this paper, we propose an automatic AD diagnosis system that combines such whole brain segmentations with advanced machine learning methods. We used a multi-atlas segmentation technique to parcellate T1-weighted images into 54 distinct brain regions and extract their structural volumes to serve as the features for principal-component-analysis-based dimension reduction and support-vector-machine-based classification. The relationship between the number of retained principal components (PCs) and the diagnosis accuracy was systematically evaluated, in a leave-one-out fashion, based on 28 AD subjects and 23 age-matched healthy subjects. Our approach yielded strong classification results, with 96.08% overall accuracy achieved using the three foremost PCs. In addition, our approach yielded 96.43% specificity, 100% sensitivity, and 0.9891 area under the receiver operating characteristic curve.
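The pipeline described (regional volume features, PCA dimension reduction, SVM classification, leave-one-out evaluation) maps directly onto standard tooling. A minimal sketch assuming scikit-learn, with randomly generated stand-in features; the multi-atlas segmentation that produces the real 54 volumes is not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import LeaveOneOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data: 51 subjects x 54 regional brain volumes; 1 = AD, 0 = healthy.
rng = np.random.default_rng(0)
X = rng.normal(size=(51, 54))
y = np.array([1] * 28 + [0] * 23)

# Scale the volumes, keep the three foremost PCs, classify with an SVM.
clf = make_pipeline(StandardScaler(), PCA(n_components=3), SVC(kernel="linear"))

correct = 0
for train, test in LeaveOneOut().split(X):
    clf.fit(X[train], y[train])
    correct += int(clf.predict(X[test])[0] == y[test][0])
print(f"leave-one-out accuracy: {correct / len(y):.2%}")
```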
Burgess, Stephen; Zuber, Verena; Valdes-Marquez, Elsa; Sun, Benjamin B; Hopewell, Jemma C
2017-12-01
Mendelian randomization uses genetic variants to make causal inferences about the effect of a risk factor on an outcome. With fine-mapped genetic data, there may be hundreds of genetic variants in a single gene region any of which could be used to assess this causal relationship. However, using too many genetic variants in the analysis can lead to spurious estimates and inflated Type 1 error rates. But if only a few genetic variants are used, then the majority of the data is ignored and estimates are highly sensitive to the particular choice of variants. We propose an approach based on summarized data only (genetic association and correlation estimates) that uses principal components analysis to form instruments. This approach has desirable theoretical properties: it takes the totality of data into account and does not suffer from numerical instabilities. It also has good properties in simulation studies: it is not particularly sensitive to varying the genetic variants included in the analysis or the genetic correlation matrix, and it does not have greatly inflated Type 1 error rates. Overall, the method gives estimates that are less precise than those from variable selection approaches (such as using a conditional analysis or pruning approach to select variants), but are more robust to seemingly arbitrary choices in the variable selection step. Methods are illustrated by an example using genetic associations with testosterone for 320 genetic variants to assess the effect of sex hormone related pathways on coronary artery disease risk, in which variable selection approaches give inconsistent inferences. © 2017 The Authors Genetic Epidemiology Published by Wiley Periodicals, Inc.
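A heavily simplified sketch of the summarized-data idea: take leading principal components of the variant correlation matrix and project the per-variant associations onto them, so that downstream estimation uses a few stable instruments instead of hundreds of correlated variants. The function name and the unweighted projection are assumptions for illustration; the authors' approach includes precision-weighting details not shown here.

```python
import numpy as np

def pca_instruments(beta_exposure, beta_outcome, corr, k=2):
    """Project summarized genetic associations onto top-k PCs of the LD matrix.

    beta_exposure / beta_outcome: per-variant association estimates
    corr: genetic correlation (LD) matrix of the variants
    """
    w, v = np.linalg.eigh(corr)               # eigen-decompose the LD matrix
    pcs = v[:, np.argsort(w)[::-1][:k]]       # keep the k leading components
    bx = pcs.T @ np.asarray(beta_exposure)    # transformed associations
    by = pcs.T @ np.asarray(beta_outcome)
    return bx, by   # these feed a standard IVW-type causal estimate
```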
Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H
2017-03-01
To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples-thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities-to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.
Peiris, Ramila H; Ignagni, Nicholas; Budman, Hector; Moresoli, Christine; Legge, Raymond L
2012-09-15
Characterization of the interactions between natural colloidal/particulate- and protein-like matter is important for understanding their contribution to different physicochemical phenomena such as membrane fouling, adsorption of bacteria onto surfaces, and various applications of nanoparticles in nanomedicine and nanotoxicology. Precise interpretation of the extent of such interactions is, however, hindered by the limited ability of most characterization methods to provide rapid, sensitive and accurate measurements. Here we report on a fluorescence-based excitation-emission matrix (EEM) approach in combination with principal component analysis (PCA) to extract information related to the interaction between natural colloidal/particulate- and protein-like matter. Surface plasmon resonance (SPR) analysis and fiber-optic probe based surface fluorescence measurements were used to confirm that the proposed approach can be used to characterize colloidal/particulate-protein interactions at the physical level. This method has potential to be a fundamental measurement of these interactions, with the advantage that it can be performed rapidly and with high sensitivity. Copyright © 2012 Elsevier B.V. All rights reserved.
Bibi, Aisha; Ju, Huangxian
2016-04-01
A quantum dots (QDs) assisted laser desorption/ionization mass spectrometric (QDA-LDI-MS) strategy was proposed for qualitative and quantitative analysis of a series of carbohydrates. The adsorption of carbohydrates on the modified surface of different QDs as the matrices depended mainly on the formation of hydrogen bonding, which led to higher MS intensity than that obtained with a conventional organic matrix. The effects of QDs concentration and sample preparation method were explored for improving the selective ionization process and the detection sensitivity. The proposed approach offered a new dimension to the application of QDs as matrices for MALDI-MS research of carbohydrates. It could be used for quantitative measurement of glucose concentration in human serum with good performance. The QDs, serving as the matrix, showed the advantages of low background, higher sensitivity, convenient sample preparation and excellent stability under vacuum. The QDs assisted LDI-MS approach has promising application to the analysis of carbohydrates in complex biological samples. Copyright © 2016 John Wiley & Sons, Ltd.
Approaches to answering critical CER questions.
Kinnier, Christine V; Chung, Jeanette W; Bilimoria, Karl Y
2015-01-01
While randomized controlled trials (RCTs) are the gold standard for research, many research questions cannot be ethically and practically answered using an RCT. Comparative effectiveness research (CER) techniques are often better suited than RCTs to address the effects of an intervention under routine care conditions, an outcome otherwise known as effectiveness. CER research techniques covered in this section include: effectiveness-oriented experimental studies such as pragmatic trials and cluster randomized trials, treatment response heterogeneity, observational and database studies including adjustment techniques such as sensitivity analysis and propensity score analysis, systematic reviews and meta-analysis, decision analysis, and cost effectiveness analysis. Each section describes the technique and covers the strengths and weaknesses of the approach.
To explore the potential of exhaled breath analysis by gas chromatography-mass spectrometry (GC-MS) as a non-invasive and sensitive approach to evaluate mesenteric ischemia in pigs.
Domestic pigs (n=3) were anesthetized with Guaifenesin/ Fentanyl/ Ketamine/ Xylazine...
Publication Bias in Research Synthesis: Sensitivity Analysis Using A Priori Weight Functions
ERIC Educational Resources Information Center
Vevea, Jack L.; Woods, Carol M.
2005-01-01
Publication bias, sometimes known as the "file-drawer problem" or "funnel-plot asymmetry," is common in empirical research. The authors review the implications of publication bias for quantitative research synthesis (meta-analysis) and describe existing techniques for detecting and correcting it. A new approach is proposed that is suitable for…
Sensitivity Analysis of Multiple Informant Models When Data are Not Missing at Random
Blozis, Shelley A.; Ge, Xiaojia; Xu, Shu; Natsuaki, Misaki N.; Shaw, Daniel S.; Neiderhiser, Jenae; Scaramella, Laura; Leve, Leslie; Reiss, David
2014-01-01
Missing data are common in studies that rely on multiple informant data to evaluate relationships among variables for distinguishable individuals clustered within groups. Estimation of structural equation models using raw data allows for incomplete data, and so all groups may be retained even if only one member of a group contributes data. Statistical inference is based on the assumption that data are missing completely at random or missing at random. Importantly, whether or not data are missing is assumed to be independent of the missing data. A saturated correlates model that incorporates correlates of the missingness or the missing data into an analysis and multiple imputation that may also use such correlates offer advantages over the standard implementation of SEM when data are not missing at random because these approaches may result in a data analysis problem for which the missingness is ignorable. This paper considers these approaches in an analysis of family data to assess the sensitivity of parameter estimates to assumptions about missing data, a strategy that may be easily implemented using SEM software. PMID:25221420
Liu, Richard T; Burke, Taylor A; Abramson, Lyn Y; Alloy, Lauren B
2017-11-04
Behavioral Approach System (BAS) sensitivity has been implicated in the development of a variety of different psychiatric disorders. Prominent among these in the empirical literature are bipolar spectrum disorders (BSDs). Given that adolescence represents a critical developmental stage of risk for the onset of BSDs, it is important to clarify the latent structure of BAS sensitivity in this period of development. A statistical approach especially well-suited for delineating the latent structure of BAS sensitivity is taxometric analysis, which is designed to evaluate whether the latent structure of a construct is taxonic (i.e., categorical) or dimensional (i.e., continuous) in nature. The current study applied three mathematically non-redundant taxometric procedures (i.e., MAMBAC, MAXEIG, and L-Mode) to a large community sample of adolescents (n = 12,494) who completed two separate measures of BAS sensitivity: the BIS/BAS Scales (Carver and White, Journal of Personality and Social Psychology, 67, 319-333, 1994) and the Sensitivity to Reward and Sensitivity to Punishment Questionnaire (Torrubia et al., Personality and Individual Differences, 31, 837-862, 2001). Given the significant developmental changes in reward sensitivity that occur across adolescence, the current investigation aimed to provide a fine-grained evaluation of the data by performing taxometric analyses at an age-by-age level (14-19 years; n for each age ≥ 883). Results derived from taxometric procedures, across all ages tested, were highly consistent, providing strong evidence that BAS sensitivity is best conceptualized as dimensional in nature. Thus, the findings suggest that BAS-related vulnerability to BSDs exists along a continuum of severity, with no natural cut-point qualitatively differentiating high- and low-risk adolescents. Clinical and research implications for the assessment of BSD-related vulnerability are discussed.
Andronis, L; Barton, P; Bryan, S
2009-06-01
To determine how we define good practice in sensitivity analysis in general and probabilistic sensitivity analysis (PSA) in particular, and to what extent it has been adhered to in the independent economic evaluations undertaken for the National Institute for Health and Clinical Excellence (NICE) over recent years; to establish what policy impact sensitivity analysis has in the context of NICE, and policy-makers' views on sensitivity analysis and uncertainty, and what use is made of sensitivity analysis in policy decision-making. Three major electronic databases, MEDLINE, EMBASE and the NHS Economic Evaluation Database, were searched from inception to February 2008. The meaning of 'good practice' in the broad area of sensitivity analysis was explored through a review of the literature. An audit was undertaken of the 15 most recent NICE multiple technology appraisal judgements and their related reports to assess how sensitivity analysis has been undertaken by independent academic teams for NICE. A review of the policy and guidance documents issued by NICE aimed to assess the policy impact of the sensitivity analysis and the PSA in particular. Qualitative interview data from NICE Technology Appraisal Committee members, collected as part of an earlier study, were also analysed to assess the value attached to the sensitivity analysis components of the economic analyses conducted for NICE. All forms of sensitivity analysis, notably both deterministic and probabilistic approaches, have their supporters and their detractors. Practice in relation to univariate sensitivity analysis is highly variable, with considerable lack of clarity in relation to the methods used and the basis of the ranges employed. In relation to PSA, there is a high level of variability in the form of distribution used for similar parameters, and the justification for such choices is rarely given. Virtually all analyses failed to consider correlations within the PSA, and this is an area of concern. Uncertainty is considered explicitly in the process of arriving at a decision by the NICE Technology Appraisal Committee, and a correlation between high levels of uncertainty and negative decisions was indicated. The findings suggest considerable value in deterministic sensitivity analysis. Such analyses serve to highlight which model parameters are critical to driving a decision. Strong support was expressed for PSA, principally because it provides an indication of the parameter uncertainty around the incremental cost-effectiveness ratio. The review and the policy impact assessment focused exclusively on documentary evidence, excluding other sources that might have revealed further insights on this issue. In seeking to address parameter uncertainty, both deterministic and probabilistic sensitivity analyses should be used. It is evident that some cost-effectiveness work, especially around the sensitivity analysis components, represents a challenge in making it accessible to those making decisions. This speaks to the training agenda for those sitting on such decision-making bodies, and to the importance of clear presentation of analyses by the academic community.
Optimization for minimum sensitivity to uncertain parameters
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.; Sobieszczanski-Sobieski, Jaroslaw
1994-01-01
A procedure to design a structure for minimum sensitivity to uncertainties in problem parameters is described. The approach is to directly minimize the sensitivity derivatives of the optimum design with respect to fixed design parameters using a nested optimization procedure. The procedure is demonstrated for the design of a bimetallic beam for minimum weight with insensitivity to uncertainties in structural properties. The beam is modeled with finite elements based on two-dimensional beam analysis. A sequential quadratic programming procedure used as the optimizer supplies the Lagrange multipliers that are used to calculate the optimum sensitivity derivatives. The method was judged successful based on comparisons of the optimization results with parametric studies.
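The optimum sensitivity derivatives referred to here admit a compact statement; the following is the standard post-optimality result (the notation is assumed for illustration, not taken from the paper):

```latex
% Sensitivity of the optimum objective F* to a fixed problem parameter p,
% with lambda_j the Lagrange multipliers of the active constraints g_j
% (supplied by the sequential quadratic programming optimizer).
\frac{dF^{*}}{dp} = \frac{\partial F}{\partial p}
                  + \sum_{j} \lambda_{j}\, \frac{\partial g_{j}}{\partial p}
```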
A chemical-genetic approach for functional analysis of plant protein kinases
Salomon, Dor; Bonshtien, Arale
2009-01-01
Plant genomes encode hundreds of protein kinases, yet precise functions and phosphorylation targets have been identified for only a small fraction of them. Recently, we applied a chemical-genetic approach to sensitize the tomato serine/threonine kinase Pto to analogs of PP1, an ATP-competitive and cell-permeable small-molecule inhibitor. The Pto kinase confers resistance to Pst bacteria by activating immune responses upon specific recognition of bacterial effectors. By using PP1 analogs in combination with the analog-sensitive Pto, we shed new light on the role of Pto kinase activity in effector recognition and signal transduction. Here we broaden the use of this chemical-genetic approach to another defense-related plant protein kinase, the MAP kinase LeMPK3. In addition, we show that analog-sensitive but not wild-type kinases are able to use unnatural N6-modified ATP analogs as phosphodonors, which can be exploited for tagging direct phosphorylation targets of the kinase of interest. Thus, sensitization of kinases to analogs of the small-molecule inhibitor PP1 and ATP can be an effective tool for the discovery of cellular functions and phosphorylation substrates of plant protein kinases. PMID:19820342
Sensitivity analysis of dynamic biological systems with time-delays.
Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang
2010-10-15
Mathematical modeling has long been applied to the study and analysis of complex biological systems. Some processes in biological systems, such as gene expression and feedback control in signal transduction networks, involve a time delay. These systems are represented as delay differential equation (DDE) models. Numerical sensitivity analysis of a DDE model by the direct method requires the solutions of model and sensitivity equations with time delays. The major effort is the computation of the Jacobian matrix when computing the solution of the sensitivity equations. The computation of partial derivatives of complex equations, either by the analytic method or by symbolic manipulation, is time consuming, inconvenient, and prone to human error. To address this problem, an automatic approach to obtain the derivatives of complex functions efficiently and accurately is necessary. We have previously proposed an efficient algorithm with adaptive step size control to compute the solution and dynamic sensitivities of biological systems described by ordinary differential equations (ODEs). Here, the adaptive direct-decoupled algorithm is extended to compute the solution and dynamic sensitivities of time-delay systems described by DDEs. To save human effort and avoid human error in the computation of partial derivatives, an automatic differentiation technique is embedded in the extended algorithm to evaluate the Jacobian matrix. The extended algorithm is implemented and applied to two realistic models with time delays: the cardiovascular control system and the TNF-α signal transduction network. The results show that the extended algorithm is a good tool for dynamic sensitivity analysis of DDE models with little user intervention. Compared with direct-coupled methods in theory, the extended algorithm is efficient, accurate, and easy to use for end users without a programming background to perform dynamic sensitivity analysis on complex biological systems with time delays.
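The embedded automatic-differentiation step is easy to illustrate outside the authors' implementation. A minimal sketch using JAX on a toy ODE right-hand side (a stand-in for the paper's delay models): the Jacobians required by the forward sensitivity equations are generated automatically instead of being derived by hand or by symbolic manipulation.

```python
import jax
import jax.numpy as jnp

# Toy two-state right-hand side with three parameters (illustrative only).
def f(x, p):
    return jnp.array([p[0] * x[0] - p[1] * x[0] * x[1],
                      p[1] * x[0] * x[1] - p[2] * x[1]])

J_x = jax.jacfwd(f, argnums=0)   # df/dx: Jacobian used in the sensitivity ODE
J_p = jax.jacfwd(f, argnums=1)   # df/dp: forcing term of the sensitivity ODE

x0 = jnp.array([1.0, 0.5])
p = jnp.array([1.5, 1.0, 3.0])

# Forward sensitivity equations: dS/dt = (df/dx) S + df/dp, with S(0) = 0.
S = jnp.zeros((2, 3))
dS = J_x(x0, p) @ S + J_p(x0, p)
print(dS.shape)  # (2, 3): sensitivity of each state to each parameter
```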
Hickinson, D Mark; Marshall, Gayle B; Beran, Garry J; Varella-Garcia, Marileila; Mills, Elizabeth A; South, Marie C; Cassidy, Andrew M; Acheson, Kerry L; McWalter, Gael; McCormack, Rose M; Bunn, Paul A; French, Tim; Graham, Alex; Holloway, Brian R; Hirsch, Fred R; Speake, Georgina
2009-06-01
Potential biomarkers were identified for in vitro sensitivity to the epidermal growth factor receptor (EGFR) tyrosine kinase inhibitor gefitinib in head and neck cancer. Gefitinib sensitivity was determined in cell lines, followed by transcript profiling coupled with a novel pathway analysis approach. Eleven cell lines were highly sensitive to gefitinib (inhibitor concentration required to give 50% growth inhibition [GI(50)] < 1 microM), three had intermediate sensitivity (GI(50) 1-7 microM), and six were resistant (GI(50) > 7 microM); an exploratory principal component analysis revealed a separation between the genomic profiles of sensitive and resistant cell lines. Subsequently, a hypothesis-driven analysis of Affymetrix data (Affymetrix, Inc., Santa Clara, CA, USA) revealed higher mRNA levels for E-cadherin (CDH1); transforming growth factor, alpha (TGF-alpha); amphiregulin (AREG); FLJ22662; EGFR; p21-activated kinase 6 (PAK6); glutathione S-transferase Pi (GSTP1); and ATP-binding cassette, subfamily C, member 5 (ABCC5) in sensitive versus resistant cell lines. A hypothesis-free analysis identified 46 gene transcripts that were strongly differentiated, seven of which had a known association with EGFR and head and neck cancer (human EGF receptor 3 [HER3], TGF-alpha, CDH1, EGFR, keratin 16 [KRT16], fibroblast growth factor 2 [FGF2], and cortactin [CTTN]). Polymerase chain reaction (PCR) and enzyme-linked immunoabsorbant assay analysis confirmed Affymetrix data, and EGFR gene mutation, amplification, and genomic gain correlated strongly with gefitinib sensitivity. We identified biomarkers that predict for in vitro responsiveness to gefitinib, seven of which have known association with EGFR and head and neck cancer. These in vitro predictive biomarkers may have potential utility in the clinic and warrant further investigation.
Recent approaches in sensitive enantioseparations by CE.
Sánchez-Hernández, Laura; Castro-Puyana, María; Marina, María Luisa; Crego, Antonio L
2012-01-01
The latest strategies and instrumental improvements for enhancing the detection sensitivity in chiral analysis by CE are reviewed in this work. Following the previous reviews by García-Ruiz et al. (Electrophoresis 2006, 27, 195-212) and Sánchez-Hernández et al. (Electrophoresis 2008, 29, 237-251; Electrophoresis 2010, 31, 28-43), this review includes those papers that were published during the period from June 2009 to May 2011. These works describe the use of offline and online sample treatment techniques, online sample preconcentration techniques based on electrophoretic principles, and alternative detection systems to UV-Vis to increase the detection sensitivity. The application of the above-mentioned strategies, either alone or combined, to improve the sensitivity in the enantiomeric analysis of a broad range of samples, such as pharmaceutical, biological, food and environmental samples, makes it possible to decrease the limits of detection to as low as 10⁻¹² M. The use of microchips to achieve sensitive chiral separations is also discussed. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Technical Reports Server (NTRS)
Smith, Andrew; LaVerde, Bruce; Fulcher, Clay; Hunt, Ron
2012-01-01
An approach for predicting the vibration, strain, and force responses of a flight-like vehicle panel assembly to acoustic pressures is presented. Important validation for the approach is provided by comparison to ground test measurements in a reverberant chamber. The test article and the corresponding analytical model were assembled in several configurations to demonstrate the suitability of the approach for response predictions when the vehicle panel is integrated with equipment. Critical choices in the analysis necessary for convergence of the predicted and measured responses are illustrated through sensitivity studies. The methodology includes representation of spatial correlation of the pressure field over the panel surface. Therefore, it is possible to demonstrate the effects of hydrodynamic coincidence in the response. The sensitivity to pressure patch density clearly illustrates the onset of coincidence effects on the panel response predictions.
Trame, MN; Lesko, LJ
2015-01-01
A systems pharmacology model typically integrates pharmacokinetic, biochemical network, and systems biology concepts into a unifying approach. It generally consists of a large number of parameters and reaction species that are interlinked based upon the underlying (patho)physiology and the mechanism of drug action. The more complex these models are, the greater the challenge of reliably identifying and estimating respective model parameters. Global sensitivity analysis provides an innovative tool that can meet this challenge. CPT Pharmacometrics Syst. Pharmacol. (2015) 4, 69–79; doi:10.1002/psp4.6; published online 25 February 2015 PMID:27548289
Robles, A; Ruano, M V; Ribes, J; Seco, A; Ferrer, J
2014-04-01
The results of a global sensitivity analysis of a filtration model for submerged anaerobic MBRs (AnMBRs) are assessed in this paper. This study aimed to (1) identify the less- (or non-) influential factors of the model in order to facilitate model calibration and (2) validate the modelling approach (i.e. to determine the need for each of the proposed factors to be included in the model). The sensitivity analysis was conducted using a revised version of the Morris screening method. The dynamic simulations were conducted using long-term data obtained from an AnMBR plant fitted with industrial-scale hollow-fibre membranes. Of the 14 factors in the model, six were identified as influential, i.e. those calibrated using off-line protocols. A dynamic calibration (based on optimisation algorithms) of these influential factors was conducted. The resulting estimated model factors accurately predicted membrane performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Bennett, Katrina E.; Urrego Blanco, Jorge R.; Jonko, Alexandra; Bohn, Theodore J.; Atchley, Adam L.; Urban, Nathan M.; Middleton, Richard S.
2018-01-01
The Colorado River Basin is a fundamentally important river for society, ecology, and energy in the United States. Streamflow estimates are often provided using modeling tools which rely on uncertain parameters; sensitivity analysis can help determine which parameters impact model results. Despite the fact that simulated flows respond to changing climate and vegetation in the basin, parameter sensitivity of the simulations under climate change has rarely been considered. In this study, we conduct a global sensitivity analysis to relate changes in runoff, evapotranspiration, snow water equivalent, and soil moisture to model parameters in the Variable Infiltration Capacity (VIC) hydrologic model. We combine global sensitivity analysis with a space-filling Latin Hypercube Sampling of the model parameter space and statistical emulation of the VIC model to examine sensitivities to uncertainties in 46 model parameters following a variance-based approach. We find that snow-dominated regions are much more sensitive to uncertainties in VIC parameters. Although baseflow and runoff changes respond to parameters used in previous sensitivity studies, we discover new key parameter sensitivities. For instance, changes in runoff and evapotranspiration are sensitive to albedo, while changes in snow water equivalent are sensitive to canopy fraction and Leaf Area Index (LAI) in the VIC model. It is critical for improved modeling to narrow uncertainty in these parameters through improved observations and field studies. This is important because LAI and albedo are anticipated to change under future climate and narrowing uncertainty is paramount to advance our application of models such as VIC for water resource management.
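As a concrete illustration of the variance-based workflow (sample the parameter space, evaluate a cheap emulator, compute Sobol' indices), here is a minimal sketch using the SALib package. The three parameter names, their bounds, and the toy emulator are invented stand-ins, and SALib's Saltelli sampler is used in place of the study's Latin hypercube design.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["albedo", "lai", "canopy_fraction"],   # illustrative only
    "bounds": [[0.1, 0.9], [0.5, 6.0], [0.1, 1.0]],
}

X = saltelli.sample(problem, 1024)   # Sobol'-sequence-based sample

# Toy emulator standing in for the statistical emulator of the VIC model.
Y = np.array([0.8 * a**2 + 0.3 * l + 0.1 * a * c for a, l, c in X])

Si = sobol.analyze(problem, Y)       # variance-based sensitivity indices
print(dict(zip(problem["names"], np.round(Si["S1"], 2))))   # main effects
```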
Estimating causal contrasts involving intermediate variables in the presence of selection bias.
Valeri, Linda; Coull, Brent A
2016-11-20
An important goal across the biomedical and social sciences is the quantification of the role of intermediate factors in explaining how an exposure exerts an effect on an outcome. Selection bias has the potential to severely undermine the validity of inferences on direct and indirect causal effects in observational as well as in randomized studies. The phenomenon of selection may arise through several mechanisms, and we here focus on instances of missing data. We study the sign and magnitude of selection bias in the estimates of direct and indirect effects when data on any of the factors involved in the analysis are either missing at random or not missing at random. Under some simplifying assumptions, the bias formulae can lead to nonparametric sensitivity analyses. These sensitivity analyses can be applied to causal effects on the risk-difference and risk-ratio scales irrespective of the estimation approach employed. To incorporate parametric assumptions, we also develop a sensitivity analysis for selection bias in mediation analysis in the spirit of the expectation-maximization algorithm. The approaches are applied to data from a health disparities study investigating the role of stage at diagnosis on racial disparities in colorectal cancer survival. Copyright © 2016 John Wiley & Sons, Ltd.
Bringing gender sensitivity into healthcare practice: a systematic review.
Celik, Halime; Lagro-Janssen, Toine A L M; Widdershoven, Guy G A M; Abma, Tineke A
2011-08-01
Despite the body of literature on gender dimensions and disparities between the sexes in health, practical improvements will not be realized effectively as long as we lack an overview of how to implement these ideas. This systematic review provides a content analysis of the literature on the implementation of gender sensitivity in health care. Literature was identified from CINAHL, PsycINFO, Medline, EBSCO and Cochrane (1998-2008) and the reference lists of relevant articles. The quality and relevance of 752 articles were assessed and finally 11 original studies were included. Our results demonstrate that the implementation of gender sensitivity involves addressing opportunities and barriers at the professional, organizational and policy levels. As gender disparities are embedded in healthcare, a multiple-track approach to implementing gender sensitivity is needed to change gendered healthcare systems. Conventional approaches, taking into account a single barrier and/or opportunity, fail to prevent gender inequality in health care. For gender-sensitive health care we need to change systems and structures, but also to enhance understanding, raise awareness and develop skills among health professionals. To bring gender sensitivity into healthcare practice, interventions should address a range of factors. Copyright © 2010. Published by Elsevier Ireland Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Haitao, E-mail: liaoht@cae.ac.cn
The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time averaged quantities for chaotic dynamical systems. The key idea is to recast the time averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation which forms the augmented equations of motion is proposed to calculate the time averaged integration variable, and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which is dependent on the final state of the Lagrange multipliers. The LU factorization technique to calculate the Lagrange multipliers leads to a better performance for the convergence problem and the computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.
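The recasting step at the heart of both methods can be written out explicitly; a minimal sketch of the idea in standard notation (assumed here, not the paper's):

```latex
% Augment the dynamics with an extra state s(t) that accumulates the
% instantaneous quantity J, so the time average becomes a terminal state
% value and standard sensitivity machinery applies to the augmented system.
\dot{x} = f(x,\theta), \qquad \dot{s} = J(x,\theta), \qquad
\bar{J} = \frac{1}{T}\, s(T)
```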
Adjoint-based sensitivity analysis of low-order thermoacoustic networks using a wave-based approach
NASA Astrophysics Data System (ADS)
Aguilar, José G.; Magri, Luca; Juniper, Matthew P.
2017-07-01
Strict pollutant emission regulations are pushing gas turbine manufacturers to develop devices that operate in lean conditions, with the downside that combustion instabilities are more likely to occur. Methods to predict and control unstable modes inside combustion chambers have been developed in the last decades but, in some cases, they are computationally expensive. Sensitivity analysis aided by adjoint methods provides valuable sensitivity information at a low computational cost. This paper introduces adjoint methods and their application in wave-based low order network models, which are used as industrial tools, to predict and control thermoacoustic oscillations. Two thermoacoustic models of interest are analyzed. First, in the zero Mach number limit, a nonlinear eigenvalue problem is derived, and continuous and discrete adjoint methods are used to obtain the sensitivities of the system to small modifications. Sensitivities to base-state modification and feedback devices are presented. Second, a more general case with non-zero Mach number, a moving flame front and choked outlet, is presented. The influence of the entropy waves on the computed sensitivities is shown.
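In the zero Mach number case, the sensitivity information that the adjoint delivers can be summarized by the standard first-order result for a nonlinear eigenvalue problem of the form L(s) q = 0; the notation below is assumed for illustration, not taken from the paper:

```latex
% First-order drift of the eigenvalue s when the operator is perturbed by
% \epsilon \delta L; q is the direct and q^\dagger the adjoint eigenfunction.
% One adjoint solve yields the sensitivity to any small modification \delta L.
\delta s = -\epsilon \,
  \frac{\langle q^{\dagger}, \, \delta L \, q \rangle}
       {\langle q^{\dagger}, \, (\partial L / \partial s) \, q \rangle}
```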
Wang, Jinlin; Zhou, Xinghua; Xie, Xiaohong; Tang, Qing; Shen, Panxiao; Zeng, Yunxiang
2016-11-17
The most efficient approach to diagnosing malignant pleural effusions (MPEs) is still controversial and uncertain. This study aimed to evaluate the utility of a combined approach using ultrasound (US)-guided cutting-needle biopsy (CNB) and standard pleural biopsy (SPB) for diagnosing MPE. Pleural effusions were collected from 172 patients for biochemical and microbiological analyses. US-guided CNB and SPB were performed sequentially in the same operation to obtain specimens for histological analysis. US-guided CNB and SPB procedures provided adequate material for histological analysis in 90.7 and 93.0% of cases, respectively, while the combination of the two techniques did so in 96.5% of cases. The sensitivity, specificity, positive-predictive value (PPV), negative-predictive value (NPV) and diagnostic accuracy of US-guided CNB versus SPB were: 51.2 vs 63.4%, 100 vs 100%, 100 vs 100%, 64.9 vs 72.2% and 74.4 vs 81.3%, respectively. When CNB was combined with SPB, the corresponding values were 88.6, 100, 100, 88.6 and 93.9%, respectively. Whereas sensitivity, NPV and diagnostic accuracy were not significantly different between CNB and SPB, the combination of CNB and SPB significantly improved the sensitivity, NPV and diagnostic accuracy versus each technique alone (p < 0.05). Significant pain (eight patients), moderate haemoptysis (two patients) and chest wall haematomas (two patients) were observed following CNB, while syncope (four patients) and a slight pneumothorax (four patients) were observed following SPB. The combination of US-guided CNB and SPB afforded a high sensitivity for diagnosing MPEs and is a convenient and safe approach.
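All of the reported accuracy measures derive from a 2x2 diagnostic table, and a small helper makes the arithmetic explicit. The counts below are illustrative only, chosen so the 172-patient total and the combined-approach percentages are roughly reproduced; the abstract does not report the raw table.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV and accuracy from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Illustrative counts: 88 malignant effusions, 84 benign, no false positives.
print(diagnostic_metrics(tp=78, fp=0, fn=10, tn=84))
```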
Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urrego-Blanco, Jorge Rolando; Urban, Nathan Mark; Hunke, Elizabeth Clare
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual modelmore » parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. Lastly, it is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.« less
Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model
NASA Astrophysics Data System (ADS)
Urrego-Blanco, Jorge R.; Urban, Nathan M.; Hunke, Elizabeth C.; Turner, Adrian K.; Jeffery, Nicole
2016-04-01
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. It is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
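For readers wanting to reproduce the general workflow, the following is a minimal sketch of variance-based global sensitivity analysis with Sobol'-sequence sampling, using the SALib Python package and a toy three-parameter emulator; the parameter names and bounds are illustrative assumptions, not values from CICE.

```python
# Sketch of a Sobol' variance-based GSA: Saltelli (Sobol'-sequence) sampling of the
# parameter space, evaluation of a fast emulator, and Sobol' index estimation.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["snow_conductivity", "snow_grain_size", "meltpond_drainage"],  # assumed names
    "bounds": [[0.1, 0.5], [50e-6, 500e-6], [0.0, 1.0]],                     # assumed ranges
}

def toy_emulator(x):
    # Stand-in for the fast emulator's prediction of, e.g., sea ice volume.
    return 2.0 * x[:, 0] + 0.5 * np.sin(np.pi * x[:, 1] / 500e-6) + x[:, 0] * x[:, 2]

X = saltelli.sample(problem, 1024)       # Sobol'-sequence (Saltelli) design
Y = toy_emulator(X)
Si = sobol.analyze(problem, Y)           # first-order (S1) and total-order (ST) indices
print(dict(zip(problem["names"], Si["S1"].round(3))))
print(dict(zip(problem["names"], Si["ST"].round(3))))
```

A ranking of the total-order indices plays the same role as the parameter ranking described in the abstract.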
Reinforcement Sensitivity and Social Anxiety in Combat Veterans
Kimbrel, Nathan A.; Meyer, Eric C.; DeBeer, Bryann B.; Mitchell, John T.; Kimbrel, Azure D.; Nelson-Gray, Rosemery O.; Morissette, Sandra B.
2017-01-01
Objective: The present study tested the hypothesis that low behavioral approach system (BAS) sensitivity is associated with social anxiety in combat veterans. Method: Self-report measures of reinforcement sensitivity, combat exposure, social interaction anxiety, and social observation anxiety were administered to 197 Iraq/Afghanistan combat veterans. Results: As expected, combat exposure, behavioral inhibition system (BIS) sensitivity, and fight-flight-freeze system (FFFS) sensitivity were positively associated with both social interaction anxiety and social observation anxiety. In contrast, BAS sensitivity was negatively associated with social interaction anxiety only. An analysis of the BAS subscales revealed that the Reward Responsiveness subscale was the only BAS subscale associated with social interaction anxiety. BAS-Reward Responsiveness was also associated with social observation anxiety. Conclusion: The findings from the present research provide further evidence that low BAS sensitivity may be associated with social anxiety over and above the effects of BIS and FFFS sensitivity. PMID:28966424
Reinforcement Sensitivity and Social Anxiety in Combat Veterans.
Kimbrel, Nathan A; Meyer, Eric C; DeBeer, Bryann B; Mitchell, John T; Kimbrel, Azure D; Nelson-Gray, Rosemery O; Morissette, Sandra B
2016-08-01
The present study tested the hypothesis that low behavioral approach system (BAS) sensitivity is associated with social anxiety in combat veterans. Self-report measures of reinforcement sensitivity, combat exposure, social interaction anxiety, and social observation anxiety were administered to 197 Iraq/Afghanistan combat veterans. As expected, combat exposure, behavioral inhibition system (BIS) sensitivity, and fight-flight-freeze system (FFFS) sensitivity were positively associated with both social interaction anxiety and social observation anxiety. In contrast, BAS sensitivity was negatively associated with social interaction anxiety only. An analysis of the BAS subscales revealed that the Reward Responsiveness subscale was the only BAS subscale associated with social interaction anxiety. BAS-Reward Responsiveness was also associated with social observation anxiety. The findings from the present research provide further evidence that low BAS sensitivity may be associated with social anxiety over and above the effects of BIS and FFFS sensitivity.
Mao, Ningying; Lesher, Beth; Liu, Qifa; Qin, Lei; Chen, Yixi; Gao, Xin; Earnshaw, Stephanie R; McDade, Cheryl L; Charbonneau, Claudie
2016-01-01
Invasive fungal infections (IFIs) require rapid diagnosis and treatment. A decision-analytic model was used to estimate total costs and survival associated with a diagnostic-driven (DD) or an empiric treatment approach in neutropenic patients with hematological malignancies receiving chemotherapy or autologous/allogeneic stem cell transplants in Shanghai, Beijing, Chengdu, and Guangzhou, the People's Republic of China. Treatment initiation for the empiric approach occurred after clinical suspicion of an IFI; treatment initiation for the DD approach occurred after clinical suspicion and a positive IFI diagnostic test result. Model inputs were obtained from the literature; treatment patterns and resource use were based on clinical opinion. Total costs were lower for the DD versus the empiric approach in Shanghai (¥3,232 vs ¥4,331), Beijing (¥3,894 vs ¥4,864), Chengdu (¥4,632 vs ¥5,795), and Guangzhou (¥8,489 vs ¥9,795). Antifungal administration was lower using the DD (5.7%) than the empiric (9.8%) approach, with similar survival rates. Results from one-way and probabilistic sensitivity analyses were most sensitive to changes in diagnostic test sensitivity and IFI incidence; the DD approach dominated the empiric approach in 88% of scenarios. These results suggest that a DD compared to an empiric treatment approach in the People's Republic of China may be cost saving, with similar overall survival in immunocompromised patients with suspected IFIs.
Harvey, Judson W.; Wagner, Brian J.; Bencala, Kenneth E.
1996-01-01
Stream water was locally recharged into shallow groundwater flow paths that returned to the stream (hyporheic exchange) in St. Kevin Gulch, a Rocky Mountain stream in Colorado contaminated by acid mine drainage. Two approaches were used to characterize hyporheic exchange: sub-reach-scale measurement of hydraulic heads and hydraulic conductivity to compute streambed fluxes (hydrometric approach) and reachscale modeling of in-stream solute tracer injections to determine characteristic length and timescales of exchange with storage zones (stream tracer approach). Subsurface data were the standard of comparison used to evaluate the reliability of the stream tracer approach to characterize hyporheic exchange. The reach-averaged hyporheic exchange flux (1.5 mL s−1 m−1), determined by hydrometric methods, was largest when stream base flow was low (10 L s−1); hyporheic exchange persisted when base flow was 10-fold higher, decreasing by approximately 30%. Reliability of the stream tracer approach to detect hyporheic exchange was assessed using first-order uncertainty analysis that considered model parameter sensitivity. The stream tracer approach did not reliably characterize hyporheic exchange at high base flow: the model was apparently more sensitive to exchange with surface water storage zones than with the hyporheic zone. At low base flow the stream tracer approach reliably characterized exchange between the stream and gravel streambed (timescale of hours) but was relatively insensitive to slower exchange with deeper alluvium (timescale of tens of hours) that was detected by subsurface measurements. The stream tracer approach was therefore not equally sensitive to all timescales of hyporheic exchange. We conclude that while the stream tracer approach is an efficient means to characterize surface-subsurface exchange, future studies will need to more routinely consider decreasing sensitivities of tracer methods at higher base flow and a potential bias toward characterizing only a fast component of hyporheic exchange. Stream tracer models with multiple rate constants to consider both fast exchange with streambed gravel and slower exchange with deeper alluvium appear to be warranted.
Examining the accuracy of the infinite order sudden approximation using sensitivity analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eno, L.; Rabitz, H.
1981-08-15
A method is developed for assessing the accuracy of scattering observables calculated within the framework of the infinite order sudden (IOS) approximation. In particular, we focus on the energy sudden assumption of the IOS method and our approach involves the determination of the sensitivity of the IOS scattering matrix S^IOS with respect to a parameter which reintroduces the internal energy operator h_0 into the IOS Hamiltonian. This procedure is an example of sensitivity analysis of missing model components (h_0 in this case) in the reference Hamiltonian. In contrast to simple first-order perturbation theory, a finite result is obtained for the effect of h_0 on S^IOS. As an illustration, our method of analysis is applied to integral state-to-state cross sections for the scattering of an atom and rigid rotor. Results are generated within the He+H_2 system and a comparison is made between IOS and coupled states cross sections and the corresponding IOS sensitivities. It is found that the sensitivity coefficients are very useful indicators of the accuracy of the IOS results. Finally, further developments and applications are discussed.
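A schematic of the parametric embedding described above (the notation is assumed here, not taken from the paper): the internal energy operator is reintroduced through a strength parameter, and the sensitivity is the derivative of the scattering matrix with respect to that parameter at the IOS limit.

```latex
% Schematic of the parametric embedding (notation assumed, not from the paper):
% h_0 is reintroduced with strength \epsilon; \epsilon = 0 recovers the IOS limit.
H(\epsilon) = H^{\mathrm{IOS}} + \epsilon\, h_0, \qquad
S(0) = S^{\mathrm{IOS}}, \qquad
\text{sensitivity coefficient:}\quad \left.\frac{\partial S}{\partial \epsilon}\right|_{\epsilon = 0}.
```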
Theory of buckling and post-buckling behavior of elastic structures
NASA Technical Reports Server (NTRS)
Budiansky, B.
1974-01-01
The present paper provides a unified, general presentation of the basic theory of the buckling and post-buckling behavior of elastic structures in a form suitable for application to a wide variety of special problems. The notation of functional analysis is used for this purpose. Before the general analysis, simple conceptual models are used to elucidate the basic concepts of bifurcation buckling, snap buckling, imperfection sensitivity, load-shortening relations, and stability. The energy approach, the virtual-work approach, and mode interaction are discussed. The derivations and results are applicable to continua and finite-dimensional systems. The virtual-work and energy approaches are given separate treatments, but their equivalence is made explicit. The basic concepts of stability occupy a secondary position in the present approach.
Shape reanalysis and sensitivities utilizing preconditioned iterative boundary solvers
NASA Technical Reports Server (NTRS)
Guru Prasad, K.; Kane, J. H.
1992-01-01
The computational advantages associated with the utilization of preconditioned iterative equation solvers are quantified for the reanalysis of perturbed shapes using continuum structural boundary element analysis (BEA). Both single- and multi-zone three-dimensional problems are examined. Significant reductions in computer time are obtained by making use of previously computed solution vectors and preconditioners in subsequent analyses. The effectiveness of this technique is demonstrated for the computation of shape response sensitivities required in shape optimization. Computer times and accuracies achieved using the preconditioned iterative solvers are compared with those obtained via direct solvers and implicit differentiation of the boundary integral equations. It is concluded that this approach employing preconditioned iterative equation solvers in reanalysis and sensitivity analysis can be competitive with, if not superior to, those involving direct solvers.
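A minimal sketch of the reanalysis idea in generic terms (not the BEA code itself): solve the perturbed system iteratively, reusing the previous solution as the initial guess and the previously built preconditioner, so the perturbed matrix is never factored. The matrix, perturbation and tolerances below are illustrative assumptions.

```python
# Reuse of a prior solution and preconditioner for the reanalysis of a perturbed system.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 500
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")  # baseline system
b = np.ones(n)

ilu = spla.spilu(A)                                     # preconditioner built once
M = spla.LinearOperator((n, n), matvec=ilu.solve)
x_base, _ = spla.gmres(A, b, M=M)                       # baseline solution

dA = sp.random(n, n, density=1e-3, format="csc") * 1e-2 # small "shape" perturbation
x_pert, info = spla.gmres(A + dA, b, x0=x_base, M=M)    # reuse x_base and M; no new factorization
print("converged" if info == 0 else f"info={info}", np.linalg.norm(x_pert - x_base))
```

Because the initial guess is already close to the perturbed solution, the iterative solve typically converges in few iterations, which is the source of the economy described above.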
Pattern Recognition Approaches for Breast Cancer DCE-MRI Classification: A Systematic Review.
Fusco, Roberta; Sansone, Mario; Filice, Salvatore; Carone, Guglielmo; Amato, Daniela Maria; Sansone, Carlo; Petrillo, Antonella
2016-01-01
We performed a systematic review of several pattern analysis approaches for classifying breast lesions using dynamic, morphological, and textural features in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). Several machine learning approaches, namely artificial neural networks (ANN), support vector machines (SVM), linear discriminant analysis (LDA), tree-based classifiers (TC), and Bayesian classifiers (BC), and features used for classification are described. The findings of a systematic review of 26 studies are presented. The sensitivity and specificity are respectively 91 and 83 % for ANN, 85 and 82 % for SVM, 96 and 85 % for LDA, 92 and 87 % for TC, and 82 and 85 % for BC. The sensitivity and specificity are respectively 82 and 74 % for dynamic features, 93 and 60 % for morphological features, 88 and 81 % for textural features, 95 and 86 % for a combination of dynamic and morphological features, and 88 and 84 % for a combination of dynamic, morphological, and other features. LDA and TC have the best performance. A combination of dynamic and morphological features gives the best performance.
Dielectrophoretic Capture and Genetic Analysis of Single Neuroblastoma Tumor Cells
Carpenter, Erica L.; Rader, JulieAnn; Ruden, Jacob; Rappaport, Eric F.; Hunter, Kristen N.; Hallberg, Paul L.; Krytska, Kate; O’Dwyer, Peter J.; Mosse, Yael P.
2014-01-01
Our understanding of the diversity of cells that escape the primary tumor and seed micrometastases remains rudimentary, and approaches for studying circulating and disseminated tumor cells have been limited by low throughput and sensitivity, reliance on single parameter sorting, and a focus on enumeration rather than phenotypic and genetic characterization. Here, we utilize a highly sensitive microfluidic and dielectrophoretic approach for the isolation and genetic analysis of individual tumor cells. We employed fluorescence labeling to isolate 208 single cells from spiking experiments conducted with 11 cell lines, including 8 neuroblastoma cell lines, and achieved a capture sensitivity of 1 tumor cell per 10⁶ white blood cells (WBCs). Sample fixation or freezing had no detectable effect on cell capture. Point mutations were accurately detected in the whole genome amplification product of captured single tumor cells but not in negative control WBCs. We applied this approach to capture 144 single tumor cells from 10 bone marrow samples of patients suffering from neuroblastoma. In this pediatric malignancy, high-risk patients often exhibit widespread hematogenous metastasis, but access to primary tumor can be difficult or impossible. Here, we used flow-based sorting to pre-enrich samples with tumor involvement below 0.02%. For all patients for whom a mutation in the Anaplastic Lymphoma Kinase gene had already been detected in their primary tumor, the same mutation was detected in single cells from their marrow. These findings demonstrate a novel, non-invasive, and adaptable method for the capture and genetic analysis of single tumor cells from cancer patients. PMID:25133137
Climate Risk Assessment: Technical Guidance Manual for DoD Installations and Built Environment
2016-09-06
climate change risks to DoD installations and the built environment. The approach, which we call “decision-scaling,” reveals the core sensitivity of...DoD installations to climate change. It is designed to illuminate the sensitivity of installations and their supporting infrastructure systems...including water and energy, to climate changes and other uncertainties without dependence on climate change projections. In this way the analysis and
Degeling, Koen; IJzerman, Maarten J; Koopman, Miriam; Koffijberg, Hendrik
2017-12-15
Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Two approaches, 1) using non-parametric bootstrapping and 2) using multivariate Normal distributions, were applied in a simulation and case study. The approaches were compared based on point-estimates and distributions of time-to-event and health economic outcomes. To assess sample size impact on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case study. Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point-estimates and different cost-effectiveness acceptability curves. Although both approaches performed similarly for larger sample sizes (i.e. n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e. n = 25), yielding infeasible modeling outcomes. Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.
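As a rough illustration of the two approaches, consider a Weibull time-to-event distribution fitted to hypothetical patient-level data; everything below is an assumption-laden sketch, not the study's code or data.

```python
# Sketch: parameter uncertainty of a fitted Weibull via (1) non-parametric bootstrap and
# (2) a multivariate Normal around the point estimate, propagated to mean survival time.
import numpy as np
from scipy import stats
from scipy.special import gamma

rng = np.random.default_rng(1)
# Hypothetical patient-level times (months); not the study's data.
times = stats.weibull_min.rvs(c=1.4, scale=12.0, size=100, random_state=rng)

def fit_params(sample):
    shape, _, scale = stats.weibull_min.fit(sample, floc=0)   # location fixed at 0
    return shape, scale

# Approach 1: non-parametric bootstrap of the (correlated) shape/scale estimates.
boot = np.array([fit_params(rng.choice(times, size=times.size, replace=True))
                 for _ in range(500)])

# Approach 2: multivariate Normal around the point estimate; here its covariance is
# taken from the bootstrap replicates as a stand-in for an analytic covariance matrix.
mvn = rng.multivariate_normal(fit_params(times), np.cov(boot.T), size=500)

# Propagate parameter uncertainty to an outcome, e.g. mean survival E[T] = scale * Gamma(1 + 1/shape).
mean_surv = lambda p: p[1] * gamma(1.0 + 1.0 / p[0])
print(np.percentile([mean_surv(p) for p in boot], [2.5, 97.5]))
print(np.percentile([mean_surv(p) for p in mvn], [2.5, 97.5]))
```

With much smaller samples, the Normal approximation can wander into implausible parameter values, which mirrors the fragility of the second approach reported above.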
NASA Astrophysics Data System (ADS)
Haghnegahdar, Amin; Elshamy, Mohamed; Yassin, Fuad; Razavi, Saman; Wheater, Howard; Pietroniro, Al
2017-04-01
Complex physically-based environmental models are being increasingly used as the primary tool for watershed planning and management due to advances in computation power and data acquisition. Model sensitivity analysis plays a crucial role in understanding the behavior of these complex models and improving their performance. Due to the non-linearity and interactions within these complex models, Global sensitivity analysis (GSA) techniques should be adopted to provide a comprehensive understanding of model behavior and identify its dominant controls. In this study we adopt a multi-basin multi-criteria GSA approach to systematically assess the behavior of the Modélisation Environmentale-Surface et Hydrologie (MESH) across various hydroclimatic conditions in Canada including areas in the Great Lakes Basin, Mackenzie River Basin, and South Saskatchewan River Basin. MESH is a semi-distributed physically-based coupled land surface-hydrology modelling system developed by Environment and Climate Change Canada (ECCC) for various water resources management purposes in Canada. We use a novel method, called Variogram Analysis of Response Surfaces (VARS), to perform sensitivity analysis. VARS is a variogram-based GSA technique that can efficiently provide a spectrum of sensitivity information across a range of scales within the parameter space. We use multiple metrics to identify dominant controls of model response (e.g. streamflow) to model parameters under various conditions such as high flows, low flows, and flow volume. We also investigate the influence of initial conditions on model behavior as part of this study. Our preliminary results suggest that this type of GSA can significantly help with estimating model parameters, decreasing calibration computational burden, and reducing prediction uncertainty.
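A much-reduced sketch of the variogram idea underlying VARS (not the full VARS/IVARS algorithm, whose details are not given here): for each parameter, estimate a directional variogram of the model response over a range of perturbation scales; parameters whose variograms grow fastest tend to be the more influential ones. The toy response function is an assumption for illustration.

```python
# Directional variogram of a response surface: gamma_j(h) = 0.5 * E[(f(x + h e_j) - f(x))^2].
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x[..., 0]) + 0.3 * x[..., 1] ** 2 + 0.05 * x[..., 2]  # toy response

def directional_variogram(j, h, n=5000):
    """Estimate the variogram of f along parameter j at perturbation scale h."""
    x = rng.uniform(0, 1, size=(n, 3))
    x_shift = x.copy()
    x_shift[:, j] = np.clip(x_shift[:, j] + h, 0, 1)   # perturb only parameter j
    return 0.5 * np.mean((f(x_shift) - f(x)) ** 2)

for j, name in enumerate(["par_1", "par_2", "par_3"]):
    print(name, [round(directional_variogram(j, h), 4) for h in (0.05, 0.1, 0.3)])
```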
Sensitivity analysis of infectious disease models: methods, advances and their application
Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V.
2013-01-01
Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, yet infectious disease modelling has yet to adopt advanced SA techniques that are capable of providing considerable insights over traditional methods. We investigate five global SA methods—scatter plots, the Morris and Sobol’ methods, Latin hypercube sampling-partial rank correlation coefficient and the sensitivity heat map method—and detail their relative merits and pitfalls when applied to a microparasite (cholera) and macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that vary by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
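One of the classical methods surveyed above, Latin hypercube sampling with partial rank correlation coefficients (LHS-PRCC), can be sketched on a toy SIR model as follows; the parameter ranges and the model itself are illustrative assumptions, not taken from the paper.

```python
# LHS-PRCC sketch: Latin hypercube design over (beta, gamma, i0), SIR final epidemic size
# as the output, and partial rank correlation coefficients as the sensitivity measure.
import numpy as np
from scipy.stats import qmc, rankdata
from scipy.integrate import solve_ivp

def epidemic_size(beta, gamma, i0):
    sir = lambda t, y: [-beta * y[0] * y[1], beta * y[0] * y[1] - gamma * y[1], gamma * y[1]]
    sol = solve_ivp(sir, (0, 200), [1.0 - i0, i0, 0.0], rtol=1e-6)
    return sol.y[2, -1]                        # final recovered fraction

names = ["beta", "gamma", "i0"]
lo, hi = [0.1, 0.05, 1e-4], [1.0, 0.5, 1e-2]   # assumed ranges
X = qmc.scale(qmc.LatinHypercube(d=3, seed=0).random(200), lo, hi)
Y = np.array([epidemic_size(*x) for x in X])

def prcc(X, Y, j):
    """Partial rank correlation of parameter j with output Y, controlling for the others."""
    R = np.column_stack([rankdata(c) for c in X.T]); ry = rankdata(Y)
    A = np.column_stack([np.ones(len(Y)), np.delete(R, j, axis=1)])
    res_x = R[:, j] - A @ np.linalg.lstsq(A, R[:, j], rcond=None)[0]
    res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
    return np.corrcoef(res_x, res_y)[0, 1]

print({n: round(prcc(X, Y, j), 2) for j, n in enumerate(names)})
```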
Buckling Load Calculations of the Isotropic Shell A-8 Using a High-Fidelity Hierarchical Approach
NASA Technical Reports Server (NTRS)
Arbocz, Johann; Starnes, James H.
2002-01-01
As a step towards developing a new design philosophy, one that moves away from the traditional empirical approach used today in design towards a science-based design technology approach, a test series of 7 isotropic shells carried out by Arbocz and Babcock at Caltech is used. It is shown how the hierarchical approach to buckling load calculations proposed by Arbocz et al can be used to perform an approach often called 'high fidelity analysis', where the uncertainties involved in a design are simulated by refined and accurate numerical methods. The Delft Interactive Shell DEsign COde (short, DISDECO) is employed for this hierarchical analysis to provide an accurate prediction of the critical buckling load of the given shell structure. This value is used later as a reference to establish the accuracy of the Level-3 buckling load predictions. As a final step in the hierarchical analysis approach, the critical buckling load and the estimated imperfection sensitivity of the shell are verified by conducting an analysis using a sufficiently refined finite element model with one of the current generation two-dimensional shell analysis codes with the advanced capabilities needed to represent both geometric and material nonlinearities.
On a High-Fidelity Hierarchical Approach to Buckling Load Calculations
NASA Technical Reports Server (NTRS)
Arbocz, Johann; Starnes, James H.; Nemeth, Michael P.
2001-01-01
As a step towards developing a new design philosophy, one that moves away from the traditional empirical approach used today in design towards a science-based design technology approach, a recent test series of 5 composite shells carried out by Waters at NASA Langley Research Center is used. It is shown how the hierarchical approach to buckling load calculations proposed by Arbocz et al can be used to perform an approach often called "high fidelity analysis", where the uncertainties involved in a design are simulated by refined and accurate numerical methods. The Delft Interactive Shell DEsign COde (short, DISDECO) is employed for this hierarchical analysis to provide an accurate prediction of the critical buckling load of the given shell structure. This value is used later as a reference to establish the accuracy of the Level-3 buckling load predictions. As a final step in the hierarchical analysis approach, the critical buckling load and the estimated imperfection sensitivity of the shell are verified by conducting an analysis using a sufficiently refined finite element model with one of the current generation two-dimensional shell analysis codes with the advanced capabilities needed to represent both geometric and material nonlinearities.
Thermodynamic Analysis of a Coupled Chemical Reaction.
ERIC Educational Resources Information Center
Trimm, Harold; And Others
1979-01-01
Describes a typical relaxation kinetic experiment using a sudden increase in the temperature of the system. Time involved is described as minimal and the approach as quicker, more accurate, sensitive, and producing simultaneous determination of several thermodynamic parameters. (Author/SA)
Design and Analysis of an X-Ray Mirror Assembly Using the Meta-Shell Approach
NASA Technical Reports Server (NTRS)
McClelland, Ryan S.; Bonafede, Joseph; Saha, Timo T.; Solly, Peter M.; Zhang, William W.
2016-01-01
Lightweight and high resolution optics are needed for future space-based x-ray telescopes to achieve advances in high-energy astrophysics. Past missions such as Chandra and XMM-Newton have achieved excellent angular resolution using a full shell mirror approach. Other missions such as Suzaku and NuSTAR have achieved lightweight mirrors using a segmented approach. This paper describes a new approach, called meta-shells, which combines the fabrication advantages of segmented optics with the alignment advantages of full shell optics. Meta-shells are built by layering overlapping mirror segments onto a central structural shell. The resulting optic has the stiffness and rotational symmetry of a full shell, but with an order of magnitude greater collecting area. Several meta-shells so constructed can be integrated into a large x-ray mirror assembly by proven methods used for Chandra and XMM-Newton. The mirror segments are mounted to the meta-shell using a novel four point semi-kinematic mount. The four point mount deterministically locates the segment in its most performance sensitive degrees of freedom. Extensive analysis has been performed to demonstrate the feasibility of the four point mount and meta-shell approach. A mathematical model of a meta-shell constructed with mirror segments bonded at four points and subject to launch loads has been developed to determine the optimal design parameters, namely bond size, mirror segment span, and number of layers per meta-shell. The parameters of an example 1.3 m diameter mirror assembly are given including the predicted effective area. To verify the mathematical model and support opto-mechanical analysis, a detailed finite element model of a meta-shell was created. Finite element analysis predicts low gravity distortion and low sensitivity to thermal gradients.
Evaluation of peak-picking algorithms for protein mass spectrometry.
Bauer, Chris; Cramer, Rainer; Schuchhardt, Johannes
2011-01-01
Peak picking is an early key step in MS data analysis. We compare three commonly used approaches to peak picking and discuss their merits by means of statistical analysis. Methods investigated encompass signal-to-noise ratio, continuous wavelet transform, and a correlation-based approach using a Gaussian template. Functionality of the three methods is illustrated and discussed in a practical context using a mass spectral data set created with MALDI-TOF technology. Sensitivity and specificity are investigated using a manually defined reference set of peaks. As an additional criterion, the robustness of the three methods is assessed by a perturbation analysis and illustrated using ROC curves.
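Two of the three strategies discussed above can be sketched with standard SciPy routines on a synthetic spectrum (the data and thresholds are illustrative assumptions; the Gaussian-template correlation approach is not shown).

```python
# Peak picking on a synthetic MS-like trace: signal-to-noise thresholding vs a
# continuous wavelet transform (CWT) detector.
import numpy as np
from scipy.signal import find_peaks, find_peaks_cwt

mz = np.linspace(1000, 1100, 4000)
spectrum = (200 * np.exp(-0.5 * ((mz - 1020) / 0.3) ** 2)
            + 80 * np.exp(-0.5 * ((mz - 1065) / 0.3) ** 2)
            + np.random.default_rng(0).normal(0, 5, mz.size))

# Signal-to-noise style picking: require peak height above k * estimated noise level.
noise = np.median(np.abs(spectrum - np.median(spectrum))) / 0.6745   # robust sigma via MAD
snr_peaks, _ = find_peaks(spectrum, height=5 * noise, distance=20)

# CWT picking: keep ridges that persist across a range of assumed peak widths.
cwt_peaks = find_peaks_cwt(spectrum, widths=np.arange(5, 30))

print(mz[snr_peaks], mz[cwt_peaks])
```

The trade-off explored in the review shows up directly here: the S/N threshold controls sensitivity versus specificity, while the CWT detector is less dependent on a single noise estimate.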
Christopher B. Dow; Brandon M. Collins; Scott L. Stephens
2016-01-01
Finding novel ways to plan and implement landscape-level forest treatments that protect sensitive wildlife and other key ecosystem components, while also reducing the risk of large-scale, high-severity fires, can prove to be difficult. We examined alternative approaches to landscape-scale fuel-treatment design for the same landscape. These approaches included two...
Skiöld, Sara; Azimzadeh, Omid; Merl-Pham, Juliane; Naslund, Ingemar; Wersall, Peter; Lidbrink, Elisabet; Tapio, Soile; Harms-Ringdahl, Mats; Haghdoost, Siamak
2015-06-01
Radiation therapy is a cornerstone of modern cancer treatment. Understanding the mechanisms behind normal tissue sensitivity is essential in order to minimize adverse side effects while still preventing local cancer recurrence. The aim of this study was to identify biomarkers of radiation sensitivity to enable personalized cancer treatment. To investigate the mechanisms behind radiation sensitivity, a pilot study was conducted in which eight radiation-sensitive and nine normo-sensitive patients were selected from a cohort of 2914 breast cancer patients, based on acute tissue reactions after radiation therapy. Whole blood was sampled and irradiated in vitro with 0, 1, or 150 mGy followed by 3 h incubation at 37°C. The leukocytes of the two groups were isolated, pooled and protein expression profiles were investigated using the isotope-coded protein labeling method (ICPL). First, leukocytes from the in vitro irradiated whole blood from normo-sensitive and extremely sensitive patients were compared to the non-irradiated controls. To validate this first study, a second ICPL analysis comparing only the non-irradiated samples was conducted. Both approaches showed unique proteomic signatures separating the two groups at the basal level and after doses of 1 and 150 mGy. Pathway analyses of both proteomic approaches suggest that oxidative stress response, coagulation properties and acute phase response are hallmarks of radiation sensitivity, supporting our previous study on oxidative stress response. This investigation provides unique characteristics of radiation sensitivity essential for individualized radiation therapy. Copyright © 2015 Elsevier B.V. All rights reserved.
Perfetti, Christopher M.; Rearden, Bradley T.
2016-03-01
The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system that have been developed over the last decade have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization, reactor safety, and help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.
DIELECTROPHORESIS-BASED MICROFLUIDIC SEPARATION AND DETECTION SYSTEMS
Yang, Jun; Vykoukal, Jody; Noshari, Jamileh; Becker, Frederick; Gascoyne, Peter; Krulevitch, Peter; Fuller, Chris; Ackler, Harold; Hamilton, Julie; Boser, Bernhard; Eldredge, Adam; Hitchens, Duncan; Andrews, Craig
2009-01-01
Diagnosis and treatment of human diseases frequently requires isolation and detection of certain cell types from a complex mixture. Compared with traditional separation and detection techniques, microfluidic approaches promise to yield easy-to-use diagnostic instruments tolerant of a wide range of operating environments and capable of accomplishing automated analyses. These approaches will enable diagnostic advances to be disseminated from sophisticated clinical laboratories to the point-of-care. Applications will include the separation and differential analysis of blood cell subpopulations for host-based detection of blood cell changes caused by disease, infection, or exposure to toxins, and the separation and analysis of surface-sensitized, custom dielectric beads for chemical, biological, and biomolecular targets. Here we report a new particle separation and analysis microsystem that uses dielectrophoretic field-flow fractionation (DEP-FFF). The system consists of a microfluidic chip with integrated sample injector, a DEP-FFF separator, and an AC impedance sensor. We show the design of a miniaturized impedance sensor integrated circuit (IC) with improved sensitivity, a new packaging approach for micro-flumes that features a slide-together compression package and novel microfluidic interconnects, and the design, control, integration and packaging of a fieldable prototype. Illustrative applications will be shown, including the separation of different sized beads and different cell types, blood cell differential analysis, and impedance sensing results for beads, spores and cells. PMID:22025905
NASA Technical Reports Server (NTRS)
Karmali, M. S.; Phatak, A. V.
1982-01-01
Results of a study to investigate, by means of a computer simulation, the performance sensitivity of helicopter IMC DSAL operations as a function of navigation system parameters are presented. A mathematical model representing generically a navigation system is formulated. The scenario simulated consists of a straight-in helicopter approach to landing along a 6 deg glideslope. The deceleration magnitude chosen is 0.3g. The navigation model parameters are varied and the statistics of the total system errors (TSE) computed. These statistics are used to determine the critical navigation system parameters that affect the performance of the closed-loop navigation, guidance and control system of a UH-1H helicopter.
Separating astrophysical sources from indirect dark matter signals
Siegal-Gaskins, Jennifer M.
2015-01-01
Indirect searches for products of dark matter annihilation and decay face the challenge of identifying an uncertain and subdominant signal in the presence of uncertain backgrounds. Two valuable approaches to this problem are (i) using analysis methods which take advantage of different features in the energy spectrum and angular distribution of the signal and backgrounds and (ii) more accurately characterizing backgrounds, which allows for more robust identification of possible signals. These two approaches are complementary and can be significantly strengthened when used together. I review the status of indirect searches with gamma rays using two promising targets, the Inner Galaxy and the isotropic gamma-ray background. For both targets, uncertainties in the properties of backgrounds are a major limitation to the sensitivity of indirect searches. I then highlight approaches which can enhance the sensitivity of indirect searches using these targets. PMID:25304638
Ashok, Praveen C.; Praveen, Bavishna B.; Bellini, Nicola; Riches, Andrew; Dholakia, Kishan; Herrington, C. Simon
2013-01-01
We report a multimodal optical approach using both Raman spectroscopy and optical coherence tomography (OCT) in tandem to discriminate between colonic adenocarcinoma and normal colon. Although both of these non-invasive techniques are capable of discriminating between normal and tumour tissues, they are unable individually to provide both the high specificity and high sensitivity required for disease diagnosis. We combine the chemical information derived from Raman spectroscopy with the texture parameters extracted from OCT images. The sensitivity obtained using Raman spectroscopy and OCT individually was 89% and 78% respectively and the specificity was 77% and 74% respectively. Combining the information derived using the two techniques increased both sensitivity and specificity to 94% demonstrating that combining complementary optical information enhances diagnostic accuracy. These data demonstrate that multimodal optical analysis has the potential to achieve accurate non-invasive cancer diagnosis. PMID:24156073
Mason, Alexina J; Gomes, Manuel; Grieve, Richard; Ulug, Pinar; Powell, Janet T; Carpenter, James
2017-08-01
The analyses of randomised controlled trials with missing data typically assume that, after conditioning on the observed data, the probability of missing data does not depend on the patient's outcome, and so the data are 'missing at random' . This assumption is usually implausible, for example, because patients in relatively poor health may be more likely to drop out. Methodological guidelines recommend that trials require sensitivity analysis, which is best informed by elicited expert opinion, to assess whether conclusions are robust to alternative assumptions about the missing data. A major barrier to implementing these methods in practice is the lack of relevant practical tools for eliciting expert opinion. We develop a new practical tool for eliciting expert opinion and demonstrate its use for randomised controlled trials with missing data. We develop and illustrate our approach for eliciting expert opinion with the IMPROVE trial (ISRCTN 48334791), an ongoing multi-centre randomised controlled trial which compares an emergency endovascular strategy versus open repair for patients with ruptured abdominal aortic aneurysm. In the IMPROVE trial at 3 months post-randomisation, 21% of surviving patients did not complete health-related quality of life questionnaires (assessed by EQ-5D-3L). We address this problem by developing a web-based tool that provides a practical approach for eliciting expert opinion about quality of life differences between patients with missing versus complete data. We show how this expert opinion can define informative priors within a fully Bayesian framework to perform sensitivity analyses that allow the missing data to depend upon unobserved patient characteristics. A total of 26 experts, of 46 asked to participate, completed the elicitation exercise. The elicited quality of life scores were lower on average for the patients with missing versus complete data, but there was considerable uncertainty in these elicited values. The missing at random analysis found that patients randomised to the emergency endovascular strategy versus open repair had higher average (95% credible interval) quality of life scores of 0.062 (-0.005 to 0.130). Our sensitivity analysis that used the elicited expert information as pooled priors found that the gain in average quality of life for the emergency endovascular strategy versus open repair was 0.076 (-0.054 to 0.198). We provide and exemplify a practical tool for eliciting the expert opinion required by recommended approaches to the sensitivity analyses of randomised controlled trials. We show how this approach allows the trial analysis to fully recognise the uncertainty that arises from making alternative, plausible assumptions about the reasons for missing data. This tool can be widely used in the design, analysis and interpretation of future trials, and to facilitate this, materials are available for download.
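The style of analysis described above can be caricatured with a simple delta-adjustment sketch, in which the elicited expert opinion enters as a prior on the mean quality-of-life difference between patients with missing and complete data. All numbers, arm labels and distributions below are illustrative assumptions, not the IMPROVE trial's actual model.

```python
# Sketch of a prior-informed missing-not-at-random sensitivity analysis: draw a
# quality-of-life offset for missing patients from an elicited prior, adjust the
# imputed scores, and re-estimate the between-arm difference over many draws.
import numpy as np

rng = np.random.default_rng(42)
n_per_arm, p_missing = 300, 0.21
observed = {"evar": rng.normal(0.80, 0.25, n_per_arm),   # hypothetical endovascular-arm EQ-5D
            "open": rng.normal(0.74, 0.25, n_per_arm)}   # hypothetical open-repair-arm EQ-5D
missing = {arm: rng.random(n_per_arm) < p_missing for arm in observed}

effects = []
for _ in range(2000):
    delta = rng.normal(-0.10, 0.08)    # elicited prior: missing patients score lower on average
    means = {}
    for arm, y in observed.items():
        y_adj = y.copy()
        y_adj[missing[arm]] = y[~missing[arm]].mean() + delta   # MNAR delta adjustment
        means[arm] = y_adj.mean()
    effects.append(means["evar"] - means["open"])

print(np.mean(effects), np.percentile(effects, [2.5, 97.5]))
```

The width of the resulting interval reflects both sampling variability and the uncertainty in the elicited prior, which is the point made in the abstract.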
NASA Astrophysics Data System (ADS)
Effati, Meysam; Thill, Jean-Claude; Shabani, Shahin
2015-04-01
The contention of this paper is that many social science research problems are too "wicked" to be suitably studied using conventional statistical and regression-based methods of data analysis. This paper argues that an integrated geospatial approach based on methods of machine learning is well suited to this purpose. Recognizing the intrinsic wickedness of traffic safety issues, such an approach is used to unravel the complexity of traffic crash severity on highway corridors as an example of such problems. The support vector machine (SVM) and coactive neuro-fuzzy inference system (CANFIS) algorithms are tested as inferential engines to predict crash severity and uncover spatial and non-spatial factors that systematically relate to crash severity, while a sensitivity analysis is conducted to determine the relative influence of crash severity factors. Different specifications of the two methods are implemented, trained, and evaluated against crash events recorded over a 4-year period on a regional highway corridor in Northern Iran. Overall, the SVM model outperforms CANFIS by a notable margin. The combined use of spatial analysis and artificial intelligence is effective at identifying leading factors of crash severity, while explicitly accounting for spatial dependence and spatial heterogeneity effects. Thanks to the demonstrated effectiveness of a sensitivity analysis, this approach produces comprehensive results that are consistent with existing traffic safety theories and supports the prioritization of effective safety measures that are geographically targeted and behaviorally sound on regional highway corridors.
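A minimal sketch of the machine-learning ingredient of such an approach: an SVM classifier for crash severity followed by a permutation-based sensitivity measure of the input factors, on synthetic data with assumed feature names (CANFIS and the spatial analysis are not shown).

```python
# SVM crash-severity classifier plus permutation importance as a simple sensitivity analysis.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([rng.uniform(40, 120, n),      # assumed feature: speed limit (km/h)
                     rng.integers(0, 2, n),        # assumed feature: lighting (day/night)
                     rng.normal(0, 1, n)])         # assumed feature: pavement condition index
y = (0.03 * X[:, 0] + 1.2 * X[:, 1] + rng.normal(0, 1, n) > 3.5).astype(int)  # severe crash flag

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)).fit(X_tr, y_tr)

# Permutation importance: accuracy drop when each factor is shuffled on held-out data.
imp = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
print(dict(zip(["speed_limit", "lighting", "pavement"], imp.importances_mean.round(3))))
```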
Maintaining gender sensitivity in the family practice: facilitators and barriers.
Celik, Halime; Lagro-Janssen, Toine; Klinge, Ineke; van der Weijden, Trudy; Widdershoven, Guy
2009-12-01
This study aims to identify the facilitators and barriers perceived by General Practitioners (GPs) to maintain a gender perspective in family practice. Nine semi-structured interviews were conducted among nine pairs of GPs. The data were analysed by means of deductive content analysis using theory-based methods to generate facilitators and barriers to gender sensitivity. Gender sensitivity in family practice can be influenced by several factors which ultimately determine the extent to which a gender sensitive approach is satisfactorily practiced by GPs in the doctor-patient relationship. Gender awareness, repetition and reminders, motivation triggers and professional guidelines were found to facilitate gender sensitivity. On the other hand, lacking skills and routines, scepticism, heavy workload and the timing of implementation were found to be barriers to gender sensitivity. While the potential effect of each factor affecting gender sensitivity in family practice has been elucidated, the effects of the interplay between these factors still need to be determined.
NASA Technical Reports Server (NTRS)
Sankararaman, Shankar
2016-01-01
This paper presents a computational framework for uncertainty characterization and propagation, and sensitivity analysis under the presence of aleatory and epistemic uncertainty, and develops a rigorous methodology for efficient refinement of epistemic uncertainty by identifying important epistemic variables that significantly affect the overall performance of an engineering system. The proposed methodology is illustrated using the NASA Langley Uncertainty Quantification Challenge (NASA-LUQC) problem that deals with uncertainty analysis of a generic transport model (GTM). First, Bayesian inference is used to infer subsystem-level epistemic quantities using the subsystem-level model and corresponding data. Second, tools of variance-based global sensitivity analysis are used to identify four important epistemic variables (this limitation specified in the NASA-LUQC is reflective of practical engineering situations where not all epistemic variables can be refined due to time/budget constraints) that significantly affect system-level performance. The most significant contribution of this paper is the development of the sequential refinement methodology, where epistemic variables for refinement are not identified all-at-once. Instead, only one variable is first identified, and then, Bayesian inference and global sensitivity calculations are repeated to identify the next important variable. This procedure is continued until all 4 variables are identified and the refinement in the system-level performance is computed. The advantages of the proposed sequential refinement methodology over the all-at-once uncertainty refinement approach are explained, and then applied to the NASA Langley Uncertainty Quantification Challenge problem.
Ultrasonic monitoring of droplets' evaporation: Application to human whole blood.
Laux, D; Ferrandis, J Y; Brutin, D
2016-09-01
During colloidal droplet evaporation, a sol-gel transition can be observed and is described by the desiccation time τD and the gelation time τG. These characteristic times, which can be linked to the viscoelastic properties of the droplet and to its composition, are classically determined by analysing the evolution of droplet mass during evaporation. Even if monitoring mass evolution versus time seems straightforward, this approach is very sensitive to environmental conditions (vibrations, air flow…) as the mass has to be evaluated very accurately using ultra-sensitive weighing scales. In this study we investigated the potential of ultrasonic shear reflectometry to assess τD and τG in a simple and reliable manner. In order to validate this approach, our study focused on blood droplet evaporation, on which a great deal of work has recently been published. Desiccation and gelation times measured with shear ultrasonic reflectometry were found to correlate very well with values obtained from mass-versus-time analysis. This ultrasonic method, which is not very sensitive to environmental perturbations, is therefore very attractive for monitoring the drying of blood droplets in a simple manner and is more generally suitable for investigating the evaporation of complex fluid droplets. Copyright © 2016 Elsevier B.V. All rights reserved.
Brenner, John L.; Jasiewicz, Kristen L.; Fahley, Alisha F.; Kemp, Benedict J.; Abbott, Allison L.
2010-01-01
Summary MicroRNAs (miRNAs) are small, non-coding RNAs that regulate the translation and/or the stability of their mRNA targets. Previous work showed that for most miRNA genes of C. elegans, single gene knockouts did not result in detectable mutant phenotypes [1]. This may be due, in part, to functional redundancy between miRNAs. However, in most cases, worms carrying deletions of all members of a miRNA family do not display strong mutant phenotypes [2]. They may function together with unrelated miRNAs or with non-miRNA genes in regulatory networks, possibly to ensure the robustness of developmental mechanisms. To test this, we examined worms lacking individual miRNAs in genetically sensitized backgrounds. These include genetic backgrounds with reduced processing and activity of all miRNAs or with reduced activity of a wide array of regulatory pathways [3]. Using these two approaches, mutant phenotypes were identified for 25 out of 31 miRNAs included in this analysis. Our findings describe biological roles for individual miRNAs and suggest that use of sensitized genetic backgrounds provides an efficient approach for miRNA functional analysis. PMID:20579881
Extension of the ADjoint Approach to a Laminar Navier-Stokes Solver
NASA Astrophysics Data System (ADS)
Paige, Cody
The use of adjoint methods is common in computational fluid dynamics to reduce the cost of the sensitivity analysis in an optimization cycle. The forward mode ADjoint is a combination of an adjoint sensitivity analysis method with a forward mode automatic differentiation (AD) and is a modification of the reverse mode ADjoint method proposed by Mader et al.[1]. A colouring acceleration technique is presented to reduce the computational cost increase associated with forward mode AD. The forward mode AD facilitates the implementation of the laminar Navier-Stokes (NS) equations. The forward mode ADjoint method is applied to a three-dimensional computational fluid dynamics solver. The resulting Euler and viscous ADjoint sensitivities are compared to the reverse mode Euler ADjoint derivatives and a complex-step method to demonstrate the reduced computational cost and accuracy. Both comparisons demonstrate the benefits of the colouring method and the practicality of using a forward mode AD. [1] Mader, C.A., Martins, J.R.R.A., Alonso, J.J., and van der Weide, E. (2008) ADjoint: An approach for the rapid development of discrete adjoint solvers. AIAA Journal, 46(4):863-873. doi:10.2514/1.29123.
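Forward-mode automatic differentiation of the kind discussed above can be illustrated with JAX's jvp on a toy iterative "solver"; the residual function and parameter below are stand-ins for illustration, not the actual Navier-Stokes ADjoint implementation described in the thesis.

```python
# Forward-mode AD sketch: propagate a directional sensitivity (a Jacobian-vector product)
# through a simple fixed-point iteration that stands in for a flow solver.
import jax
import jax.numpy as jnp

def output_functional(alpha):
    # Toy "solver": relax a state toward an equilibrium that depends on parameter alpha,
    # then return a scalar output (a stand-in for lift or drag).
    u = jnp.zeros(10)
    for _ in range(100):
        u = u + 0.1 * (jnp.sin(alpha) - u + 0.01 * u ** 2)
    return jnp.sum(u ** 2)

alpha0 = jnp.array(0.3)
value, dvalue = jax.jvp(output_functional, (alpha0,), (jnp.array(1.0),))
print(value, dvalue)   # output and its forward-mode sensitivity d(output)/d(alpha)
```

Forward mode costs one extra pass per input direction, which is why the thesis pairs it with a colouring technique; the adjoint (reverse) mode instead yields all input sensitivities of a scalar output in a single backward pass.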
Sigoillot, Frederic D; Huckins, Jeremy F; Li, Fuhai; Zhou, Xiaobo; Wong, Stephen T C; King, Randall W
2011-01-01
Automated time-lapse microscopy can visualize proliferation of large numbers of individual cells, enabling accurate measurement of the frequency of cell division and the duration of interphase and mitosis. However, extraction of quantitative information by manual inspection of time-lapse movies is too time-consuming to be useful for analysis of large experiments. Here we present an automated time-series approach that can measure changes in the duration of mitosis and interphase in individual cells expressing fluorescent histone 2B. The approach requires analysis of only 2 features, nuclear area and average intensity. Compared to supervised learning approaches, this method reduces processing time and does not require generation of training data sets. We demonstrate that this method is as sensitive as manual analysis in identifying small changes in interphase or mitotic duration induced by drug or siRNA treatment. This approach should facilitate automated analysis of high-throughput time-lapse data sets to identify small molecules or gene products that influence timing of cell division.
NASA Astrophysics Data System (ADS)
Guadagnini, A.; Riva, M.; Dell'Oca, A.
2017-12-01
We propose to ground sensitivity of uncertain parameters of environmental models on a set of indices based on the main (statistical) moments, i.e., mean, variance, skewness and kurtosis, of the probability density function (pdf) of a target model output. This enables us to perform Global Sensitivity Analysis (GSA) of a model in terms of multiple statistical moments and yields a quantification of the impact of model parameters on features driving the shape of the pdf of model output. Our GSA approach includes the possibility of being coupled with the construction of a reduced complexity model that allows approximating the full model response at a reduced computational cost. We demonstrate our approach through a variety of test cases. These include a commonly used analytical benchmark, a simplified model representing pumping in a coastal aquifer, a laboratory-scale tracer experiment, and the migration of fracturing fluid through a naturally fractured reservoir (source) to reach an overlying formation (target). Our strategy allows discriminating the relative importance of model parameters to the four statistical moments considered. We also provide an appraisal of the error associated with the evaluation of our sensitivity metrics by replacing the original system model through the selected surrogate model. Our results suggest that one might need to construct a surrogate model with increasing level of accuracy depending on the statistical moment considered in the GSA. The methodological framework we propose can assist the development of analysis techniques targeted to model calibration, design of experiment, uncertainty quantification and risk assessment.
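A minimal sketch of a moment-based sensitivity index in the spirit of the proposal above: estimate how each statistical moment of the output changes when one parameter is (approximately) fixed, by conditioning on bins of that parameter. The toy model and the exact index definition are assumptions for illustration, not the authors' formulation.

```python
# Moment-based global sensitivity sketch: effect of conditioning on each parameter on the
# mean, variance, skewness and kurtosis of a toy model output.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(20000, 3))
Y = X[:, 0] ** 2 + 0.5 * X[:, 1] + 0.1 * X[:, 0] * X[:, 2]     # toy model output

moments = lambda y: np.array([y.mean(), y.var(), skew(y), kurtosis(y)])
ref = moments(Y)                                               # unconditional moments

def moment_indices(j, n_bins=20):
    """Average relative change of each output moment when parameter j is (nearly) fixed."""
    bins = np.digitize(X[:, j], np.linspace(0, 1, n_bins + 1)[1:-1])
    cond = np.array([moments(Y[bins == b]) for b in range(n_bins)])
    return np.mean(np.abs(cond - ref), axis=0) / np.abs(ref)

for j in range(3):
    print(f"x{j}: [mean, var, skew, kurt] sensitivity =", moment_indices(j).round(2))
```

Replacing the toy model with a surrogate of the full simulator, as suggested above, leaves the index computation unchanged.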
Microfluidic single-cell whole-transcriptome sequencing.
Streets, Aaron M; Zhang, Xiannian; Cao, Chen; Pang, Yuhong; Wu, Xinglong; Xiong, Liang; Yang, Lu; Fu, Yusi; Zhao, Liang; Tang, Fuchou; Huang, Yanyi
2014-05-13
Single-cell whole-transcriptome analysis is a powerful tool for quantifying gene expression heterogeneity in populations of cells. Many techniques have, thus, been recently developed to perform transcriptome sequencing (RNA-Seq) on individual cells. To probe subtle biological variation between samples with limiting amounts of RNA, more precise and sensitive methods are still required. We adapted a previously developed strategy for single-cell RNA-Seq that has shown promise for superior sensitivity and implemented the chemistry in a microfluidic platform for single-cell whole-transcriptome analysis. In this approach, single cells are captured and lysed in a microfluidic device, where mRNAs with poly(A) tails are reverse-transcribed into cDNA. Double-stranded cDNA is then collected and sequenced using a next generation sequencing platform. We prepared 94 libraries consisting of single mouse embryonic cells and technical replicates of extracted RNA and thoroughly characterized the performance of this technology. Microfluidic implementation increased mRNA detection sensitivity as well as improved measurement precision compared with tube-based protocols. With 0.2 M reads per cell, we were able to reconstruct a majority of the bulk transcriptome with 10 single cells. We also quantified variation between and within different types of mouse embryonic cells and found that enhanced measurement precision, detection sensitivity, and experimental throughput aided the distinction between biological variability and technical noise. With this work, we validated the advantages of an early approach to single-cell RNA-Seq and showed that the benefits of combining microfluidic technology with high-throughput sequencing will be valuable for large-scale efforts in single-cell transcriptome analysis.
SCALE 6.2 Continuous-Energy TSUNAMI-3D Capabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perfetti, Christopher M; Rearden, Bradley T
2015-01-01
The TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation) capabilities within the SCALE code system make use of sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different systems, quantifying computational biases, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved ease of use and fidelity and the desire to extend TSUNAMI analysis to advanced applications have motivated the development of a SCALE 6.2 module for calculating sensitivity coefficients using three-dimensional (3D) continuous-energy (CE) Monte Carlo methods: CE TSUNAMI-3D. This paper provides an overview of the theory, implementation, and capabilities of the CE TSUNAMI-3D sensitivity analysis methods. CE TSUNAMI contains two methods for calculating sensitivity coefficients in eigenvalue sensitivity applications: (1) the Iterated Fission Probability (IFP) method and (2) the Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Track length importance CHaracterization (CLUTCH) method. This work also presents the GEneralized Adjoint Response in Monte Carlo method (GEAR-MC), a first-of-its-kind approach for calculating adjoint-weighted, generalized response sensitivity coefficients, such as flux responses or reaction rate ratios, in CE Monte Carlo applications. The accuracy and efficiency of the CE TSUNAMI-3D eigenvalue sensitivity methods are assessed from a user perspective in a companion publication, and the accuracy and features of the CE TSUNAMI-3D GEAR-MC methods are detailed in this paper.
Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.
2016-01-01
An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
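To make the scaling argument concrete, below is a minimal discrete-adjoint sketch on a toy linear system; the matrix, objective, and parameterization are invented stand-ins for a CFD residual and design objective, and only the adjoint recipe itself is the point. One adjoint solve yields the gradient with respect to all parameters, where forward differencing would need one solve per parameter.

```python
import numpy as np

# Toy discrete adjoint for R(u, p) = A u - P p = 0 with objective J(u) = u^T u.
# dJ/dp = -lambda^T dR/dp, where A^T lambda = dJ/du: one extra linear solve
# prices the sensitivities with respect to all m parameters.
n, m = 50, 1000                        # state size, number of design parameters
rng = np.random.default_rng(0)
A = np.eye(n) + 0.1 * rng.random((n, n))
P = rng.random((n, m))                 # dR/dp = -P for this toy residual
p = rng.random(m)

u = np.linalg.solve(A, P @ p)          # one forward (state) solve
lam = np.linalg.solve(A.T, 2 * u)      # one adjoint solve, dJ/du = 2u
dJdp = lam @ P                         # gradient w.r.t. all m parameters

# spot-check one component against a finite difference
i, eps = 7, 1e-6
p_pert = p.copy(); p_pert[i] += eps
u_pert = np.linalg.solve(A, P @ p_pert)
assert np.isclose(dJdp[i], (u_pert @ u_pert - u @ u) / eps, rtol=1e-4)
```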
NASA Astrophysics Data System (ADS)
Zdora, M.-C.; Thibault, P.; Deyhle, H.; Vila-Comamala, J.; Rau, C.; Zanette, I.
2018-05-01
X-ray phase-contrast and dark-field imaging provides valuable, complementary information about the specimen under study. Among the multimodal X-ray imaging methods, X-ray grating interferometry and speckle-based imaging have drawn particular attention; in their common implementations, however, they incur certain limitations that can restrict their range of applications. Recently, the unified modulated pattern analysis (UMPA) approach was proposed to overcome these limitations and combine grating- and speckle-based imaging in a single approach. Here, we demonstrate the multimodal imaging capabilities of UMPA and highlight its tunable character regarding spatial resolution, signal sensitivity and scan time by using different reconstruction parameters.
Single-cell analysis of radiotracers' uptake by fluorescence microscopy: direct and droplet approach
NASA Astrophysics Data System (ADS)
Gallina, M. E.; Kim, T. J.; Vasquez, J.; Tuerkcan, S.; Abbyad, P.; Pratx, G.
2017-02-01
Radionuclides are used for sensitive and specific detection of small molecules in vivo and in vitro. Recently, radioluminescence microscopy extended their use to single-cell studies. Here we propose a new single-cell radioisotopic assay that improves throughput while adding sorting capabilities. The new method uses a fluorescence-based sensor for revealing single-cell interactions with radioactive molecular markers. This study focuses on comparing two different experimental approaches. Several probes were tested, and Dihydrorhodamine 123 was selected as the best compromise between sensitivity, brightness and stability. The sensor was incorporated either directly within the cell cytoplasm (direct approach), or it was coencapsulated with radiolabeled single cells in oil-dispersed water droplets (droplet approach). Both approaches successfully activated the fluorescence signal following cellular uptake of 18F-fluorodeoxyglucose (FDG) and external X-ray exposure. The direct approach offered single-cell resolution and long-term stability (> 20 hours); moreover, it could discriminate FDG uptake at labelling concentrations as low as 300 μCi/ml. In cells incubated with Dihydrorhodamine 123 after exposure to high radiation doses (8-16 Gy), the fluorescence signal was found to increase with the depletion of ROS quenchers. On the other hand, the droplet approach required higher labelling concentrations (1.00 mCi/ml) and, at the current state of the art, three cells per droplet are necessary to produce a fluorescent signal. This approach, however, is independent of cellular oxidative stress and, with further improvements, will be more suitable for studying heterogeneous populations. We anticipate this technology to pave the way for the analysis of single-cell interactions with radiomarkers by radiofluorogenic-activated single-cell sorting.
NASA Astrophysics Data System (ADS)
Ney, Michael; Abdulhalim, Ibrahim
2016-03-01
Skin cancer detection at its early stages has been the focus of a large number of experimental and theoretical studies during the past decades. Among these studies, two prominent approaches presenting high potential are reflectometric sensing in the THz wavelength region and polarimetric imaging techniques at visible wavelengths. While the contrast and sensitivity of THz radiation to cancer-related tissue alterations were considered to stem mainly from the elevated water content of cancerous tissue, the polarimetric approach has been verified to enable cancerous tissue differentiation based on cancer-induced structural alterations to the tissue. Combining THz with the polarimetric approach is examined in this study in order to enable higher detection sensitivity than previous, purely reflectometric THz measurements. For this, a comprehensive MC simulation of radiative transfer in a complex skin tissue model fitted for the THz domain has been developed that considers the skin's stratified structure, tissue material optical dispersion modeling, surface roughness, scatterers, and substructure organelles. Additionally, a narrow-beam Mueller matrix differential analysis technique is suggested for assessing skin cancer induced changes in the polarimetric image, enabling the tissue model and MC simulation to be utilized for determining the imaging parameters resulting in maximal detection sensitivity.
Analysis of Publicly Available Skin Sensitization Data from REACH Registrations 2008–2014
Luechtefeld, Thomas; Maertens, Alexandra; Russo, Daniel P.; Rovida, Costanza; Zhu, Hao; Hartung, Thomas
2017-01-01
The public data on skin sensitization from REACH registrations already included 19,111 studies on skin sensitization in December 2014, making it the largest repository of such data so far (1,470 substances with mouse LLNA, 2,787 with GPMT, 762 with both in vivo and in vitro and 139 with only in vitro data). Of these, 21% were classified as sensitizers. The extracted skin sensitization data were analyzed to identify relationships in skin sensitization guidelines, visualize structural relationships of sensitizers, and build models to predict sensitization. A chemical with molecular weight > 500 Da is generally considered non-sensitizing owing to low bioavailability, but 49 sensitizing chemicals with a molecular weight > 500 Da were found. A chemical similarity map was produced using PubChem's 2D Tanimoto similarity metric and Gephi force layout visualization. Nine clusters of chemicals were identified by Blondel's module recognition algorithm, revealing wide module-dependent variation. Approximately 31% of mapped chemicals are Michael acceptors, but this alone does not imply skin sensitization. A simple sensitization model using molecular weight and five ToxTree structural alerts showed a balanced accuracy of 65.8% (specificity 80.4%, sensitivity 51.4%), demonstrating that structural alerts have information value. A simple variant of k-nearest neighbors outperformed the ToxTree approach even at a 75% similarity threshold (82% balanced accuracy at a 0.95 threshold). At higher thresholds, the balanced accuracy increased. Lower similarity thresholds decrease sensitivity faster than specificity. This analysis scopes the landscape of chemical skin sensitization, demonstrating the value of large public datasets for health hazard prediction. PMID:26863411
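The k-nearest-neighbors result above suggests a very small amount of code; a minimal sketch of a similarity-threshold nearest-neighbor classifier with balanced accuracy is given below. The binary fingerprints, labels, and the single-neighbor vote are hypothetical illustrations; only the Tanimoto metric and the threshold idea come from the abstract.

```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto (Jaccard) similarity between two binary fingerprints."""
    union = np.sum(a | b)
    return np.sum(a & b) / union if union else 0.0

def knn_predict(fp, train_fps, train_labels, k=1, threshold=0.95):
    """Majority vote of the k most similar training chemicals, abstaining
    (None) when no neighbor reaches the similarity threshold."""
    labels = np.asarray(train_labels)
    sims = np.array([tanimoto(fp, t) for t in train_fps])
    ok = sims >= threshold
    if not ok.any():
        return None
    top = np.argsort(sims[ok])[::-1][:k]
    return int(round(labels[ok][top].mean()))

def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity and specificity, the metric quoted above."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    sens = np.mean(y_pred[y_true == 1] == 1)
    spec = np.mean(y_pred[y_true == 0] == 0)
    return 0.5 * (sens + spec)
```

Raising the threshold shrinks the set of chemicals that receive a prediction but makes those predictions more reliable, which is the coverage/accuracy trade-off the abstract reports.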
Direct magnetic field estimation based on echo planar raw data.
Testud, Frederik; Splitthoff, Daniel Nicolas; Speck, Oliver; Hennig, Jürgen; Zaitsev, Maxim
2010-07-01
Gradient recalled echo echo planar imaging is widely used in functional magnetic resonance imaging. The fast data acquisition is, however, very sensitive to field inhomogeneities, which manifest themselves as artifacts in the images. Typically used correction methods have the common deficit that the data for the correction are acquired only once at the beginning of the experiment, assuming that the field inhomogeneity distribution B0 does not change over the course of the experiment. In this paper, methods to extract the magnetic field distribution from the acquired k-space data or from the reconstructed phase image of a gradient echo planar sequence are compared and extended. A common derivation for the presented approaches provides a solid theoretical basis, enables a fair comparison, and demonstrates the equivalence of the k-space and the image-phase based approaches. The image phase analysis is extended here to calculate the local gradient in the readout direction, and improvements are introduced to the echo shift analysis, referred to here as "k-space filtering analysis." The described methods are compared to experimentally acquired B0 maps in phantoms and in vivo. The k-space filtering analysis presented in this work proved to be the most sensitive method for detecting field inhomogeneities.
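For orientation, the experimentally acquired B0 reference maps mentioned above are conventionally derived from the phase difference of two gradient echoes via ΔB0 = Δφ/(γ ΔTE); a minimal sketch of that textbook relation follows. The arrays and echo spacing are placeholders, and the paper's k-space filtering and image-phase analyses go beyond this baseline.

```python
import numpy as np

GAMMA = 2 * np.pi * 42.577e6   # proton gyromagnetic ratio, rad/(s*T)

def b0_map(phase1, phase2, delta_te):
    """Dual-echo field map: B0 offset (T) from two gradient-echo phase
    images (rad) acquired delta_te seconds apart.  The complex-exponential
    trick wraps the difference into (-pi, pi]; real data would also need
    spatial unwrapping where |delta_phi| exceeds pi."""
    dphi = np.angle(np.exp(1j * (phase2 - phase1)))
    return dphi / (GAMMA * delta_te)
```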
Proescholdt, Martin A; Faltermeier, Rupert; Bele, Sylvia; Brawanski, Alexander
2017-01-01
Multimodal brain monitoring has been utilized to optimize treatment of patients with critical neurological diseases. However, the amount of data requires an integrative tool set to unmask pathological events in a timely fashion. Recently we have introduced a mathematical model allowing the simulation of pathophysiological conditions such as reduced intracranial compliance and impaired autoregulation. Utilizing a mathematical tool set called selected correlation analysis (sca), correlation patterns, which indicate impaired autoregulation, can be detected in patient data sets (scp). In this study we compared the results of the sca with the pressure reactivity index (PRx), an established marker for impaired autoregulation. Mean PRx values were significantly higher in time segments identified as scp compared to segments showing no selected correlations (nsc). The sca based approach predicted cerebral autoregulation failure with a sensitivity of 78.8% and a specificity of 62.6%. Autoregulation failure, as detected by the results of both analysis methods, was significantly correlated with poor outcome. Sca of brain monitoring data detects impaired autoregulation with high sensitivity and sufficient specificity. Since the sca approach allows the simultaneous detection of both major pathological conditions, disturbed autoregulation and reduced compliance, it may become a useful analysis tool for brain multimodal monitoring data.
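The PRx used here as the reference marker is commonly computed as a moving Pearson correlation between slow-wave averages of arterial blood pressure (ABP) and intracranial pressure (ICP); a minimal sketch under that common definition follows. The 10-s averaging and 30-sample (roughly 5-minute) window are conventional choices, not necessarily this study's exact settings.

```python
import numpy as np

def prx(abp_10s, icp_10s, window=30):
    """Pressure reactivity index: moving correlation between consecutive
    10-s averages of ABP and ICP over a sliding window; values near +1
    suggest pressure-passive (impaired) cerebrovascular reactivity."""
    out = np.full(len(abp_10s), np.nan)
    for i in range(window - 1, len(abp_10s)):
        a = abp_10s[i - window + 1: i + 1]
        b = icp_10s[i - window + 1: i + 1]
        out[i] = np.corrcoef(a, b)[0, 1]
    return out
```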
NASA Astrophysics Data System (ADS)
Khorashadi Zadeh, Farkhondeh; Nossent, Jiri; van Griensven, Ann; Bauwens, Willy
2017-04-01
Parameter estimation is a major concern in hydrological modeling, which may limit the use of complex simulators with a large number of parameters. To support the selection of parameters to include in or exclude from the calibration process, Global Sensitivity Analysis (GSA) is widely applied in modeling practices. Based on the results of GSA, the influential and the non-influential parameters are identified (i.e. parameter screening). Nevertheless, the choice of the screening threshold below which parameters are considered non-influential is a critical issue, which has recently received more attention in the GSA literature. In theory, the sensitivity index of a non-influential parameter has a value of zero. However, since numerical approximations, rather than analytical solutions, are utilized in GSA methods to calculate the sensitivity indices, small but non-zero indices may be obtained for non-influential parameters. In order to assess the threshold that identifies non-influential parameters in GSA methods, we propose to calculate the sensitivity index of a "dummy parameter". This dummy parameter has no influence on the model output, but will have a non-zero sensitivity index, representing the error due to the numerical approximation. Hence, the parameters whose indices are above the sensitivity index of the dummy parameter can be classified as influential, whereas the parameters whose indices are below this index are within the range of the numerical error and should be considered as non-influential. To demonstrate the effectiveness of the proposed "dummy parameter approach", 26 parameters of a Soil and Water Assessment Tool (SWAT) model are selected to be analyzed and screened, using the variance-based Sobol' and moment-independent PAWN methods. The sensitivity index of the dummy parameter is calculated from sampled data, without changing the model equations. Moreover, the calculation does not even require additional model evaluations for the Sobol' method. A formal statistical test validates these parameter screening results. Based on the dummy parameter screening, 11 model parameters are identified as influential. It can therefore be concluded that the "dummy parameter approach" facilitates the parameter screening process and provides guidance for GSA users to define a screening threshold, with only limited additional resources. Key words: Parameter screening, Global sensitivity analysis, Dummy parameter, Variance-based method, Moment-independent method
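A minimal sketch of the dummy-parameter idea: append one input column that the model never sees, estimate first-order indices with a Monte Carlo estimator, and use the dummy's (pure-noise) index as the screening threshold. The Homma-Saltelli-style estimator and the toy model are illustrative choices, not the SWAT/PAWN setup of the abstract.

```python
import numpy as np

def first_order_with_dummy(model, n_params, n=8192, seed=0):
    """First-order Sobol' indices plus a dummy parameter.  Because the
    model ignores the dummy column, its index is exactly the estimator's
    numerical-error floor, which serves as the screening threshold."""
    rng = np.random.default_rng(seed)
    d = n_params + 1                       # last column is the dummy
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = model(A[:, :n_params]), model(B[:, :n_params])
    var = np.var(np.concatenate([fA, fB]))
    s = np.empty(d)
    for i in range(d):
        BAi = B.copy()
        BAi[:, i] = A[:, i]                # B with column i taken from A
        fBAi = model(BAi[:, :n_params])
        s[i] = (np.mean(fA * fBAi) - fA.mean() * fB.mean()) / var
    return s[:n_params], abs(s[n_params])  # indices, screening threshold

# toy model: only the first two of five inputs matter
model = lambda x: x[:, 0] + 0.5 * x[:, 1] ** 2
indices, floor = first_order_with_dummy(model, 5)
print(np.where(indices > floor)[0])        # typically -> [0 1]
```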
von Bargen, Christoph; Dojahn, Jörg; Waidelich, Dietmar; Humpf, Hans-Ulrich; Brockmeyer, Jens
2013-12-11
The accidental or fraudulent blending of meat from different species is a highly relevant aspect of food product quality control, especially for consumers with ethical objections to species such as horse or pork. In this study, we present a sensitive mass spectrometric approach for the detection of trace contaminations of horse meat and pork and demonstrate the specificity of the identified biomarker peptides against chicken, lamb, and beef. Biomarker peptides were identified by a shotgun proteomic approach using tryptic digests of protein extracts and were verified by the analysis of 21 different meat samples from the 5 species included in this study. For the most sensitive peptides, a multiple reaction monitoring (MRM) method was developed that allows for the detection of 0.55% horse or pork in a beef matrix. To enhance sensitivity, we applied MRM3 experiments and were able to detect down to 0.13% pork contamination in beef. To the best of our knowledge, we present here the first rapid and sensitive mass spectrometric method for the detection of horse and pork by use of MRM and MRM3.
Bernstein, Joshua G.W.; Mehraei, Golbarg; Shamma, Shihab; Gallun, Frederick J.; Theodoroff, Sarah M.; Leek, Marjorie R.
2014-01-01
Background A model that can accurately predict speech intelligibility for a given hearing-impaired (HI) listener would be an important tool for hearing-aid fitting or hearing-aid algorithm development. Existing speech-intelligibility models do not incorporate variability in suprathreshold deficits that are not well predicted by classical audiometric measures. One possible approach to the incorporation of such deficits is to base intelligibility predictions on sensitivity to simultaneously spectrally and temporally modulated signals. Purpose The likelihood of success of this approach was evaluated by comparing estimates of spectrotemporal modulation (STM) sensitivity to speech intelligibility and to psychoacoustic estimates of frequency selectivity and temporal fine-structure (TFS) sensitivity across a group of HI listeners. Research Design The minimum modulation depth required to detect STM applied to an 86 dB SPL four-octave noise carrier was measured for combinations of temporal modulation rate (4, 12, or 32 Hz) and spectral modulation density (0.5, 1, 2, or 4 cycles/octave). STM sensitivity estimates for individual HI listeners were compared to estimates of frequency selectivity (measured using the notched-noise method at 500, 1000, 2000, and 4000 Hz), TFS processing ability (2 Hz frequency-modulation detection thresholds for 500, 1000, 2000, and 4000 Hz carriers) and sentence intelligibility in noise (at a 0 dB signal-to-noise ratio) that were measured for the same listeners in a separate study. Study Sample Eight normal-hearing (NH) listeners and 12 listeners with a diagnosis of bilateral sensorineural hearing loss participated. Data Collection and Analysis STM sensitivity was compared between NH and HI listener groups using a repeated-measures analysis of variance. A stepwise regression analysis compared STM sensitivity for individual HI listeners to audiometric thresholds, age, and measures of frequency selectivity and TFS processing ability. A second stepwise regression analysis compared speech intelligibility to STM sensitivity and the audiogram-based Speech Intelligibility Index. Results STM detection thresholds were elevated for the HI listeners, but only for low rates and high densities. STM sensitivity for individual HI listeners was well predicted by a combination of estimates of frequency selectivity at 4000 Hz and TFS sensitivity at 500 Hz but was unrelated to audiometric thresholds. STM sensitivity accounted for an additional 40% of the variance in speech intelligibility beyond the 40% accounted for by the audibility-based Speech Intelligibility Index. Conclusions Impaired STM sensitivity likely results from a combination of a reduced ability to resolve spectral peaks and a reduced ability to use TFS information to follow spectral-peak movements. Combining STM sensitivity estimates with audiometric threshold measures for individual HI listeners provided a more accurate prediction of speech intelligibility than audiometric measures alone. These results suggest a significant likelihood of success for an STM-based model of speech intelligibility for HI listeners. PMID:23636210
NASA Astrophysics Data System (ADS)
Gan, Yanjun; Liang, Xin-Zhong; Duan, Qingyun; Choi, Hyun Il; Dai, Yongjiu; Wu, Huan
2015-06-01
An uncertainty quantification framework was employed to examine the sensitivities of 24 model parameters from a newly developed Conjunctive Surface-Subsurface Process (CSSP) land surface model (LSM). The sensitivity analysis (SA) was performed over 18 representative watersheds in the contiguous United States to examine the influence of model parameters in the simulation of terrestrial hydrological processes. Two normalized metrics, relative bias (RB) and Nash-Sutcliffe efficiency (NSE), were adopted to assess the fit between simulated and observed streamflow discharge (SD) and evapotranspiration (ET) for a 14 year period. SA was conducted using a multiobjective two-stage approach, in which the first stage was a qualitative SA using the Latin Hypercube-based One-At-a-Time (LH-OAT) screening, and the second stage was a quantitative SA using the Multivariate Adaptive Regression Splines (MARS)-based Sobol' sensitivity indices. This approach combines the merits of qualitative and quantitative global SA methods, and is effective and efficient for understanding and simplifying large, complex system models. Ten of the 24 parameters were identified as important across different watersheds. The contribution of each parameter to the total response variance was then quantified by Sobol' sensitivity indices. Generally, parameter interactions contribute the most to the response variance of the CSSP, and only 5 out of 24 parameters dominate model behavior. Four photosynthetic and respiratory parameters are shown to be influential to ET, whereas reference depth for saturated hydraulic conductivity is the most influential parameter for SD in most watersheds. Parameter sensitivity patterns mainly depend on hydroclimatic regime, as well as vegetation type and soil texture.
SPR based hybrid electro-optic biosensor for β-lactam antibiotics determination in water
NASA Astrophysics Data System (ADS)
Galatus, Ramona; Feier, Bogdan; Cristea, Cecilia; Cennamo, Nunzio; Zeni, Luigi
2017-09-01
The present work aims to provide a hybrid platform capable of complementary and sensitive detection of β-lactam antibiotics, ampicillin in particular. The use of an aptamer specific to ampicillin assures good selectivity and sensitivity for the detection of ampicillin from different matrices. This new approach is dedicated to a portable remote-sensing platform based on a low-cost, small-size, low-power solution. The simple experimental hybrid platform integrates the results from the D-shape surface plasmon resonance plastic optical fiber (SPR-POF) and from the electrochemical (bio)sensor for the analysis of ampicillin, delivering sensitive and reliable results. The SPR-POF sensor, already used in many previous applications, is embedded in a new experimental setup with fluorescent-fiber emitters for broadband wavelength analysis, low power consumption, and low heating of the sensing platform.
Simulation Modeling of Software Development Processes
NASA Technical Reports Server (NTRS)
Calavaro, G. F.; Basili, V. R.; Iazeolla, G.
1996-01-01
A simulation modeling approach is proposed for the prediction of software process productivity indices, such as cost and time-to-market, and the sensitivity analysis of such indices to changes in the organization parameters and user requirements. The approach uses a timed Petri Net and Object Oriented top-down model specification. Results demonstrate the model's representativeness and its usefulness in verifying process conformance to expectations and in performing continuous process improvement and optimization.
DNA-mounted self-assembly: new approaches for genomic analysis and SNP detection.
Bichenkova, Elena V; Lang, Zhaolei; Yu, Xuan; Rogert, Candelaria; Douglas, Kenneth T
2011-01-01
This article presents an overview of new emerging approaches for nucleic acid detection via hybridization techniques that can potentially be applied to genomic analysis and SNP identification in clinical diagnostics. Despite the availability of a diverse variety of SNP genotyping technologies on the diagnostic market, none has truly succeeded in dominating its competitors thus far. Having been designed for specific diagnostic purposes or clinical applications, each of the existing bio-assay systems (briefly outlined here) is usually limited to a relatively narrow aspect or format of nucleic acid detection, and thus cannot entirely satisfy all the varieties of commercial requirements and clinical demands. This drives the diagnostic sector to pursue novel, cost-effective approaches to ensure rapid and reliable identification of pathogenic or hereditary human diseases. Hence, the purpose of this review is to highlight some new strategic directions in DNA detection technologies in order to inspire development of novel molecular diagnostic tools and bio-assay systems with superior reliability, reproducibility, robustness, accuracy and sensitivity at lower assay cost. One approach to improving the sensitivity of an assay to confidently discriminate between single point mutations is based on the use of target assembled, split-probe systems, which constitutes the main focus of this review.
Becker, M; Zweckmair, T; Forneck, A; Rosenau, T; Potthast, A; Liebner, F
2013-03-15
Gas chromatographic analysis of complex carbohydrate mixtures requires highly effective and reliable derivatisation strategies for successful separation, identification, and quantitation of all constituents. Different single-step (per-trimethylsilylation, isopropylidenation) and two-step approaches (ethoximation-trimethylsilylation, ethoximation-trifluoroacetylation, benzoximation-trimethylsilylation, benzoximation-trifluoroacetylation) have been comprehensively studied with regard to chromatographic characteristics, informational value of mass spectra, ease of peak assignment, robustness toward matrix effects, and quantitation using a set of reference compounds that comprise eight monosaccharides (C5-C6), glycolaldehyde, and dihydroxyacetone. It has been shown that isopropylidenation and the two oximation-trifluoroacetylation approaches are least suitable for complex carbohydrate matrices. Whereas the former is limited to compounds that contain vicinal dihydroxy moieties in cis configuration, the latter two methods are sensitive to traces of trifluoroacetic acid, which strongly supports decomposition of ketohexoses. It has been demonstrated for two "real" carbohydrate-rich matrices of biological and synthetic origin, respectively, that two-step ethoximation-trimethylsilylation is superior to other approaches due to the low number of peaks obtained per carbohydrate, good peak separation performance, structural information of mass spectra, low limits of detection and quantitation, minor relative standard deviations, and low sensitivity toward matrix effects.
Feng, Sheng; Shi, Jun; Parrott, Neil; Hu, Pei; Weber, Cornelia; Martin-Facklam, Meret; Saito, Tomohisa; Peck, Richard
2016-07-01
We propose a strategy for studying ethnopharmacology by conducting sequential physiologically based pharmacokinetic (PBPK) prediction (a 'bottom-up' approach) and population pharmacokinetic (popPK) confirmation (a 'top-down' approach), or in reverse order, depending on whether the purpose is ethnic effect assessment for a new molecular entity under development or a tool for ethnic sensitivity prediction for a given pathway. The strategy is exemplified with bitopertin. A PBPK model was built using Simcyp® to simulate the pharmacokinetics of bitopertin and to predict the ethnic sensitivity in clearance, given pharmacokinetic data in just one ethnicity. Subsequently, a popPK model was built using NONMEM® to assess the effect of ethnicity on clearance, using human data from multiple ethnic groups. A comparison was made to confirm the PBPK-based ethnic sensitivity prediction, using the results of the popPK analysis. PBPK modelling predicted that the bitopertin geometric mean clearance values after 20 mg oral administration in Caucasians would be 1.32-fold and 1.27-fold higher than the values in Chinese and Japanese, respectively. The ratios of typical clearance in Caucasians to the values in Chinese and Japanese estimated by popPK analysis were 1.20 and 1.17, respectively. The popPK analysis results were similar to the PBPK modelling results. As a general framework, we propose that PBPK modelling should be considered to predict ethnic sensitivity of pharmacokinetics prior to any human data and/or with data in only one ethnicity. In some cases, this will be sufficient to guide initial dose selection in different ethnicities. After clinical trials in different ethnicities, popPK analysis can be used to confirm ethnic differences and to support dose justification and labelling. PBPK modelling prediction and popPK analysis confirmation can complement each other to assess ethnic differences in pharmacokinetics at different drug development stages.
Characterization of emission microscopy and liquid crystal thermography in IC fault localization
NASA Astrophysics Data System (ADS)
Lau, C. K.; Sim, K. S.
2013-05-01
This paper characterizes two fault localization techniques, emission microscopy (EMMI) and liquid crystal thermography (LCT), using integrated circuit (IC) leakage failures. The majority of today's semiconductor failures do not reveal a clear visual defect on the die surface and therefore require fault localization tools to identify the fault location. Among the various fault localization tools, liquid crystal thermography and frontside emission microscopy are commonly used in most semiconductor failure analysis laboratories. The two techniques are often mistaken for one another, as if both detected hot spots in chips failing with shorts or leakage. As a result, analysts tend to use only LCT, since this technique involves a very simple test setup compared with EMMI. The omission of EMMI as the alternative technique in fault localization leads to incomplete analysis whenever LCT fails to localize any hot spot on a failing chip. Therefore, this research was established to characterize and compare both techniques in terms of their sensitivity in detecting the fault location in common semiconductor failures. A new method was also proposed as an alternative technique, namely the backside LCT technique. The research observed that both techniques successfully detected the defect locations resulting from the leakage failures. LCT was observed to be more sensitive than EMMI in the frontside analysis approach. On the other hand, EMMI performed better in the backside analysis approach. LCT was more sensitive in localizing ESD defect locations, and EMMI was more sensitive in detecting non-ESD defect locations. Backside LCT was proven to work as effectively as frontside LCT and is ready to serve as an alternative to backside EMMI. The research confirmed that LCT detects heat generation and EMMI detects photon emission (recombination radiation). The analysis results also suggested that the two techniques complement each other in IC fault localization. It is necessary for a failure analyst to use both techniques when one of them produces no result.
Sensitivity analysis of periodic errors in heterodyne interferometry
NASA Astrophysics Data System (ADS)
Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony
2011-03-01
Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.
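The local, one-at-a-time half of such a study reduces to central differences about the nominal alignment. The model function below is a made-up stand-in with the same shape as the paper's analytical periodic-error model (misalignment angles in, first- and second-order error amplitudes out); only the differencing recipe is the point.

```python
import numpy as np

def periodic_error(params):
    """Hypothetical stand-in for the analytical model: maps (polarizer
    misalignment, beam-splitter misalignment, frequency-mixing angle,
    ellipticity) to first- and second-order error amplitudes in nm."""
    alpha, beta, theta, ell = params
    a1 = 5.0 * abs(np.sin(theta)) + 2.0 * ell
    a2 = 3.0 * np.sin(alpha) ** 2 + 1.5 * np.sin(beta) ** 2
    return np.array([a1, a2])

def local_sensitivities(f, x0, rel_step=1e-4):
    """Central-difference Jacobian S[j, i] = d f_j / d x_i at x0."""
    x0 = np.asarray(x0, dtype=float)
    S = np.zeros((f(x0).size, x0.size))
    for i in range(x0.size):
        h = rel_step * (abs(x0[i]) or 1.0)
        xp, xm = x0.copy(), x0.copy()
        xp[i] += h
        xm[i] -= h
        S[:, i] = (f(xp) - f(xm)) / (2 * h)
    return S

print(local_sensitivities(periodic_error, [0.01, 0.02, 0.005, 0.1]))
```

The variance-based global step then replaces the single nominal point with Monte Carlo sampling over the input uncertainty ranges, as in the Sobol' analysis described above.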
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, Shirong; Davis, Michael J.; Skodje, Rex T.
2015-11-12
The sensitivity of kinetic observables is analyzed using a newly developed sum over histories representation of chemical kinetics. In the sum over histories representation, the concentrations of the chemical species are decomposed into the sum of probabilities for chemical pathways that follow molecules from reactants to products or intermediates. Unlike static flux methods for reaction path analysis, the sum over histories approach includes the explicit time dependence of the pathway probabilities. Using the sum over histories representation, the sensitivity of an observable with respect to a kinetic parameter such as a rate coefficient is then analyzed in terms of how that parameter affects the chemical pathway probabilities. The method is illustrated for species concentration target functions in H2 combustion where the rate coefficients are allowed to vary over their associated uncertainty ranges. It is found that large sensitivities are often associated with rate-limiting steps along important chemical pathways or with reactions that control the branching of reactive flux.
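For the simplest history, A -> B -> C with first-order rate coefficients k1 and k2, the time-dependent pathway probability has a closed form, and log-scaled sensitivities follow by central differencing; the sketch below is only a two-step illustration of the idea (the paper treats full H2 combustion mechanisms).

```python
import numpy as np

def pathway_prob(t, k1, k2):
    """Probability that a molecule starting in A has completed the whole
    history A -> B -> C by time t (equals [C](t)/[A]_0 for this chain).
    Assumes k1 != k2.  Note the explicit time dependence, unlike a
    static flux analysis."""
    return (1.0 - np.exp(-k1 * t)
            - k1 * (np.exp(-k1 * t) - np.exp(-k2 * t)) / (k2 - k1))

def log_sensitivities(t, k1, k2, rel=1e-6):
    """d P / d ln k_i by central differences (log-scaled, as is usual
    for rate-coefficient sensitivities)."""
    h1, h2 = rel * k1, rel * k2
    s1 = k1 * (pathway_prob(t, k1 + h1, k2) - pathway_prob(t, k1 - h1, k2)) / (2 * h1)
    s2 = k2 * (pathway_prob(t, k1, k2 + h2) - pathway_prob(t, k1, k2 - h2)) / (2 * h2)
    return s1, s2

t = np.linspace(0.5, 5.0, 10)
print(log_sensitivities(t, k1=1.0, k2=2.0))  # compare the two steps' influence over time
```

Even on this tiny chain, making k2 much larger than k1 drives the k2 sensitivity toward zero while the k1 sensitivity stays large, reproducing the abstract's observation that large sensitivities attach to rate-limiting steps.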
Perturbation analysis for patch occupancy dynamics
Martin, Julien; Nichols, James D.; McIntyre, Carol L.; Ferraz, Goncalo; Hines, James E.
2009-01-01
Perturbation analysis is a powerful tool to study population and community dynamics. This article describes expressions for sensitivity metrics reflecting changes in equilibrium occupancy resulting from small changes in the vital rates of patch occupancy dynamics (i.e., probabilities of local patch colonization and extinction). We illustrate our approach with a case study of occupancy dynamics of Golden Eagle (Aquila chrysaetos) nesting territories. Examination of the hypothesis of system equilibrium suggests that the system satisfies equilibrium conditions. Estimates of vital rates obtained using patch occupancy models are used to estimate equilibrium patch occupancy of eagles. We then compute estimates of sensitivity metrics and discuss their implications for eagle population ecology and management. Finally, we discuss the intuition underlying our sensitivity metrics and then provide examples of ecological questions that can be addressed using perturbation analyses. For instance, the sensitivity metrics lead to predictions about the relative importance of local colonization and local extinction probabilities in influencing equilibrium occupancy for rare and common species.
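For the standard patch-dynamics model with constant local colonization probability γ and local extinction probability ε, the equilibrium occupancy and its sensitivities have simple closed forms; the sketch below is one common formulation and may differ in detail from the metrics derived in the article.

```latex
% Occupancy dynamics: \psi_{t+1} = \psi_t(1-\epsilon) + (1-\psi_t)\gamma.
% Setting \psi_{t+1} = \psi_t gives the equilibrium occupancy
\psi^{*} = \frac{\gamma}{\gamma + \epsilon},
% and small perturbations of the vital rates change \psi^{*} by
\frac{\partial \psi^{*}}{\partial \gamma} = \frac{\epsilon}{(\gamma+\epsilon)^{2}},
\qquad
\frac{\partial \psi^{*}}{\partial \epsilon} = -\frac{\gamma}{(\gamma+\epsilon)^{2}}.
```

For a rare species (γ much smaller than ε) the colonization sensitivity dominates, while for a common species the extinction sensitivity does, which is the kind of rare-versus-common prediction mentioned at the end of the abstract.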
Tian, Ruijun; Jin, Jing; Taylor, Lorne; Larsen, Brett; Quaggin, Susan E; Pawson, Tony
2013-04-01
Gangliosides are ubiquitous components of cell membranes. Their interactions with bacterial toxins and membrane-associated proteins (e.g. receptor tyrosine kinases) have important roles in the regulation of multiple cellular functions. Currently, an effective approach for measuring ganglioside-protein interactions, especially in a large-scale fashion, is largely missing. To this end, we report a facile MS-based approach to explore gangliosides extracted from cells and measure their interactions with a protein of interest globally. We optimized a two-step protocol for extracting total gangliosides from cells within 2 h. Easy-to-use magnetic beads conjugated with a protein of interest were used to capture interacting gangliosides. To measure ganglioside-protein interaction on a global scale, we applied a highly sensitive LC-MS system, combining hydrophilic interaction LC separation with multiple reaction monitoring-based MS for ganglioside detection. Sensitivity for ganglioside GM1 is below 100 pg, and the whole analysis can be done in 20 min with isocratic elution. To measure ganglioside interactions with soluble vascular endothelial growth factor receptor 1 (sFlt1), we extracted and readily detected 36 species of gangliosides from perivascular retinal pigment epithelium cells across eight different classes. Twenty-three ganglioside species had significant interactions with sFlt1 as compared with the IgG control, based on a p value cutoff < 0.05. These results show that the described method provides a rapid and highly sensitive approach for systematically measuring ganglioside-protein interactions.
Tsao, Chia-Wen; Yang, Zhi-Jie
2015-10-14
Desorption/ionization on silicon (DIOS) is a high-performance matrix-free mass spectrometry (MS) analysis method that involves using silicon nanostructures as a matrix for MS desorption/ionization. In this study, gold nanoparticles grafted onto a nanostructured silicon (AuNPs-nSi) surface were demonstrated as a DIOS-MS analysis approach with high sensitivity and high detection specificity for glucose detection. A glucose sample deposited on the AuNPs-nSi surface was directly catalyzed to negatively charged gluconic acid molecules on a single AuNPs-nSi chip for MS analysis. The AuNPs-nSi surface was fabricated using two electroless deposition steps and one electroless etching step. The effects of the electroless fabrication parameters on the glucose detection efficiency were evaluated. Practical application of AuNPs-nSi MS glucose analysis in urine samples was also demonstrated in this study.
NASA Astrophysics Data System (ADS)
Ha, Taesung
A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed, including event sequence modeling, system modeling, and quantification. To support the quantification of the accident sequences identified, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors is significantly different from that in power reactors, a time-oriented HRA model (reliability physics model) was applied to estimate the human error probability (HEP) of the core relocation. This model is based on two competing random variables: phenomenological time and performance time. The response surface and direct Monte Carlo simulation with Latin Hypercube sampling were applied for estimating the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The human error probability (HEP) for the core relocation was estimated from these two competing quantities: phenomenological time and operators' performance time. The sensitivity of each probability distribution in human reliability estimation was investigated. In order to quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected due to its capability of incorporating uncertainties in the model itself and in the parameters of that model. The HEP from the current time-oriented model was compared with that from the ASEP approach. Both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability, and its potential usefulness for quantifying model uncertainty through sensitivity analysis in the PRA model.
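The two-competing-random-variables idea reduces to a small Monte Carlo: draw a phenomenological time and a performance time and count how often the operators are too slow. The Weibull and lognormal shapes below are loudly illustrative placeholders, not the distributions fitted in the study.

```python
import numpy as np

def hep_estimate(n=200_000, seed=1):
    """Reliability-physics HEP: P(performance time > phenomenological time),
    estimated by Monte Carlo with placeholder distributions (minutes)."""
    rng = np.random.default_rng(seed)
    t_phenom = 30.0 * rng.weibull(2.5, n)            # time until core damage
    t_perform = rng.lognormal(np.log(12.0), 0.5, n)  # operator response time
    return np.mean(t_perform > t_phenom)

print(f"HEP ~ {hep_estimate():.3f}")
```

Swapping in alternative distributions for either time and re-running the estimate is exactly the kind of sensitivity study the abstract describes, and a Bayesian layer over the distribution parameters yields the uncertainty on the HEP itself.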
Cost Effectiveness of Field Trauma Triage among Injured Adults Served by Emergency Medical Services
Newgard, Craig D; Yang, Zhuo; Nishijima, Daniel; McConnell, K John; Trent, Stacy; Holmes, James F; Daya, Mohamud; Mann, N Clay; Hsia, Renee Y; Rea, Tom; Wang, N Ewen; Staudenmayer, Kristan; Delgado, M Kit
2016-01-01
Background The American College of Surgeons Committee on Trauma sets national targets for the accuracy of field trauma triage at ≥ 95% sensitivity and ≥ 65% specificity, yet the cost-effectiveness of realizing these goals is unknown. We evaluated the cost-effectiveness of current field trauma triage practices compared to triage strategies consistent with the national targets. Study Design This was a cost-effectiveness analysis using data from 79,937 injured adults transported by 48 emergency medical services (EMS) agencies to 105 trauma and non-trauma hospitals in 6 regions of the Western U.S. from 2006 through 2008. Incremental differences in survival, quality adjusted life years (QALYs), costs, and the incremental cost-effectiveness ratio (ICER; costs per QALY gained) were estimated for each triage strategy over a 1-year and lifetime horizon using a decision analytic Markov model. We considered an ICER threshold of less than $100,000 to be cost-effective. Results For these 6 regions, a high sensitivity triage strategy consistent with national trauma policy (sensitivity 98.6%, specificity 17.1%) would cost $1,317,333 per QALY gained, while current triage practices (sensitivity 87.2%, specificity 64.0%) cost $88,000 per QALY gained compared to a moderate sensitivity strategy (sensitivity 71.2%, specificity 66.5%). Refining EMS transport patterns by triage status improved cost-effectiveness. At the trauma system level, a high-sensitivity triage strategy would save 3.7 additional lives per year at a 1-year cost of $8.78 million, while a moderate sensitivity approach would cost 5.2 additional lives and save $781,616 each year. Conclusions A high-sensitivity approach to field triage consistent with national trauma policy is not cost effective. The most cost effective approach to field triage appears closely tied to triage specificity and adherence to triage-based EMS transport practices. PMID:27178369
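The decision metric itself is one line; the numbers in the usage comment are purely illustrative magnitudes on the abstract's scale, not a reproduction of the Markov model.

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio: extra dollars paid per extra
    quality-adjusted life year gained by the new strategy."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# Illustration: a strategy costing $8.78M more than the reference and
# gaining 100 QALYs would come out at $87,800 per QALY, just under the
# $100,000/QALY willingness-to-pay threshold used in the study.
print(icer(8_780_000, 100.0, 0, 0.0))
```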
Groff, Shannon C.; Loftin, Cynthia S.; Drummond, Frank; Bushmann, Sara; McGill, Brian J.
2016-01-01
Non-native honeybees historically have been managed for crop pollination; however, recent population declines draw attention to pollination services provided by native bees. We applied the InVEST Crop Pollination model, developed to predict native bee abundance from habitat resources, in Maine's wild blueberry crop landscape. We evaluated model performance with parameters informed by four approaches: 1) expert opinion; 2) sensitivity analysis; 3) sensitivity-analysis-informed model optimization; and 4) simulated annealing (uninformed) model optimization. Uninformed optimization improved model performance by 29% compared with the expert-opinion-informed model, while sensitivity-analysis-informed optimization improved model performance by 54%. This suggests that expert opinion may not yield the best parameter values for the InVEST model. The proportion of deciduous/mixed forest within 2000 m of a blueberry field also reliably predicted native bee abundance in blueberry fields; however, the InVEST model provides an efficient tool to estimate bee abundance beyond the field perimeter.
Advances in ultrasensitive mass spectrometry of organic molecules.
Kandiah, Mathivathani; Urban, Pawel L
2013-06-21
Ultrasensitive mass spectrometric analysis of organic molecules is important for various branches of chemistry and other fields including physics, earth and environmental sciences, archaeology, biomedicine, and materials science. It finds applications, as an enabling tool, in systems biology, biological imaging, clinical analysis, and forensics. Although there are a number of technical obstacles associated with the analysis of samples by mass spectrometry at ultratrace level (for example, analyte losses during sample preparation, insufficient sensitivity, and ion suppression), several noteworthy developments have been made over the years. They include sensitive ion sources, loss-free interfaces, ion optics components, efficient mass analyzers and detectors, as well as "smart" sample preparation strategies. Some of the mass spectrometric methods published to date can achieve sensitivity that is several orders of magnitude higher than that of alternative approaches. Femto- and attomole level limits of detection are nowadays common, while zepto- and yoctomole level limits of detection have also been reported. We envision that ultrasensitive mass spectrometric assays will soon contribute to new discoveries in bioscience and other areas.
Kim, Won Hwa; Singh, Vikas; Chung, Moo K.; Hinrichs, Chris; Pachauri, Deepti; Okonkwo, Ozioma C.; Johnson, Sterling C.
2014-01-01
Statistical analysis on arbitrary surface meshes such as the cortical surface is an important approach to understanding brain diseases such as Alzheimer's disease (AD). Surface analysis may be able to identify specific cortical patterns that relate to certain disease characteristics or exhibit differences between groups. Our goal in this paper is to make group analysis of signals on surfaces more sensitive. To do this, we derive multi-scale shape descriptors that characterize the signal around each mesh vertex, i.e., its local context, at varying levels of resolution. In order to define such a shape descriptor, we make use of recent results from harmonic analysis that extend traditional continuous wavelet theory from the Euclidean to a non-Euclidean setting (i.e., a graph, mesh or network). Using this descriptor, we conduct experiments on two different datasets, the Alzheimer's Disease NeuroImaging Initiative (ADNI) data and images acquired at the Wisconsin Alzheimer's Disease Research Center (W-ADRC), focusing on individuals labeled as having Alzheimer's disease (AD), mild cognitive impairment (MCI) and healthy controls. In particular, we contrast traditional univariate methods with our multi-resolution approach, which shows increased sensitivity and improved statistical power to detect group-level effects. We also provide an open source implementation. PMID:24614060
Breathing dynamics based parameter sensitivity analysis of hetero-polymeric DNA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talukder, Srijeeta; Sen, Shrabani; Chaudhury, Pinaki, E-mail: pinakc@rediffmail.com
We study the parameter sensitivity of hetero-polymeric DNA within the purview of DNA breathing dynamics. The degree of correlation between the mean bubble size and the model parameters is estimated for this purpose for three different DNA sequences. The analysis leads us to a better understanding of the sequence-dependent nature of the breathing dynamics of hetero-polymeric DNA. Out of the 14 model parameters for DNA stability in the statistical Poland-Scheraga approach, the hydrogen bond interaction ε_hb(AT) for an AT base pair and the ring factor ξ turn out to be the most sensitive parameters. In addition, the stacking interaction ε_st(TA-TA) for a TA-TA nearest-neighbor pair of base pairs is found to be the most sensitive one among all stacking interactions. Moreover, we also establish that the nature of the stacking interaction has a deciding effect on the DNA breathing dynamics, not the number of times a particular stacking interaction appears in a sequence. We show that the sensitivity analysis can be used as an effective measure to guide a stochastic optimization technique to find the kinetic rate constants related to the dynamics, as opposed to the case where the rate constants are measured using the conventional unbiased way of optimization.
NASA Astrophysics Data System (ADS)
Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi
2016-06-01
The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost to resolve repeatedly the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than the one of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
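The third level of adaptivity, stepwise regression, can be sketched independently of the PDD machinery: greedily add whichever candidate basis function is most correlated with the current residual, then refit by least squares. The monomial basis below is a simplified stand-in for the PDD/ANOVA component functions of the paper.

```python
import numpy as np
from itertools import combinations_with_replacement

def monomial_basis(X, degree):
    """All monomials of the input columns up to 'degree' (a stand-in
    for PDD component functions); returns the design matrix and names."""
    n, d = X.shape
    cols, names = [np.ones(n)], [()]
    for k in range(1, degree + 1):
        for idx in combinations_with_replacement(range(d), k):
            col = np.ones(n)
            for i in idx:
                col = col * X[:, i]
            cols.append(col)
            names.append(idx)
    return np.column_stack(cols), names

def stepwise_fit(X, y, degree=3, max_terms=10):
    """Forward stepwise regression: keep only the most influential
    polynomials, so the surrogate stays sparse and each refit is cheap."""
    Phi, names = monomial_basis(X, degree)
    norms = np.linalg.norm(Phi, axis=0)
    active, resid = [0], y - y.mean()
    for _ in range(max_terms):
        score = np.abs(Phi.T @ resid) / norms
        score[active] = 0.0
        active.append(int(np.argmax(score)))
        coef, *_ = np.linalg.lstsq(Phi[:, active], y, rcond=None)
        resid = y - Phi[:, active] @ coef
    return [names[j] for j in active], coef
```

In practice the stopping rule compares the drop in residual variance against a tolerance rather than using a fixed term budget.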
NASA Astrophysics Data System (ADS)
Stockton, T. B.; Black, P. K.; Catlett, K. M.; Tauxe, J. D.
2002-05-01
Environmental modeling is an essential component in the evaluation of regulatory compliance of radioactive waste management sites (RWMSs) at the Nevada Test Site in southern Nevada, USA. For those sites that are currently operating, further goals are to support integrated decision analysis for the development of acceptance criteria for future wastes, as well as site maintenance, closure, and monitoring. At these RWMSs, the principal pathways for release of contamination to the environment are upward towards the ground surface rather than downwards towards the deep water table. Biotic processes, such as burrow excavation and plant uptake and turnover, dominate this upward transport. A combined multi-pathway contaminant transport and risk assessment model was constructed using the GoldSim modeling platform. This platform facilitates probabilistic analysis of environmental systems, and is especially well suited for assessments involving radionuclide decay chains. The model employs probabilistic definitions of key parameters governing contaminant transport, with the goals of quantifying cumulative uncertainty in the estimation of performance measures and providing information necessary to perform sensitivity analyses. This modeling differs from previous radiological performance assessments (PAs) in that the modeling parameters are intended to be representative of the current knowledge, and the uncertainty in that knowledge, of parameter values rather than reflective of a conservative assessment approach. While a conservative PA may be sufficient to demonstrate regulatory compliance, a parametrically honest PA can also be used for more general site decision-making. In particular, a parametrically honest probabilistic modeling approach allows both uncertainty and sensitivity analyses to be explicitly coupled to the decision framework using a single set of model realizations. For example, sensitivity analysis provides a guide for analyzing the value of collecting more information by quantifying the relative importance of each input parameter in predicting the model response. However, in these complex, high dimensional eco-system models, represented by the RWMS model, the dynamics of the systems can act in a non-linear manner. Quantitatively assessing the importance of input variables becomes more difficult as the dimensionality, the non-linearities, and the non-monotonicities of the model increase. Methods from data mining such as Multivariate Adaptive Regression Splines (MARS) and the Fourier Amplitude Sensitivity Test (FAST) provide tools that can be used in global sensitivity analysis in these high dimensional, non-linear situations. The enhanced interpretability of model output provided by the quantitative measures estimated by these global sensitivity analysis tools will be demonstrated using the RWMS model.
Surrogate-based Analysis and Optimization
NASA Technical Reports Server (NTRS)
Queipo, Nestor V.; Haftka, Raphael T.; Shyy, Wei; Goel, Tushar; Vaidyanathan, Raj; Tucker, P. Kevin
2005-01-01
A major challenge to the successful full-scale development of modern aerospace systems is to address competing objectives such as improved performance, reduced costs, and enhanced safety. Accurate, high-fidelity models are typically time consuming and computationally expensive. Furthermore, informed decisions should be made with an understanding of the impact (global sensitivity) of the design variables on the different objectives. In this context, the so-called surrogate-based approach for analysis and optimization can play a very valuable role. The surrogates are constructed using data drawn from high-fidelity models, and provide fast approximations of the objectives and constraints at new design points, thereby making sensitivity and optimization studies feasible. This paper provides a comprehensive discussion of the fundamental issues that arise in surrogate-based analysis and optimization (SBAO), highlighting concepts, methods, techniques, as well as practical implications. The issues addressed include the selection of the loss function and regularization criteria for constructing the surrogates, design of experiments, surrogate selection and construction, sensitivity analysis, convergence, and optimization. The multi-objective optimal design of a liquid rocket injector is presented to highlight the state of the art and to help guide future efforts.
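A minimal end-to-end pass through the SBAO loop (design of experiments, surrogate fit, optimization on the surrogate) is sketched below with an invented two-variable objective standing in for a high-fidelity simulation; practical applications would use richer surrogates such as kriging or radial basis functions and iterate with infill points.

```python
import numpy as np
from scipy.optimize import minimize

def expensive_model(x):
    """Placeholder for one high-fidelity run (e.g. a CFD analysis)."""
    return (x[0] - 0.3) ** 2 + 2.0 * (x[1] + 0.1) ** 2 + 0.05 * np.sin(8 * x[0])

rng = np.random.default_rng(3)
X = rng.uniform(-1.0, 1.0, size=(40, 2))            # design of experiments
y = np.array([expensive_model(x) for x in X])       # 40 expensive runs

def features(x):
    """Full quadratic basis in two variables for the surrogate."""
    x1, x2 = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2], axis=-1)

beta, *_ = np.linalg.lstsq(features(X), y, rcond=None)   # fit surrogate
surrogate = lambda x: features(np.asarray(x)) @ beta

res = minimize(surrogate, x0=np.zeros(2), bounds=[(-1, 1), (-1, 1)])
print(res.x, expensive_model(res.x))   # candidate design, verified once
```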
NASA Astrophysics Data System (ADS)
Lachhwani, Kailash; Poonia, Mahaveer Prasad
2012-08-01
In this paper, we present a procedure for solving multilevel fractional programming problems in a large hierarchical decentralized organization using a fuzzy goal programming approach. In the proposed method, the tolerance membership functions for the fuzzily described numerator and denominator parts of the objective functions of all levels, as well as the control vectors of the higher-level decision makers, are respectively defined by determining the individual optimal solutions of each of the level decision makers. A possible relaxation of the higher-level decision is considered to avoid decision deadlock due to the conflicting nature of the objective functions. Then, the fuzzy goal programming approach is used to achieve the highest degree of each of the membership goals by minimizing negative deviational variables. We also provide a sensitivity analysis with variation of tolerance values on the decision vectors to show how the solution is sensitive to changes of tolerance values, with the help of a numerical example.
A learning framework for age rank estimation based on face images with scattering transform.
Chang, Kuang-Yu; Chen, Chu-Song
2015-03-01
This paper presents a cost-sensitive ordinal hyperplanes ranking algorithm for human age estimation based on face images. The proposed approach exploits relative-order information among the age labels for rank prediction. In our approach, the age rank is obtained by aggregating a series of binary classification results, where cost sensitivities among the labels are introduced to improve the aggregating performance. In addition, we give a theoretical analysis on designing the cost of each individual binary classifier so that the misranking cost can be bounded by the total misclassification costs. An efficient descriptor, the scattering transform, which scatters the Gabor coefficients and pools them with Gaussian smoothing in multiple layers, is evaluated for facial feature extraction. We show that this descriptor is a generalization of conventional bioinspired features and is more effective for face-based age inference. Experimental results demonstrate that our method outperforms the state-of-the-art age estimation approaches.
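The aggregation step of such an ordinal ranking scheme is compact: binary classifier k answers "older than rank k?", and the predicted rank counts the positive answers; an absolute-error cost matrix is one common way to weight the binary subproblems. The snippet is a schematic of that reduction with hypothetical classifier outputs, not the trained system of the paper.

```python
import numpy as np

def aggregate_rank(binary_votes):
    """Ordinal rank from a cascade of binary 'older than k?' answers:
    rank = 1 + number of positive votes."""
    return 1 + int(np.sum(binary_votes))

def absolute_cost_matrix(n_ranks):
    """Cost of predicting rank j when the truth is rank i, here |i - j|,
    used to weight the training examples of each binary subproblem."""
    i, j = np.meshgrid(np.arange(n_ranks), np.arange(n_ranks), indexing="ij")
    return np.abs(i - j)

votes = np.array([1, 1, 1, 0, 0])   # hypothetical outputs of 5 classifiers
print(aggregate_rank(votes))        # -> 4
print(absolute_cost_matrix(4))
```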
Functional organization of the face-sensitive areas in human occipital-temporal cortex.
Shao, Hanyu; Weng, Xuchu; He, Sheng
2017-08-15
Human occipital-temporal cortex features several areas sensitive to faces, presumably forming the biological substrate for face perception. To date, there are piecemeal insights regarding the functional organization of these regions. They have come, however, from studies that are far from homogeneous with regard to the regions involved, the experimental design, and the data analysis approach. In order to provide an overall view of the functional organization of the face-sensitive areas, it is necessary to conduct a comprehensive study that taps into the pivotal functional properties of all the face-sensitive areas, within the context of the same experimental design, and uses multiple data analysis approaches. In this study, we identified the most robustly activated face-sensitive areas in bilateral occipital-temporal cortices (i.e., AFP, aFFA, pFFA, OFA, pcSTS, pSTS) and systematically compared their regionally averaged activation and multivoxel activation patterns to 96 images from 16 object categories, including faces and non-faces. This condition-rich and single-image analysis approach critically samples the functional properties of a brain region, allowing us to test how two basic functional properties, namely face-category selectivity and face-exemplar sensitivity, are distributed among these regions. Moreover, by examining the correlational structure of neural responses to the 96 images, we characterize their interactions in the greater face-processing network. We found that (1) r-pFFA showed the highest face-category selectivity, followed by l-pFFA, bilateral aFFA and OFA, and then bilateral pcSTS. In contrast, bilateral AFP and pSTS showed low face-category selectivity; (2) l-aFFA, l-pcSTS and bilateral AFP showed evidence of face-exemplar sensitivity; (3) r-OFA showed high overall response similarities with bilateral LOC and r-pFFA, suggesting it might be a transitional stage between general and face-selective information processing; (4) r-aFFA showed high face-selective response similarity with r-pFFA and r-OFA, indicating it was specifically involved in processing face information. Results also reveal two properties of these face-sensitive regions across the two hemispheres: (1) the averaged left intra-hemispheric response similarity for the images was lower than the averaged right intra-hemispheric and the inter-hemispheric response similarity, implying convergence of face processing towards the right hemisphere, and (2) the response similarities between homologous regions in the two hemispheres decreased as information processing proceeded from the early, more posterior, processing stage (OFA), indicating an increasing degree of hemispheric specialization and right-hemisphere bias for face information processing. This study contributes to an emerging picture of how faces are processed within the occipital and temporal cortex.
NASA Technical Reports Server (NTRS)
Price J. M.; Ortega, R.
1998-01-01
Probabilistic methods are not a universally accepted approach for the design and analysis of aerospace structures. The validity of this approach must be demonstrated to encourage its acceptance as a viable design and analysis tool for estimating structural reliability. The objective of this study is to develop a well-characterized finite population of similar aerospace structures that can be used to (1) validate probabilistic codes, (2) demonstrate the basic principles behind probabilistic methods, (3) formulate general guidelines for characterization of material drivers (such as elastic modulus) when limited data are available, and (4) investigate how the drivers affect the results of sensitivity analysis at the component/failure mode level.
NASA Astrophysics Data System (ADS)
Sakata, Kenichi
The plasma interface is considered the most mysterious part of an inductively coupled plasma mass spectrometer system in terms of understanding its operational mechanism. After a brief explanation of the basic structure of the inductively coupled plasma mass spectrometer and how it works, the plasma interface is discussed in regard to its complex operation and approaches to investigating its behavior. In particular, the position and shape of the plasma boundary seem to be important for understanding the instrument's sensitivity.
Genomic Methods for Clinical and Translational Pain Research
Wang, Dan; Kim, Hyungsuk; Wang, Xiao-Min; Dionne, Raymond
2012-01-01
Pain is a complex sensory experience for which the molecular mechanisms are yet to be fully elucidated. Individual differences in pain sensitivity are mediated by a complex network of multiple gene polymorphisms, physiological and psychological processes, and environmental factors. Here, we present the methods for applying unbiased molecular-genetic approaches, genome-wide association study (GWAS), and global gene expression analysis, to help better understand the molecular basis of pain sensitivity in humans and variable responses to analgesic drugs. PMID:22351080
Approaching attometer laser vibrometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rembe, Christian; Kadner, Lisa; Giesen, Moritz
2014-05-27
The heterodyne two-beam interferometer has been proven to be the optimal solution for laser-Doppler vibrometry regarding accuracy and signal robustness. The theoretical resolution limit for a two-beam interferometer of laser class 3R (up to 5 mW visible measurement light) is in the regime of a few femtometers per square-root Hertz and well suited to study vibrations in microstructures. However, some new applications of RF-MEM resonators, nanostructures, and surface-nano-defect detection require resolutions beyond that limit. The resolution depends only on the noise and the sensor sensitivity to specimen displacements. In today's systems with a properly designed optical sensor, the noise is already defined by the quantum nature of light, and more light would lead to an unacceptable influence such as heating of a very tiny structure. Thus, noise can only be improved by squeezed-light techniques, which require a negligible loss of measurement light; this is impossible for almost all technical measurement tasks. Improving the sensitivity is therefore the only possible path that could make attometer laser vibrometry possible. Decreasing the measurement wavelength would increase the sensitivity but would also increase the photon shot noise. In this paper, we discuss an approach to increase the sensitivity by assembling an additional mirror between interferometer and specimen to form an optical cavity. A detailed theoretical analysis of this setup is presented; we derive the resolution limit, discuss the main contributions to the uncertainty budget, and show a first experiment proving the sensitivity amplification of our approach.
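For orientation, one common textbook-style form of the shot-noise-limited displacement resolution of a two-beam interferometer (not necessarily the paper's exact expression; η is the detector quantum efficiency, P the detected optical power, ν the optical frequency) is

$$ \delta x_{\min} \;\approx\; \frac{\lambda}{4\pi}\sqrt{\frac{2h\nu}{\eta\,P}} \qquad \left[\mathrm{m}/\sqrt{\mathrm{Hz}}\right], $$

which for λ = 633 nm, η ≈ 1 and P = 1 mW gives roughly 1.3 fm/√Hz, consistent with the few-femtometer figure quoted above. Inserting a cavity of finesse F between interferometer and specimen multiplies the phase shift per unit displacement by roughly 2F/π, reducing δx_min by about the same factor while leaving the shot noise unchanged.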
Roman, Yblin E; De Schamphelaere, Karel A C; Nguyen, Lien T H; Janssen, Colin R
2007-11-15
Five benthic organisms commonly used for sediment toxicity testing were chronically (28 to 35 days) exposed to copper in standard laboratory-formulated sediment (following Organization for Economic Cooperation and Development guidelines) and lethal and sub-lethal toxicities were evaluated. Sub-lethal endpoints considered were reproduction and biomass production for Lumbriculus variegatus, growth and reproduction for Tubifex tubifex, growth and emergence for Chironomus riparius, and growth for Gammarus pulex and Hyalella azteca. Expressed on a whole-sediment basis, the observed lethal sensitivity ranking (from most to least sensitive) was: G. pulex>L. variegatus>H. azteca=C. riparius=T. tubifex, with median chronic lethal concentrations (LC50) between 151 and 327 mg/kg dry wt. The sub-lethal sensitivity ranking (from most to least sensitive, with the most sensitive endpoint in parentheses) was: C. riparius (emergence)>T. tubifex (reproduction)=L. variegatus (reproduction)>G. pulex (growth)>H. azteca (growth), with median effective concentrations (EC50) between 59.2 and 194 mg/kg dry wt. No observed effect concentrations (NOEC) or 10% effective concentrations (EC10) for the five benthic invertebrates were used to perform a preliminary risk assessment for copper in freshwater sediment by means of (a) the "assessment factor approach" or (b) the statistical extrapolation approach (species sensitivity distribution). Depending on the data (NOEC or EC10) and the methodology used, we calculated a Predicted No Effect Concentration (PNEC) for sediment between 3.3 and 47.1 mg Cu/kg dry wt. This range is similar to the range of natural (geochemical) background concentrations of copper in sediments in Europe, i.e., 90% of sediments have a concentration between 5 and 49 mg Cu/kg dry wt. A detailed analysis of the outcome of this preliminary exercise highlighted that multiple issues need to be explored to achieve a scientifically more sound risk assessment and to develop robust sediment quality criteria for copper, including (i) the use of the assessment factor approach vs. the statistical extrapolation approach, (ii) the importance of bioavailability-modifying factors (e.g., organic carbon, acid volatile sulfide), and (iii) the influence of prevailing geochemical (bioavailable) background concentrations on the copper sensitivity of local benthic biota.
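The statistical extrapolation step in (b) is commonly implemented by fitting a log-normal species sensitivity distribution to the chronic NOEC/EC10 values and reading off the 5th percentile (HC5), which is then divided by an assessment factor to give the PNEC. A minimal sketch with made-up example values (the paper's actual EC10 data are not reproduced here):

```python
import numpy as np
from scipy import stats

# Hypothetical chronic EC10 values for five benthic species (mg Cu/kg dry wt).
ec10 = np.array([25.0, 40.0, 55.0, 90.0, 140.0])

log10_ec10 = np.log10(ec10)
mu, sigma = log10_ec10.mean(), log10_ec10.std(ddof=1)

# HC5: concentration expected to protect 95% of species under the fitted SSD.
hc5 = 10 ** stats.norm.ppf(0.05, loc=mu, scale=sigma)

assessment_factor = 2.0            # placeholder; chosen case by case in practice
pnec = hc5 / assessment_factor
print(f"HC5 = {hc5:.1f}, PNEC = {pnec:.1f} mg Cu/kg dry wt")
```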
Krolewiecki, Alejandro J; Koukounari, Artemis; Romano, Miryam; Caro, Reynaldo N; Scott, Alan L; Fleitas, Pedro; Cimino, Ruben; Shiff, Clive J
2018-06-01
For epidemiological work with soil-transmitted helminths, the recommended diagnostic approach is to examine fecal samples for microscopic evidence of the parasite. In addition to several logistical and processing issues, traditional diagnostic approaches have been shown to lack the sensitivity required to reliably identify patients harboring low-level infections, such as those associated with effective mass drug intervention programs. In this context, there is a need to rethink the approaches used for helminth diagnostics. Serological methods are now in use; however, these tests are indirect and depend on individual immune responses, exposure patterns and the nature of the antigen. It has been demonstrated, though, that cell-free DNA from pathogens and cancers can be readily detected in patients' urine, which can be collected in the field, filtered in situ and processed later for analysis. In the work presented here, we employ three diagnostic procedures (stool examination, serology with NIE-ELISA, and PCR-based amplification of parasite transrenal DNA from urine) to determine their relative utility in the diagnosis of S. stercoralis infections from 359 field samples from an endemic area of Argentina. Bayesian latent class analysis was used to assess the relative performance of the three diagnostic procedures. The results underscore the low sensitivity of stool examination and support the idea that the use of serology combined with parasite transrenal DNA detection may be a useful strategy for sensitive and specific detection of low-level strongyloidiasis.
Han, Daehoon; Hong, Jinkee; Kim, Hyun Cheol; Sung, Jong Hwan; Lee, Jong Bum
2013-11-01
Many highly sensitive protein detection techniques have been developed and have played an important role in the analysis of proteins. Herein, we report a novel technique that can detect proteins sensitively and effectively using aptamer-based DNA nanostructures. Thrombin was used as a target protein and aptamer was used to capture fluorescent dye-labeled DNA nanobarcodes or thrombin on a microsphere. The captured DNA nanobarcodes were replaced by a thrombin and aptamer interaction. The detection ability of this approach was confirmed by flow cytometry with different concentrations of thrombin. Our detection method has great potential for rapid and simple protein detection with a variety of aptamers.
Receiver operating characteristic analysis of age-related changes in lineup performance.
Humphries, Joyce E; Flowe, Heather D
2015-04-01
In the basic face memory literature, support has been found for the late maturation hypothesis, which holds that face recognition ability is not fully developed until at least adolescence. Support for the late maturation hypothesis in the criminal lineup identification literature, however, has been equivocal because of the analytic approach that has been used to examine age-related changes in identification performance. Recently, receiver operating characteristic (ROC) analysis was applied for the first time in the adult eyewitness memory literature to examine whether memory sensitivity differs across different types of lineup tests. ROC analysis allows for the separation of memory sensitivity from response bias in the analysis of recognition data. Here, we have made the first ROC-based comparison of adults' and children's (5- and 6-year-olds and 9- and 10-year-olds) memory performance on lineups by reanalyzing data from Humphries, Holliday, and Flowe (2012). In line with the late maturation hypothesis, memory sensitivity was significantly greater for adults compared with young children. Memory sensitivity for older children was similar to that for adults. The results indicate that the late maturation hypothesis can be generalized to account for age-related performance differences on an eyewitness memory task. The implications for developmental eyewitness memory research are discussed. Copyright © 2014 Elsevier Inc. All rights reserved.
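ROC analysis of lineup data treats each identification made with confidence c as positive at all thresholds at or below c, tracing correct-identification rate against false-identification rate; the area under the curve then indexes memory sensitivity independent of response bias. A hedged sketch with simulated confidence ratings (not the reanalyzed data from Humphries et al.):

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(1)

# Simulated confidence ratings (0-10) for culprit-present vs culprit-absent IDs.
n = 200
conf_guilty = np.clip(rng.normal(7, 2, n), 0, 10)    # suspect is the culprit
conf_innocent = np.clip(rng.normal(4, 2, n), 0, 10)  # suspect is innocent

y_true = np.concatenate([np.ones(n), np.zeros(n)])
y_score = np.concatenate([conf_guilty, conf_innocent])

# Sweep the confidence criterion to trace the full ROC curve.
fpr, tpr, _ = roc_curve(y_true, y_score)
print(f"AUC (memory sensitivity, free of response bias) = {auc(fpr, tpr):.3f}")
```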
Mechelke, Matthias; Herlet, Jonathan; Benz, J Philipp; Schwarz, Wolfgang H; Zverlov, Vladimir V; Liebl, Wolfgang; Kornberger, Petra
2017-12-01
The rising importance of accurately detecting oligosaccharides in biomass hydrolyzates or as ingredients in food, such as in beverages and infant milk products, demands the availability of tools to sensitively analyze the broad range of available oligosaccharides. Over the last decades, HPAEC-PAD has developed into one of the major technologies for this task and represents a popular alternative to state-of-the-art LC-MS oligosaccharide analysis. This work presents the first comprehensive study giving an overview of the separation of 38 analytes as well as enzymatic hydrolyzates of six different polysaccharides, focusing on oligosaccharides. The high sensitivity of the PAD comes at the cost of its stability, due to recession of the gold electrode. By an in-depth analysis of the sensitivity drop over time for 35 analytes, including xylo- (XOS), arabinoxylo- (AXOS), laminari- (LOS), manno- (MOS), glucomanno- (GMOS), and cellooligosaccharides (COS), we developed an analyte-specific one-phase decay model for this effect over time. Using this model resulted in significantly improved data normalization when using an internal standard. Our results thereby allow a quantification approach which takes the inevitable and analyte-specific PAD response drop into account. Graphical abstract: HPAEC-PAD analysis of oligosaccharides and determination of the PAD response drop, leading to improved data normalization.
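An analyte-specific one-phase decay model of this kind is the standard three-parameter exponential, response(t) = plateau + (y0 − plateau)·exp(−k·t); fitting it per analyte lets the internal-standard normalization be corrected for the PAD response drop. A sketch with synthetic response data (parameter values and names are illustrative, not the paper's):

```python
import numpy as np
from scipy.optimize import curve_fit

def one_phase_decay(t, y0, plateau, k):
    """PAD response vs. electrode run time (one-phase exponential decay)."""
    return plateau + (y0 - plateau) * np.exp(-k * t)

# Synthetic calibration runs: relative response of one analyte over time (h).
t = np.linspace(0, 100, 25)
rng = np.random.default_rng(2)
y = one_phase_decay(t, 1.00, 0.55, 0.04) + rng.normal(0, 0.01, t.size)

popt, _ = curve_fit(one_phase_decay, t, y, p0=(1.0, 0.5, 0.05))
y0, plateau, k = popt

# Correction: divide a measured peak area by the predicted relative response
# at its acquisition time to undo the analyte-specific sensitivity drop.
t_meas = 60.0
print("relative response at t = 60 h:", one_phase_decay(t_meas, *popt) / y0)
```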
Michaelis, Marc; Leopold, Claudia S
2015-12-30
The tack of a pressure-sensitive adhesive (PSA) is not an inherent material property and strongly depends on the measurement conditions. Following the concept of a measurement system analysis (MSA), influencing factors of the probe tack test were investigated by a design of experiments (DoE) approach. A response surface design with 38 runs was built to evaluate the influence of detachment speed, dwell time, contact force, adhesive film thickness and API content on tack, determined as the maximum of the stress-strain curve (σmax). It could be shown that all investigated factors have a significant effect on the response and that the DoE approach allowed the detection of two-factor interactions between the dwell time, the contact force, the adhesive film thickness and the API content. Surprisingly, it was found that tack increases with decreasing, not increasing, adhesive film thickness. Copyright © 2015. Published by Elsevier B.V.
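A response-surface DoE of this kind is typically analyzed by ordinary least squares with main effects and two-factor interaction terms; significant interactions are exactly what the study reports between dwell time, contact force, film thickness, and API content. A minimal sketch on simulated data (factor names follow the abstract; the data and effect sizes are invented):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 38  # same run count as the design in the study

df = pd.DataFrame({
    "speed": rng.uniform(0.1, 10, n),   # detachment speed
    "dwell": rng.uniform(1, 60, n),     # dwell time
    "force": rng.uniform(1, 10, n),     # contact force
    "thick": rng.uniform(20, 200, n),   # adhesive film thickness
    "api":   rng.uniform(0, 30, n),     # API content
})
# Invented response: note the negative thickness effect, as found in the paper.
df["tack"] = (0.5 * df.speed + 0.05 * df.dwell * df.force
              - 0.01 * df.thick + rng.normal(0, 0.5, n))

# 'dwell * force' expands to both main effects plus their interaction term.
model = smf.ols("tack ~ speed + dwell * force + thick + api", data=df).fit()
print(model.summary().tables[1])  # coefficients with p-values
```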
Ciccimaro, Eugene; Ranasinghe, Asoka; D'Arienzo, Celia; Xu, Carrie; Onorato, Joelle; Drexler, Dieter M; Josephs, Jonathan L; Poss, Michael; Olah, Timothy
2014-12-02
Due to observed collision induced dissociation (CID) fragmentation inefficiency, developing sensitive liquid chromatography tandem mass spectrometry (LC-MS/MS) assays for CID resistant compounds is especially challenging. As an alternative to traditional LC-MS/MS, we present here a methodology that preserves the intact analyte ion for quantification by selectively filtering ions while reducing chemical noise. Utilizing a quadrupole-Orbitrap MS, the target ion is selectively isolated while interfering matrix components undergo MS/MS fragmentation by CID, allowing noise-free detection of the analyte's surviving molecular ion. In this manner, CID affords additional selectivity during high resolution accurate mass analysis by elimination of isobaric interferences, a fundamentally different concept than the traditional approach of monitoring a target analyte's unique fragment following CID. This survivor-selected ion monitoring (survivor-SIM) approach has allowed sensitive and specific detection of disulfide-rich cyclic peptides extracted from plasma.
Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arampatzis, Georgios, E-mail: garab@math.uoc.gr; Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003; Katsoulakis, Markos A., E-mail: markos@math.umass.edu
2014-03-28
In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz-Kalos-Lebowitz algorithm's philosophy, where events are divided in classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples including adsorption, desorption, and diffusion Kinetic Monte Carlo that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB source code.
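The Common Random Number baseline mentioned in the closing comparison is straightforward to sketch: the perturbed and unperturbed chains are simulated with identical random streams, so the finite-difference estimator of the sensitivity has strongly reduced variance relative to independent sampling. A toy birth-death illustration (not lattice KMC, and not the goal-oriented coupling itself):

```python
import numpy as np

def gillespie_final(birth, death, T, rng):
    """Simulate a birth-death CTMC up to time T; return the final population."""
    t, n = 0.0, 10
    while True:
        total = birth + death * n
        t += rng.exponential(1.0 / total)
        if t > T:
            return n
        n += 1 if rng.uniform() < birth / total else -1

def fd_sensitivity(eps, coupled, samples=2000, seed=4):
    """Estimate d E[n_T] / d(birth) by finite differences, with/without CRN."""
    master = np.random.default_rng(seed)
    est = []
    for _ in range(samples):
        s = int(master.integers(1 << 30))
        r1 = np.random.default_rng(s)
        r2 = np.random.default_rng(s if coupled else s + 1)  # CRN shares stream
        est.append((gillespie_final(1.0 + eps, 0.1, 5.0, r1)
                    - gillespie_final(1.0, 0.1, 5.0, r2)) / eps)
    est = np.array(est)
    return est.mean(), est.std(ddof=1) / np.sqrt(samples)

for coupled in (False, True):
    mean, se = fd_sensitivity(0.05, coupled)
    print(f"CRN coupling={coupled}: sensitivity ~ {mean:.2f} +/- {se:.2f}")
```

The coupled run shows a markedly smaller standard error for the same sample count; the paper's goal-oriented coupling improves on this baseline by tailoring the coupling to the observable.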
Computer-Aided Design/Manufacturing (CAD/M) for High-Speed Interconnect.
1981-10-01
...are frequency sensitive and hence lend themselves to frequency-domain analysis. Most of the classical microwave analysis is handled in the frequency ... capability integrated into a time-domain analysis program. This approach allows determination of frequency-dependent transmission line (interconnect) ... One of the items to consider in any interconnect study is that of the frequency range of interest. This determines whether the interconnections must be treated
Mosely, Jackie A; Stokes, Peter; Parker, David; Dyer, Philip W; Messinis, Antonis M
2018-02-01
A novel method has been developed that enables chemical compounds to be transferred from an inert atmosphere glove box into the atmospheric pressure ion source of a mass spectrometer whilst retaining a controlled chemical environment. This innovative method is simple and cheap to implement on some commercially available mass spectrometers. We have termed this approach the inert atmospheric pressure solids analysis probe (iASAP) and demonstrate the benefit of this methodology for two air-/moisture-sensitive chemical compounds whose characterisation by mass spectrometry is now possible and easily achieved. The simplicity of the design means that moving between iASAP and standard ASAP is straightforward and quick, providing a highly flexible platform with rapid sample turnaround.
NASA Technical Reports Server (NTRS)
Greene, William H.
1990-01-01
A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed mode approach resulted in very poor approximations of the stress sensitivities. Almost all of the original modes were required for an accurate sensitivity and for small numbers of modes, the accuracy was extremely poor. To overcome this poor accuracy, two semi-analytical techniques were developed. The first technique accounts for the change in eigenvectors through approximate eigenvector derivatives. The second technique applies the mode acceleration method of transient analysis to the sensitivity calculations. Both result in accurate values of the stress sensitivities with a small number of modes and much lower computational costs than if the vibration modes were recalculated and then used in an overall finite difference method.
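The semi-analytical technique described above can be sketched on a small system: differentiate the equations of motion analytically, approximate the coefficient-matrix derivative dK/dp by finite differences, and integrate the response and its sensitivity together. A two-DOF toy problem (illustrative system, not the report's finite element models):

```python
import numpy as np
from scipy.integrate import solve_ivp

M = np.diag([1.0, 1.0])

def K_of(p):  # stiffness depends on a design parameter p (e.g., a spring rate)
    return np.array([[p + 1.0, -p], [-p, p + 2.0]])

p, dp = 3.0, 1e-6
K = K_of(p)
dK = (K_of(p + dp) - K_of(p)) / dp           # semi-analytical: FD on the matrix
f = np.array([1.0, 0.0])                     # step load

def rhs(t, z):
    # State z = [u, v, up, vp]: response u, velocity v, and their sensitivities.
    u, v, up, vp = z[:2], z[2:4], z[4:6], z[6:]
    a = np.linalg.solve(M, f - K @ u)              # M u'' + K u = f
    ap = np.linalg.solve(M, -dK @ u - K @ up)      # differentiated equation
    return np.concatenate([v, a, vp, ap])

sol = solve_ivp(rhs, (0, 5), np.zeros(8), max_step=0.01)
print("u(5)     =", sol.y[:2, -1])
print("du/dp(5) =", sol.y[4:6, -1])
```

Because the sensitivity equation reuses the same operators as the response equation, the extra cost per parameter is one additional forcing term, which is what makes the semi-analytical route attractive for large-order models.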
Probabilistic sensitivity analysis for NICE technology assessment: not an optional extra.
Claxton, Karl; Sculpher, Mark; McCabe, Chris; Briggs, Andrew; Akehurst, Ron; Buxton, Martin; Brazier, John; O'Hagan, Tony
2005-04-01
Recently the National Institute for Clinical Excellence (NICE) updated its methods guidance for technology assessment. One aspect of the new guidance is to require the use of probabilistic sensitivity analysis with all cost-effectiveness models submitted to the Institute. The purpose of this paper is to place the NICE guidance on dealing with uncertainty into a broader context of the requirements for decision making; to explain the general approach that was taken in its development; and to address each of the issues which have been raised in the debate about the role of probabilistic sensitivity analysis in general. The most appropriate starting point for developing guidance is to establish what is required for decision making. On the basis of these requirements, the methods and framework of analysis which can best meet these needs can then be identified. It will be argued that the guidance on dealing with uncertainty and, in particular, the requirement for probabilistic sensitivity analysis, is justified by the requirements of the type of decisions that NICE is asked to make. Given this foundation, the main issues and criticisms raised during and after the consultation process are reviewed. Finally, some of the methodological challenges posed by the need fully to characterise decision uncertainty and to inform the research agenda will be identified and discussed. Copyright (c) 2005 John Wiley & Sons, Ltd.
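In practice, a probabilistic sensitivity analysis propagates distributions over all model inputs by Monte Carlo and summarizes decision uncertainty, for example as the probability that a technology is cost-effective across willingness-to-pay thresholds (a cost-effectiveness acceptability curve). A minimal sketch with an invented two-option decision model (all distributions and values are placeholders):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10_000  # Monte Carlo draws over the model's input distributions

# Invented input distributions (costs in GBP, effects in QALYs).
cost_new = rng.gamma(shape=100, scale=120, size=n)   # mean ~ 12,000
cost_old = rng.gamma(shape=100, scale=100, size=n)   # mean ~ 10,000
eff_new = rng.beta(80, 20, size=n) * 2               # mean ~ 1.6 QALYs
eff_old = rng.beta(70, 30, size=n) * 2               # mean ~ 1.4 QALYs

for wtp in (10_000, 20_000, 30_000):  # willingness to pay per QALY
    # Incremental net monetary benefit for each simulated parameter set.
    inb = wtp * (eff_new - eff_old) - (cost_new - cost_old)
    print(f"P(cost-effective at £{wtp:,}/QALY) = {(inb > 0).mean():.2f}")
```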
Advancing Clinical Proteomics via Analysis Based on Biological Complexes: A Tale of Five Paradigms.
Goh, Wilson Wen Bin; Wong, Limsoon
2016-09-02
Despite advances in proteomic technologies, idiosyncratic data issues, for example, incomplete coverage and inconsistency, resulting in large data holes, persist. Moreover, because of naïve reliance on statistical testing and its accompanying p values, differential protein signatures identified from such proteomics data have little diagnostic power. Thus, deploying conventional analytics on proteomics data is insufficient for identifying novel drug targets or precise yet sensitive biomarkers. Complex-based analysis is a new analytical approach that has potential to resolve these issues but requires formalization. We categorize complex-based analysis into five method classes or paradigms and propose an even-handed yet comprehensive evaluation rubric based on both simulated and real data. The first four paradigms are well represented in the literature. The fifth and newest paradigm, the network-paired (NP) paradigm, represented by a method called Extremely Small SubNET (ESSNET), dominates in precision-recall and reproducibility, maintains strong performance in small sample sizes, and sensitively detects low-abundance complexes. In contrast, the commonly used over-representation analysis (ORA) and direct-group (DG) test paradigms maintain good overall precision but have severe reproducibility issues. The other two paradigms considered here are the hit-rate and rank-based network analysis paradigms; both of these have good precision-recall and reproducibility, but they do not consider low-abundance complexes. Therefore, given its strong performance, NP/ESSNET may prove to be a useful approach for improving the analytical resolution of proteomics data. Additionally, given its stability, it may also be a powerful new approach toward functional enrichment tests, much like its ORA and DG counterparts.
Efficient Analysis of Complex Structures
NASA Technical Reports Server (NTRS)
Kapania, Rakesh K.
2000-01-01
The various accomplishments achieved during this project are: (1) a survey of Neural Network (NN) applications using the MATLAB NN Toolbox in structural engineering, especially on equivalent continuum models (Appendix A); (2) application of NNs and GAs to simulate and synthesize substructures: 1-D and 2-D beam problems (Appendix B); (3) development of an equivalent plate-model analysis method (EPA) for static and vibration analysis of general trapezoidal built-up wing structures composed of skins, spars and ribs, with calculation of a range of test cases and comparison with measurements or FEA results (Appendix C); (4) basic work on using second-order sensitivities to simulate wing modal response, discussion of sensitivity evaluation approaches, and some results (Appendix D); (5) establishment of a general methodology for simulating the modal responses by direct application of NNs and by sensitivity techniques, in a design space composed of a number of design points, with comparison made through examples using these two methods (Appendix E); (6) establishment of a general methodology for efficient analysis of complex wing structures by indirect application of NNs, the NN-aided Equivalent Plate Analysis, with training of the Neural Networks for this purpose in several cases of design spaces, applicable to actual design of complex wings (Appendix F).
Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks
Kaltenbacher, Barbara; Hasenauer, Jan
2017-01-01
Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions are missing so far. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large scale biochemical reaction networks. We present the approach for time-discrete measurement and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351
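The scaling argument is visible in the method itself: for x' = f(x, p) and an endpoint objective J = w·x(T), one backward adjoint solve of λ' = −(∂f/∂x)ᵀλ with λ(T) = w yields every component of dJ/dp = ∫₀ᵀ λᵀ(∂f/∂p) dt at once, regardless of how many parameters p contains. A toy linear-model sketch, checked against finite differences (not the ErbB model from the study):

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

# Toy model: x' = A x + p, so df/dx = A and df/dp = I. Objective J = w . x(T).
A = np.array([[-1.0, 0.3], [0.2, -0.5]])
p = np.array([1.0, 2.0])
w = np.array([1.0, 1.0])
T = 4.0

# Backward adjoint solve: lambda' = -A^T lambda, lambda(T) = w.
adj = solve_ivp(lambda t, lam: -A.T @ lam, (T, 0), w,
                dense_output=True, rtol=1e-9, atol=1e-11)

# dJ/dp = integral of lambda^T (df/dp) dt = integral of lambda dt here.
ts = np.linspace(0, T, 2001)
lam = adj.sol(ts)                       # shape (2, len(ts))
grad = trapezoid(lam, ts, axis=1)

# Verify against forward finite differences (one solve per parameter).
def J(pp):
    s = solve_ivp(lambda t, x: A @ x + pp, (0, T), [0.0, 0.0],
                  rtol=1e-9, atol=1e-11)
    return w @ s.y[:, -1]

fd = [(J(p + 1e-6 * e) - J(p)) / 1e-6 for e in np.eye(2)]
print("adjoint:", grad, " finite difference:", fd)
```

The finite-difference check needs one extra forward solve per parameter, whereas the adjoint gradient is obtained from a single backward solve; this is the cost asymmetry the manuscript exploits at genome scale.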
NASA Astrophysics Data System (ADS)
Oliveira, Miguel; Santos, Cristina P.; Costa, Lino
2012-09-01
In this paper, a study based on sensitivity analysis is performed for a gait multi-objective optimization system that combines bio-inspired Central Patterns Generators (CPGs) and a multi-objective evolutionary algorithm based on NSGA-II. In this system, CPGs are modeled as autonomous differential equations, that generate the necessary limb movement to perform the required walking gait. In order to optimize the walking gait, a multi-objective problem with three conflicting objectives is formulated: maximization of the velocity, the wide stability margin and the behavioral diversity. The experimental results highlight the effectiveness of this multi-objective approach and the importance of the objectives to find different walking gait solutions for the quadruped robot.
Trigueros, José Antonio; Piñero, David P; Ismail, Mahmoud M
2016-01-01
To define the financial and management conditions required to introduce a femtosecond laser system for cataract surgery in a clinic using a fuzzy logic approach. In the simulation performed in the current study, the costs associated with the acquisition and use of a commercially available femtosecond laser platform for cataract surgery (VICTUS, TECHNOLAS Perfect Vision GmbH, Bausch & Lomb, Munich, Germany) during a period of 5y were considered. A sensitivity analysis was performed considering such costs and the accounting amortization of the system during this 5y period. Furthermore, a fuzzy logic analysis was used to obtain an estimation of the money income associated with each femtosecond laser-assisted cataract surgery (G). According to the sensitivity analysis, the femtosecond laser system under evaluation can be profitable if 1400 cataract surgeries are performed per year and if each surgery can be invoiced at more than $500. In contrast, the fuzzy logic analysis indicated that the patient had to pay more, between $661.8 and $667.4 per surgery, without considering the cost of the intraocular lens (IOL). Profitability of femtosecond laser systems for cataract surgery can be achieved after a detailed financial analysis, especially in those centers with large volumes of patients. The cost of the surgery for patients should be adapted to the real flow of patients and their ability to pay within a reasonable range of cost.
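The break-even logic underlying the sensitivity analysis reduces to amortizing the platform and running costs over the annual case volume; all figures below are invented placeholders, not the study's cost inputs:

```python
# All figures are illustrative placeholders, not the paper's cost data.
acquisition = 500_000.0        # platform price, amortized over 5 years
service_per_year = 40_000.0    # maintenance contract
per_case_consumables = 150.0   # patient interface, etc.
surgeries_per_year = 1400

annual_cost = (acquisition / 5 + service_per_year
               + per_case_consumables * surgeries_per_year)
breakeven_fee = annual_cost / surgeries_per_year
print(f"minimum fee per surgery to break even: ${breakeven_fee:,.0f}")
```

The fuzzy logic analysis adds a margin on top of this bare break-even fee, which is consistent with its higher per-surgery figure.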
Sensitivity Analysis and Optimization of Enclosure Radiation with Applications to Crystal Growth
NASA Technical Reports Server (NTRS)
Tiller, Michael M.
1995-01-01
In engineering, simulation software is often used as a convenient means for carrying out experiments to evaluate physical systems. The benefit of using simulations as 'numerical' experiments is that the experimental conditions can be easily modified and repeated at much lower cost than the comparable physical experiment. The goal of these experiments is to 'improve' the process or result of the experiment. In most cases, the computational experiments employ the same trial-and-error approach as their physical counterparts. When using this approach for complex systems, the cause-and-effect relationship of the system may never be fully understood and efficient strategies for improvement never utilized. However, it is possible when running simulations to accurately and efficiently determine the sensitivity of the system results with respect to simulation parameters (e.g., initial conditions, boundary conditions, and material properties) by manipulating the underlying computations. This results in a better understanding of the system dynamics and gives us efficient means to improve processing conditions. We begin by discussing the steps involved in performing simulations. Then we consider how sensitivity information about simulation results can be obtained and ways this information may be used to improve the process or result of the experiment. Next, we discuss optimization and the efficient algorithms which use sensitivity information. We draw on all this information to propose a generalized approach for integrating simulation and optimization, with an emphasis on software programming issues. After discussing our approach to simulation and optimization we consider an application involving crystal growth. This application is interesting because it includes radiative heat transfer. We discuss the computation of radiative view factors and the impact this mode of heat transfer has on our approach. Finally, we demonstrate the results of our optimization.
Is probabilistic bias analysis approximately Bayesian?
MacLehose, Richard F.; Gustafson, Paul
2011-01-01
Case-control studies are particularly susceptible to differential exposure misclassification when exposure status is determined following incident case status. Probabilistic bias analysis methods have been developed as ways to adjust standard effect estimates based on the sensitivity and specificity of exposure misclassification. The iterative sampling method advocated in probabilistic bias analysis bears a distinct resemblance to a Bayesian adjustment; however, it is not identical. Furthermore, without a formal theoretical framework (Bayesian or frequentist), the results of a probabilistic bias analysis remain somewhat difficult to interpret. We describe, both theoretically and empirically, the extent to which probabilistic bias analysis can be viewed as approximately Bayesian. While the differences between probabilistic bias analysis and Bayesian approaches to misclassification can be substantial, these situations often involve unrealistic prior specifications and are relatively easy to detect. Outside of these special cases, probabilistic bias analysis and Bayesian approaches to exposure misclassification in case-control studies appear to perform equally well. PMID:22157311
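The iterative sampling method being compared to a Bayesian adjustment can be sketched directly: draw sensitivity and specificity of exposure classification from prior (bias) distributions, invert the misclassification model to reconstruct expected true exposure counts, and collect the corrected odds ratios. A minimal sketch with invented case-control counts and illustrative beta priors:

```python
import numpy as np

rng = np.random.default_rng(6)

# Observed exposed/unexposed counts (invented data).
a_obs, b_obs = 120, 380   # cases: exposed, unexposed
c_obs, d_obs = 90, 410    # controls: exposed, unexposed

ors = []
for _ in range(20_000):
    # Differential misclassification: separate priors for cases and controls.
    se1, sp1 = rng.beta(80, 20), rng.beta(95, 5)   # cases
    se0, sp0 = rng.beta(85, 15), rng.beta(95, 5)   # controls
    n1, n0 = a_obs + b_obs, c_obs + d_obs
    # Back-correct: a_obs = Se*A + (1-Sp)*(n - A)  =>  solve for the true A.
    A = (a_obs - (1 - sp1) * n1) / (se1 - (1 - sp1))
    C = (c_obs - (1 - sp0) * n0) / (se0 - (1 - sp0))
    if 0 < A < n1 and 0 < C < n0:                  # discard impossible draws
        ors.append((A * (n0 - C)) / ((n1 - A) * C))

ors = np.array(ors)
print("corrected OR median [2.5%, 97.5%]:",
      np.percentile(ors, [50, 2.5, 97.5]).round(2))
```

The discarding of impossible draws is one place where this procedure departs from a formal Bayesian posterior, which is part of the paper's point.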
NASA Astrophysics Data System (ADS)
Boschetto, Davide; Di Claudio, Gianluca; Mirzaei, Hadis; Leong, Rupert; Grisan, Enrico
2016-03-01
Celiac disease (CD) is an immune-mediated enteropathy triggered by exposure to gluten and similar proteins, affecting genetically susceptible persons and increasing their risk of different complications. Small bowel mucosal damage due to CD involves various degrees of endoscopically relevant lesions, which are not easily recognized: their overall sensitivity and positive predictive values are poor even when zoom endoscopy is used. Confocal Laser Endomicroscopy (CLE) allows skilled and trained experts to qualitatively evaluate mucosal alterations such as a decrease in goblet cell density, presence of villous atrophy or crypt hypertrophy. We present a method for automatically classifying CLE images into three different classes: normal regions, villous atrophy and crypt hypertrophy. This classification is performed after a feature selection process, in which four features are extracted from each image through the application of homomorphic filtering and border identification with Canny and Sobel operators. Three different classifiers have been tested on a dataset of 67 different images labeled by experts in three classes (normal, VA and CH): a linear approach, a naive-Bayes quadratic approach, and a standard quadratic analysis, all validated with ten-fold cross-validation. Linear classification achieves 82.09% accuracy (class accuracies: 90.32% for normal villi, 82.35% for VA and 68.42% for CH; sensitivity: 0.68, specificity: 1.00), naive-Bayes analysis returns 83.58% accuracy (90.32% for normal villi, 70.59% for VA and 84.21% for CH; sensitivity: 0.84, specificity: 0.92), while the quadratic analysis achieves a final accuracy of 94.03% (96.77% accuracy for normal villi, 94.12% for VA and 89.47% for CH; sensitivity: 0.89, specificity: 0.98).
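The classification stage is standard discriminant analysis under ten-fold cross-validation; a sketch with stand-in features follows (the homomorphic filtering and Canny/Sobel feature extraction are not reproduced here):

```python
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)

# Stand-in for 67 CLE images x 4 features; labels 0=normal, 1=VA, 2=CH.
y = np.repeat([0, 1, 2], [23, 22, 22])
X = rng.normal(0, 1, size=(67, 4)) + y[:, None]   # loosely separable classes

for name, clf in [("linear (LDA)", LinearDiscriminantAnalysis()),
                  ("quadratic (QDA)", QuadraticDiscriminantAnalysis())]:
    acc = cross_val_score(clf, X, y, cv=10)       # ten-fold cross-validation
    print(f"{name}: mean CV accuracy = {acc.mean():.1%}")
```

The quadratic rule fits a separate covariance per class, which matches the paper's finding that the standard quadratic analysis outperforms the linear one when class spreads differ.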
Local connected fractal dimension analysis in gill of fish experimentally exposed to toxicants.
Manera, Maurizio; Giari, Luisa; De Pasquale, Joseph A; Sayyaf Dezfuli, Bahram
2016-06-01
An operator-neutral method was implemented to objectively assess European seabass, Dicentrarchus labrax (Linnaeus, 1758), gill pathology after experimental exposure to cadmium (Cd) and terbuthylazine (TBA) for 24 and 48 h. An algorithm-derived local connected fractal dimension (LCFD) frequency measure was used in this comparative analysis. Canonical variates analysis (CVA) and linear discriminant analysis (LDA) were used to evaluate the discrimination power of the method among exposure classes (unexposed, Cd exposed, TBA exposed). Misclassification, sensitivity and specificity, both with original and cross-validated cases, were determined. LCFD frequencies enhanced the differences among classes, which were visually selected after analyzing, in scatter plots, their means, respective variances, and the differences between the Cd- and TBA-exposed means with respect to the unexposed mean. Selected frequencies were then scanned by means of LDA, stepwise analysis, and Mahalanobis distance to detect the most discriminative frequencies out of the ten originally selected. Discrimination resulted in 91.7% of cross-validated cases correctly classified (22 out of 24 total cases), with sensitivity and specificity, respectively, of 95.5% (1 false negative against 21 truly positive cases) and 75% (1 false positive against 3 truly negative cases). CVA with convex hull polygons ensured prompt, visually intuitive discrimination among exposure classes and graphically supported the false positive case. The combined use of semithin sections, which enhanced the visual evaluation of the overall lamellar structure; of LCFD analysis, which objectively detected local variation in complexity, without the possible bias connected to human assessment; and of CVA/LDA could be an objective, sensitive and specific approach to study fish gill lamellar pathology. Furthermore, this approach enabled discrimination with sufficient confidence between exposure classes or pathological states and avoided misdiagnosis. Copyright © 2016 Elsevier B.V. All rights reserved.
Optical modeling of waveguide coupled TES detectors towards the SAFARI instrument for SPICA
NASA Astrophysics Data System (ADS)
Trappe, N.; Bracken, C.; Doherty, S.; Gao, J. R.; Glowacka, D.; Goldie, D.; Griffin, D.; Hijmering, R.; Jackson, B.; Khosropanah, P.; Mauskopf, P.; Morozov, D.; Murphy, A.; O'Sullivan, C.; Ridder, M.; Withington, S.
2012-09-01
The next generation of space missions targeting far-infrared wavelengths will require large-format arrays of extremely sensitive detectors. Transition Edge Sensor (TES) array technology is being developed for future Far-Infrared (FIR) space applications such as the SAFARI instrument for SPICA, where low noise and high sensitivity are required to achieve ambitious science goals. In this paper we describe a modal analysis of multi-moded horn antennas feeding integrating cavities that house TES detectors with superconducting film absorbers. In high-sensitivity TES detector technology, the ability to control the electromagnetic and thermo-mechanical environment of the detector is critical. Simulating and understanding the optical behaviour of such detectors at FIR wavelengths is difficult and requires development of existing analysis tools. The proposed modal approach offers a computationally efficient technique to describe the partially coherent response of the full pixel in terms of optical efficiency and power leakage between pixels. Initial work carried out as part of an ESA technical research project on optical analysis is described, and a prototype SAFARI pixel design is analyzed in which the optical coupling between the incoming field and the pixel, containing horn, cavity with an air gap, and thin absorber layer, is fully included in the model to allow a comprehensive optical characterization. The modal approach described is based on the mode-matching technique, where the horn and cavity are described in the traditional way while a technique to include the absorber was developed. Radiation leakage between pixels is also included, making this a powerful analysis tool.
Takeyoshi, Masahiro; Sawaki, Masakuni; Yamasaki, Kanji; Kimber, Ian
2003-09-30
The murine local lymph node assay (LLNA) is used for the identification of chemicals that have the potential to cause skin sensitization. However, it requires specific facility and handling procedures to accommodate a radioisotopic (RI) endpoint. We have developed a non-radioisotopic (non-RI) endpoint for the LLNA based on BrdU incorporation to avoid the use of RI. Although this alternative method appears viable in principle, it is somewhat less sensitive than the standard assay. In this study, we report investigations into the use of statistical analysis to improve the sensitivity of a non-RI LLNA procedure with alpha-hexylcinnamic aldehyde (HCA) in two separate experiments. The alternative non-RI method required HCA concentrations of greater than 25% to elicit a positive response based on the criterion for classification as a skin sensitizer in the standard LLNA. Nevertheless, dose responses to HCA in the alternative method were consistent in both experiments, and we examined whether the use of an endpoint based upon the statistical significance of induced changes in LNC turnover, rather than an SI of 3 or greater, might provide additional sensitivity. The results reported here demonstrate that, with HCA at least, significant responses were recorded in each of two experiments following exposure of mice to 25% HCA. These data suggest that this approach may be more satisfactory, at least when BrdU incorporation is measured. However, this modification of the LLNA is still rather less sensitive than the standard method even when employing a statistical endpoint. Taken together, the data reported here suggest that a modified LLNA in which BrdU is used in place of radioisotope incorporation shows some promise, but that in its present form, even with the use of a statistical endpoint, it lacks some of the sensitivity of the standard method. The challenge is to develop strategies for further refinement of this approach.
Fraysse, Bodvaël; Barthélémy, Inès; Qannari, El Mostafa; Rouger, Karl; Thorin, Chantal; Blot, Stéphane; Le Guiner, Caroline; Chérel, Yan; Hogrel, Jean-Yves
2017-04-12
Accelerometric analysis of gait abnormalities in golden retriever muscular dystrophy (GRMD) dogs is of limited sensitivity, and produces highly complex data. The use of discriminant analysis may enable simpler and more sensitive evaluation of treatment benefits in this important preclinical model. Accelerometry was performed twice monthly between the ages of 2 and 12 months on 8 healthy and 20 GRMD dogs. Seven accelerometric parameters were analysed using linear discriminant analysis (LDA). Manipulation of the dependent and independent variables produced three distinct models. The ability of each model to detect gait alterations and their pattern change with age was tested using a leave-one-out cross-validation approach. Selecting genotype (healthy or GRMD) as the dependent variable resulted in a model (Model 1) allowing a good discrimination between the gait phenotype of GRMD and healthy dogs. However, this model was not sufficiently representative of the disease progression. In Model 2, age in months was added as a supplementary dependent variable (GRMD_2 to GRMD_12 and Healthy_2 to Healthy_9.5), resulting in a high overall misclassification rate (83.2%). To improve accuracy, a third model (Model 3) was created in which age was also included as an explanatory variable. This resulted in an overall misclassification rate lower than 12%. Model 3 was evaluated using blinded data pertaining to 81 healthy and GRMD dogs. In all but one case, the model correctly matched gait phenotype to the actual genotype. Finally, we used Model 3 to reanalyse data from a previous study regarding the effects of immunosuppressive treatments on muscular dystrophy in GRMD dogs. Our model identified significant effect of immunosuppressive treatments on gait quality, corroborating the original findings, with the added advantages of direct statistical analysis with greater sensitivity and more comprehensible data representation. Gait analysis using LDA allows for improved analysis of accelerometry data by applying a decision-making analysis approach to the evaluation of preclinical treatment benefits in GRMD dogs.
Gamma Spectroscopy by Artificial Neural Network Coupled with MCNP
NASA Astrophysics Data System (ADS)
Sahiner, Huseyin
While neutron activation analysis is widely used in many areas, the sensitivity of the analysis depends on how the analysis is conducted. Even though the sensitivity of the technique carries error, compared to chemical analysis its range is in parts per million or sometimes parts per billion. Due to this sensitivity, the use of neutron activation analysis becomes important when analyzing bio-samples. The artificial neural network is an attractive technique for complex systems. Although there are neural network applications to spectral analysis, training on simulated data to analyze experimental data has not been attempted. This study offers improvements in spectral analysis and neural network optimization for this purpose. The work considers five elements that are regarded as trace elements for bio-samples; however, the system is not limited to five elements. The only limitation of the study comes from data library availability in MCNP. A perceptron network was employed to identify five elements from gamma spectra. In quantitative analysis, better results were obtained when the neural fitting tool in MATLAB was used. As the training function, the Levenberg-Marquardt algorithm was used with 23 neurons in the hidden layer and 259 gamma spectra in the input. Because the study deals with five elements, five input neurons representing the peak counts of five isotopes were used. Five output neurons revealed the mass information of these elements from irradiated kidney stones. Results showing a maximum error of 17.9% for the APA type, 24.9% for UA, 28.2% for COM, and 27.9% for STRU demonstrated the success of the neural network approach in analyzing gamma spectra. The high error was attributed to Zn, which has a very long decay half-life compared to the other elements. The simulations and experiments were performed under a specific experimental setup (3 hours irradiation, 96 hours decay time, 8 hours counting time); nevertheless, the approach can be generalized to different setups.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunke, Elizabeth Clare; Urrego Blanco, Jorge Rolando; Urban, Nathan Mark
Coupled climate models have a large number of input parameters that can affect output uncertainty. We conducted a sensitivity analysis of sea ice properties and Arctic-related climate variables to 5 parameters in the HiLAT climate model: air-ocean turbulent exchange parameter (C), conversion of water vapor to clouds (cldfrc_rhminl) and of ice crystals to snow (micro_mg_dcs), snow thermal conductivity (ksno), and maximum snow grain size (rsnw_mlt). We used an elementary effect (EE) approach to rank their importance for output uncertainty. EE is an extension of one-at-a-time sensitivity analyses, but it is more efficient in sampling multi-dimensional parameter spaces. We looked for emerging relationships among climate variables across the model ensemble, and used causal discovery algorithms to establish potential pathways for those relationships.
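The elementary effect of input i at base point x is EE_i = [f(x + Δe_i) − f(x)]/Δ; averaging |EE_i| over many randomized base points ranks the inputs, and the spread of EE_i flags nonlinearity or interactions. A generic radial-sampling sketch with a stand-in model in place of HiLAT (parameter names follow the abstract; the model itself is invented):

```python
import numpy as np

def model(x):
    """Stand-in for the climate model: 5 normalized inputs -> 1 output."""
    c, cldfrc, dcs, ksno, rsnw = x
    return 3 * c + 2 * cldfrc * dcs + 0.5 * ksno ** 2 + 0.1 * rsnw

def morris_ee(f, dim=5, r=50, delta=0.1, seed=8):
    """Radial one-at-a-time elementary effects over r random base points."""
    rng = np.random.default_rng(seed)
    ee = np.empty((r, dim))
    for j in range(r):
        x = rng.uniform(0, 1 - delta, size=dim)  # base point in [0, 1]^dim
        base = f(x)
        for i in rng.permutation(dim):           # perturb one input at a time
            x2 = x.copy()
            x2[i] += delta
            ee[j, i] = (f(x2) - base) / delta
    mu_star = np.abs(ee).mean(axis=0)            # overall importance
    sigma = ee.std(axis=0, ddof=1)               # nonlinearity / interactions
    return mu_star, sigma

mu_star, sigma = morris_ee(model)
names = ["C", "cldfrc_rhminl", "micro_mg_dcs", "ksno", "rsnw_mlt"]
for name, m, s in zip(names, mu_star, sigma):
    print(f"{name:14s} mu* = {m:5.2f}  sigma = {s:5.2f}")
```

Each of the r base points costs dim + 1 model runs, which is why the EE design samples a 5-dimensional parameter space far more cheaply than a full factorial study.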
Enzymatic signal amplification for sensitive detection of intracellular antigens by flow cytometry.
Karkmann, U; Radbruch, A; Hölzel, V; Scheffold, A
1999-11-19
Flow cytometry is the method of choice for the analysis of single cells with respect to the expression of specific antigens. Antigens can be detected with specific antibodies either on the cell surface or within the cells, after fixation and permeabilization of the cell membrane. Using conventional fluorochrome-labeled antibodies several thousand antigens are required for clear-cut separation of positive and negative cells. More sensitive reagents, e.g., magnetofluorescent liposomes conjugated to specific antibodies permit the detection of less than 200 molecules per cell but cannot be used for the detection of intracellular antigens. Here, we describe an enzymatic amplification technique (intracellular tyramine-based signal amplification, ITSA) for the sensitive cytometric analysis of intracellular cytokines by immunofluorescence. This approach results in a 10 to 15-fold improvement of the signal-to-noise ratio compared to conventional fluorochrome labeled antibodies and permits the detection of as few as 300-400 intracellular antigens per cell.
Autonomous Mars ascent and orbit rendezvous for earth return missions
NASA Technical Reports Server (NTRS)
Edwards, H. C.; Balmanno, W. F.; Cruz, Manuel I.; Ilgen, Marc R.
1991-01-01
The details of the assessment of autonomous Mars ascent and orbit rendezvous for Earth return missions are presented. Analyses addressing navigation system assessments, trajectory planning, targeting approaches, flight control guidance strategies, and performance sensitivities are included. Tradeoffs in the analysis and design process are discussed.
Meshless methods in shape optimization of linear elastic and thermoelastic solids
NASA Astrophysics Data System (ADS)
Bobaru, Florin
This dissertation proposes a meshless approach to problems in shape optimization of elastic and thermoelastic solids. The Element-Free Galerkin (EFG) method is used for this purpose. The ability of the EFG method to avoid the remeshing that is normally required in a Finite Element approach to correct highly distorted meshes is clearly demonstrated by several examples. The shape optimization example of a thermal cooling fin shows a dramatic improvement in the objective compared to a previous FEM analysis. More importantly, the new solution, displaying large shape changes compared with the initial design, was completely missed by the FEM analysis. The EFG formulation given here for shape optimization uncovers new solutions that are, apparently, unobtainable via a FEM approach. This is one of the main achievements of our work. The variational formulations for the analysis problem and for the sensitivity problems are obtained with a penalty method for imposing the displacement boundary conditions. The continuum formulation is general, and the 2D and 3D cases therefore differ only slightly from one another. Transient thermoelastic problems can also apply the present development at each time step to solve shape optimization problems for time-dependent thermal loads. Within the elasticity framework, displacement sensitivity is obtained in the EFG context. Excellent agreement with analytical solutions for some test problems is obtained. The shape optimization of a fillet is carried out in great detail, and results show significant improvement of the EFG solution over the FEM or Boundary Element Method solutions. In our approach we avoid differentiating the complicated EFG shape functions with respect to the shape design parameters by using a particular discretization for the sensitivity calculations. Displacement and temperature sensitivities are formulated for the shape optimization of a linear thermoelastic solid. Two important examples considered in this work, the optimization of a thermal fin and of a uniformly loaded thermoelastic beam, reveal new characteristics of the EFG method in shape optimization applications. Among the advantages of the EFG method over traditional FEM treatments of shape optimization problems, the most important are shown to be: elimination of post-processing for stress and strain recovery, which directly gives more accurate results in critical locations (near the boundaries, for example) in shape optimization problems; and node movement flexibility, which permits new, better shapes (previously missed by an FEM analysis) to be discovered. Several new research directions that need further consideration are identified.
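Because EFG shape functions lack the Kronecker-delta property, the penalty method mentioned above enforces displacement boundary conditions by augmenting the potential energy. In a standard textbook form (a generic statement, not necessarily the dissertation's exact notation; α is the penalty parameter and ū the prescribed boundary displacement):

$$ \Pi(\mathbf{u}) \;=\; \frac{1}{2}\int_{\Omega} \boldsymbol{\varepsilon}(\mathbf{u}) : \mathbf{C} : \boldsymbol{\varepsilon}(\mathbf{u})\, d\Omega \;-\; \int_{\Omega} \mathbf{b}\cdot\mathbf{u}\, d\Omega \;-\; \int_{\Gamma_t} \bar{\mathbf{t}}\cdot\mathbf{u}\, d\Gamma \;+\; \frac{\alpha}{2}\int_{\Gamma_u} \left(\mathbf{u}-\bar{\mathbf{u}}\right)\cdot\left(\mathbf{u}-\bar{\mathbf{u}}\right) d\Gamma $$

Stationarity of Π yields the penalized stiffness system, and differentiating that system with respect to the shape design parameters gives the sensitivity equations discussed above.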
Mason, Alexina J; Gomes, Manuel; Grieve, Richard; Ulug, Pinar; Powell, Janet T; Carpenter, James
2017-01-01
Background/aims: The analyses of randomised controlled trials with missing data typically assume that, after conditioning on the observed data, the probability of missing data does not depend on the patient's outcome, and so the data are 'missing at random'. This assumption is usually implausible, for example, because patients in relatively poor health may be more likely to drop out. Methodological guidelines recommend that trials include sensitivity analysis, which is best informed by elicited expert opinion, to assess whether conclusions are robust to alternative assumptions about the missing data. A major barrier to implementing these methods in practice is the lack of relevant practical tools for eliciting expert opinion. We develop a new practical tool for eliciting expert opinion and demonstrate its use for randomised controlled trials with missing data. Methods: We develop and illustrate our approach for eliciting expert opinion with the IMPROVE trial (ISRCTN 48334791), an ongoing multi-centre randomised controlled trial which compares an emergency endovascular strategy versus open repair for patients with ruptured abdominal aortic aneurysm. In the IMPROVE trial, at 3 months post-randomisation, 21% of surviving patients did not complete health-related quality of life questionnaires (assessed by EQ-5D-3L). We address this problem by developing a web-based tool that provides a practical approach for eliciting expert opinion about quality of life differences between patients with missing versus complete data. We show how this expert opinion can define informative priors within a fully Bayesian framework to perform sensitivity analyses that allow the missing data to depend upon unobserved patient characteristics. Results: A total of 26 experts, of 46 asked to participate, completed the elicitation exercise. The elicited quality of life scores were lower on average for the patients with missing versus complete data, but there was considerable uncertainty in these elicited values. The missing at random analysis found that patients randomised to the emergency endovascular strategy versus open repair had higher average (95% credible interval) quality of life scores of 0.062 (−0.005 to 0.130). Our sensitivity analysis that used the elicited expert information as pooled priors found that the gain in average quality of life for the emergency endovascular strategy versus open repair was 0.076 (−0.054 to 0.198). Conclusion: We provide and exemplify a practical tool for eliciting the expert opinion required by recommended approaches to the sensitivity analyses of randomised controlled trials. We show how this approach allows the trial analysis to fully recognise the uncertainty that arises from making alternative, plausible assumptions about the reasons for missing data. This tool can be widely used in the design, analysis and interpretation of future trials, and to facilitate this, materials are available for download. PMID:28675302
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGraw, David; Hershey, Ronald L.
Methods were developed to quantify uncertainty and sensitivity for NETPATH inverse water-rock reaction models and to calculate dissolved inorganic carbon, carbon-14 groundwater travel times. The NETPATH models calculate upgradient groundwater mixing fractions that produce the downgradient target water chemistry, along with amounts of mineral phases that are either precipitated or dissolved. Carbon-14 groundwater travel times are calculated based on the upgradient source-water fractions, carbonate mineral phase changes, and isotopic fractionation. Custom scripts and statistical code were developed for this study to facilitate modifying input parameters, running the NETPATH simulations, extracting relevant output, postprocessing the results, and producing graphs and summaries. The scripts read user-specified values for each constituent's coefficient of variation, distribution, sensitivity parameter, maximum dissolution or precipitation amounts, and number of Monte Carlo simulations. Monte Carlo methods for analysis of parametric uncertainty assign a distribution to each uncertain variable, sample from those distributions, and evaluate the ensemble output. The uncertainty in input affected the variability of outputs, namely source-water mixing, phase dissolution and precipitation amounts, and carbon-14 travel time. Although NETPATH may provide models that satisfy the constraints, it is up to the geochemist to determine whether the results are geochemically reasonable. Two example water-rock reaction models from previous geochemical reports were considered in this study. Sensitivity analysis was also conducted to evaluate the change in output caused by a small change in input, one constituent at a time. Results were standardized to allow for sensitivity comparisons across all inputs, which yields a representative value for each scenario. The approach provided insight into the uncertainty in water-rock reactions and travel times. For example, there was little variation in source-water fraction between the deterministic and Monte Carlo approaches, and therefore little variation in travel times between approaches. Sensitivity analysis proved very useful for identifying the most important input constraints (dissolved-ion concentrations), which can reveal the variables that have the most influence on source-water fractions and carbon-14 travel times. Once these variables are determined, more focused effort can be applied to determining the proper distribution for each constraint. In addition, Monte Carlo results for water-rock reaction modeling showed discrete and nonunique results: the NETPATH models provide the solutions that satisfy the constraints of upgradient and downgradient water chemistry, multiple discrete solutions can exist for any scenario, and these discrete solutions cause grouping of results. As a result, the variability in output may not easily be represented by a single distribution or a mean and variance, and care should be taken in the interpretation and reporting of results.
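The scripted workflow can be sketched generically: assign each input constraint a distribution built from its reported value and coefficient of variation, sample, run the inverse model, and summarize the ensemble. In the sketch below, `run_netpath` is a hypothetical placeholder standing in for a call to the external NETPATH code, and all values are invented:

```python
import numpy as np

rng = np.random.default_rng(9)

# Input constraints: dissolved-ion concentrations (mg/L) with assumed CVs.
constraints = {"Ca": (45.0, 0.05), "Mg": (12.0, 0.05),
               "Na": (30.0, 0.10), "HCO3": (180.0, 0.05)}

def run_netpath(sample):
    """Hypothetical stand-in for a NETPATH run: returns (mixing fraction,
    carbon-14 travel time in years) for one sampled set of constraints."""
    frac = 0.4 + 0.002 * (sample["Ca"] - 45.0)        # placeholder response
    travel = 12_000 + 300 * (sample["HCO3"] - 180.0)
    return frac, travel

results = []
for _ in range(5_000):  # Monte Carlo over the uncertain constraints
    sample = {ion: rng.normal(mean, cv * mean)        # std = CV * mean
              for ion, (mean, cv) in constraints.items()}
    results.append(run_netpath(sample))

frac, travel = np.array(results).T
print(f"mixing fraction:  {frac.mean():.2f} +/- {frac.std():.2f}")
print(f"C-14 travel time: {travel.mean():,.0f} +/- {travel.std():,.0f} years")
```

With a real inverse model in place of the placeholder, the ensemble outputs can be grouped before summarizing, which addresses the discrete, nonunique solutions noted above.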
Pérez, Teresa; Makrestsov, Nikita; Garatt, John; Torlakovic, Emina; Gilks, C Blake; Mallett, Susan
The Canadian Immunohistochemistry Quality Control program monitors clinical laboratory performance for estrogen receptor and progesterone receptor tests used in breast cancer treatment management in Canada. Current methods assess sensitivity and specificity at each time point, compared with a reference standard. We investigate alternative performance analysis methods to enhance the quality assessment. We used 3 methods of analysis: meta-analysis of the sensitivity and specificity of each laboratory across all time points; sensitivity and specificity at each time point for each laboratory; and fitting models for repeated measurements to examine differences between laboratories adjusted by test and time point. In total, 88 laboratories participated in quality control at up to 13 time points, using typically 37 to 54 histology samples. In meta-analysis across all time points, no laboratory had sensitivity or specificity below 80%. Current methods, which present sensitivity and specificity separately for each run, result in wide 95% confidence intervals, typically spanning 15% to 30%. Models of a single diagnostic outcome demonstrated that 82% to 100% of laboratories had no difference to the reference standard for estrogen receptor and 75% to 100% for progesterone receptor, with the exception of 1 progesterone receptor run. Laboratories with significant differences to the reference standard identified with Generalized Estimating Equation modeling also had reduced performance by meta-analysis across all time points. The Canadian Immunohistochemistry Quality Control program has a good design, and with this modeling approach has sufficient precision to measure performance at each time point and allow laboratories with significantly lower performance to be targeted for advice.
Identification of Proteus mirabilis Mutants with Increased Sensitivity to Antimicrobial Peptides
McCoy, Andrea J.; Liu, Hongjian; Falla, Timothy J.; Gunn, John S.
2001-01-01
Antimicrobial peptides (APs) are important components of the innate defenses of animals, plants, and microorganisms. However, some bacterial pathogens are resistant to the action of APs. For example, Proteus mirabilis is highly resistant to the action of APs, such as polymyxin B (PM), protegrin, and the synthetic protegrin analog IB-367. To better understand this resistance, a transposon mutagenesis approach was used to generate P. mirabilis mutants sensitive to APs. Four unique PM-sensitive mutants of P. mirabilis were identified (these mutants were >2 to >128 times more sensitive than the wild type). Two of these mutants were also sensitive to IB-367 (16 and 128 times more sensitive than the wild type). Lipopolysaccharide (LPS) profiles of the PM- and protegrin-sensitive mutants demonstrated marked differences in both the lipid A and O-antigen regions, while the PM-sensitive mutants appeared to have alterations of either lipid A or O antigen. Matrix-assisted laser desorption ionization–time of flight mass spectrometry analysis of the wild-type and PM-sensitive mutant lipid A showed species with one or two aminoarabinose groups, while lipid A from the PM- and protegrin-sensitive mutants was devoid of aminoarabinose. When the mutants were streaked on an agar-containing medium, the swarming motility of the PM- and protegrin-sensitive mutants was completely inhibited and the swarming motility of the mutants sensitive to only PM was markedly decreased. DNA sequence analysis of the mutagenized loci revealed similarities to an O-acetyltransferase (PM and protegrin sensitive) and ATP synthase and sap loci (PM sensitive). These data further support the role of LPS modifications as an elaborate mechanism in the resistance of certain bacterial species to APs and suggest that LPS surface charge alterations may play a role in P. mirabilis swarming motility. PMID:11408219
Chu, Haitao; Nie, Lei; Cole, Stephen R; Poole, Charles
2009-08-15
In a meta-analysis of diagnostic accuracy studies, the sensitivities and specificities of a diagnostic test may depend on the disease prevalence, since the severity and definition of disease may differ from study to study due to the design and the population considered. In this paper, we extend the bivariate nonlinear random-effects model on sensitivities and specificities to jointly model the disease prevalence, sensitivities and specificities using trivariate nonlinear random-effects models. Furthermore, as an alternative parameterization, we also propose jointly modeling the test prevalence and the predictive values, which reflect the clinical utility of a diagnostic test. These models allow investigators to study the complex relationship among the disease prevalence, sensitivities and specificities, or among the test prevalence and the predictive values, which can reveal hidden information about test performance. We illustrate the two proposed approaches by reanalyzing data from a meta-analysis of radiological evaluation of lymph node metastases in patients with cervical cancer, and by a simulation study. The latter illustrates the importance of carefully choosing an appropriate normality assumption for the disease prevalence, sensitivities and specificities, or the test prevalence and the predictive values. In practice, it is recommended to use model selection techniques to identify a best-fitting model for making statistical inference. In summary, the proposed trivariate random-effects models are novel and can be very useful in practice for meta-analysis of diagnostic accuracy studies. Copyright 2009 John Wiley & Sons, Ltd.
Verzotto, Davide; M Teo, Audrey S; Hillmer, Axel M; Nagarajan, Niranjan
2016-01-01
Resolution of complex repeat structures and rearrangements in the assembly and analysis of large eukaryotic genomes is often aided by a combination of high-throughput sequencing and genome-mapping technologies (for example, optical restriction mapping). In particular, mapping technologies can generate sparse maps of large DNA fragments (150 kilobase pairs (kbp) to 2 Mbp) and thus provide a unique source of information for disambiguating complex rearrangements in cancer genomes. Despite their utility, combining high-throughput sequencing and mapping technologies has been challenging because of the lack of efficient and sensitive map-alignment algorithms for robustly aligning error-prone maps to sequences. We introduce a novel seed-and-extend glocal (short for global-local) alignment method, OPTIMA (and a sliding-window extension for overlap alignment, OPTIMA-Overlap), which is the first to create indexes for continuous-valued mapping data while accounting for mapping errors. We also present a novel statistical model, agnostic with respect to technology-dependent error rates, for conservatively evaluating the significance of alignments without relying on expensive permutation-based tests. We show that OPTIMA and OPTIMA-Overlap outperform other state-of-the-art approaches (1.6-2 times more sensitive) and are more efficient (170-200%) and precise in their alignments (nearly 99% precision). These advantages are independent of the quality of the data, suggesting that our indexing approach and statistical evaluation are robust, provide improved sensitivity and guarantee high precision.
Wagener, Thorsten; McGlynn, Brian
2015-01-01
Ungauged headwater basins are an abundant part of the river network, but dominant influences on headwater hydrologic response remain difficult to predict. To address this gap, we investigated the ability of a physically based watershed model (the Distributed Hydrology‐Soil‐Vegetation Model) to represent controls on metrics of hydrologic partitioning across five adjacent headwater subcatchments. The five study subcatchments, located in Tenderfoot Creek Experimental Forest in central Montana, have similar climate but variable topography and vegetation distribution. This facilitated a comparative hydrology approach to interpret how parameters that influence partitioning, detected via global sensitivity analysis, differ across catchments. Model parameters were constrained a priori using existing regional information and expert knowledge. Influential parameters were compared to perceptions of catchment functioning and its variability across subcatchments. Despite between‐catchment differences in topography and vegetation, hydrologic partitioning across all metrics and all subcatchments was sensitive to a similar subset of snow, vegetation, and soil parameters. Results also highlighted one subcatchment with low certainty in parameter sensitivity, indicating that the model poorly represented some complexities in this subcatchment, likely because an important process is missing or poorly characterized in the mechanistic model. For use in other basins, this method can assess parameter sensitivities as a function of the specific ungauged system to which it is applied. Overall, this approach can be employed to identify dominant modeled controls on catchment response and their agreement with system understanding. PMID:27642197
Evaluation of a Mysis bioenergetics model
Chipps, S.R.; Bennett, D.H.
2002-01-01
Direct approaches for estimating the feeding rate of the opossum shrimp Mysis relicta can be hampered by variable gut residence time (evacuation rate models) and non-linear functional responses (clearance rate models). Bioenergetics modeling provides an alternative method, but the reliability of this approach needs to be evaluated using independent measures of growth and food consumption. In this study, we measured growth and food consumption for M. relicta and compared experimental results with those predicted from a Mysis bioenergetics model. For Mysis reared at 10 °C, model predictions were not significantly different from observed values. Moreover, decomposition of mean square error indicated that 70% of the variation between model predictions and observed values was attributable to random error. On average, model predictions were within 12% of observed values. A sensitivity analysis revealed that Mysis respiration and prey energy density were the most sensitive parameters affecting model output. By accounting for uncertainty (95% CLs) in Mysis respiration, we observed a significant improvement in the accuracy of model output (within 5% of observed values), illustrating the importance of sensitive input parameters for model performance. These findings help corroborate the Mysis bioenergetics model and demonstrate the usefulness of this approach for estimating Mysis feeding rate.
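A standard way to attribute model-observation error to systematic versus random components is a Theil-style partition of the mean square error. The Python sketch below is a generic illustration on invented observed/predicted values (not the Mysis data), and the partition shown is one common form, not necessarily the exact decomposition the authors used.

```python
import numpy as np

def mse_components(obs, pred):
    """Partition MSE into bias, variance-difference and random terms:
    MSE = (mp-mo)^2 + (sp-so)^2 + 2*sp*so*(1-r)  (population std devs)."""
    mo, mp = obs.mean(), pred.mean()
    so, sp = obs.std(), pred.std()
    r = np.corrcoef(obs, pred)[0, 1]
    bias2 = (mp - mo) ** 2
    var_diff = (sp - so) ** 2
    random_err = 2 * sp * so * (1 - r)
    mse = bias2 + var_diff + random_err
    return {k: v / mse for k, v in
            dict(bias=bias2, variance=var_diff, random=random_err).items()}

# Hypothetical observed vs model-predicted consumption values
obs  = np.array([1.2, 1.5, 1.9, 2.3, 2.8, 3.1])
pred = np.array([1.3, 1.4, 2.0, 2.5, 2.6, 3.3])
print(mse_components(obs, pred))   # fraction of MSE per component
```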
A Bayesian approach to modelling the impact of hydrodynamic shear stress on biofilm deformation
Wilkinson, Darren J.; Jayathilake, Pahala Gedara; Rushton, Steve P.; Bridgens, Ben; Li, Bowen; Zuliani, Paolo
2018-01-01
We investigate the feasibility of using a surrogate-based method to emulate the deformation and detachment behaviour of a biofilm in response to hydrodynamic shear stress. The influence of shear force, growth rate and viscoelastic parameters on the patterns of growth, structure and resulting shape of microbial biofilms was examined. We develop a statistical modelling approach to this problem, using a combination of Bayesian Poisson regression and dynamic linear models for the emulation. We observe that the hydrodynamic shear force affects biofilm deformation in line with the previous literature. Sensitivity results also showed that the expected number of shear events, shear flow, yield coefficient for heterotrophic bacteria and extracellular polymeric substance (EPS) stiffness per unit EPS mass are the four principal mechanisms governing bacterial detachment in this study. The sensitivity of the model parameters is temporally dynamic, emphasising the significance of conducting the sensitivity analysis across multiple time points. The surrogate models are shown to perform well, producing a ≈480-fold increase in computational efficiency. We conclude that a surrogate-based approach is effective, and that the resulting biofilm structure is determined primarily by a balance between bacterial growth, viscoelastic parameters and applied shear stress. PMID:29649240
Kabir, Muhammad N.; Alginahi, Yasser M.
2014-01-01
This paper addresses the problems and threats associated with verification of integrity, proof of authenticity, tamper detection, and copyright protection for digital-text content. Such issues were largely addressed in the literature for images, audio, and video, with only a few papers addressing the challenge of sensitive plain-text media under known constraints. Specifically, with text as the predominant online communication medium, it becomes crucial that techniques are deployed to protect such information. A number of digital-signature, hashing, and watermarking schemes have been proposed that essentially bind source data or embed invisible data in a cover media to achieve their goal. While many such complex schemes with resource redundancies are sufficient for offline and less-sensitive texts, this paper proposes a hybrid approach based on zero-watermarking and digital-signature-like manipulations for sensitive text documents in order to achieve content originality and integrity verification without physically modifying the cover text in any way. The proposed algorithm was implemented and shown to be robust against undetected content modifications and is capable of confirming proof of originality whilst detecting and locating deliberate/nondeliberate tampering. Additionally, enhancements in resource utilisation and reduced redundancies were achieved in comparison to traditional encryption-based approaches. Finally, analysis and remarks are made about the current state of the art, and future research issues are discussed under the given constraints. PMID:25254247
Wang, Ming; Long, Qi
2016-09-01
Prediction models for disease risk and prognosis play an important role in biomedical research, and evaluating their predictive accuracy in the presence of censored data is of substantial interest. The standard concordance (c) statistic has been extended to provide a summary measure of predictive accuracy for survival models. Motivated by a prostate cancer study, we address several issues associated with evaluating survival prediction models based on the c-statistic, with a focus on estimators using the technique of inverse probability of censoring weighting (IPCW). Compared to the existing work, we provide complete results on the asymptotic properties of the IPCW estimators under the assumption of coarsening at random (CAR), and propose a sensitivity analysis under the mechanism of noncoarsening at random (NCAR). In addition, we extend the IPCW approach as well as the sensitivity analysis to high-dimensional settings. The predictive accuracy of prediction models for cancer recurrence after prostatectomy is assessed by applying the proposed approaches. We find that the estimated predictive accuracy for the models in consideration is sensitive to the NCAR assumption, and we thus identify the best predictive model. Finally, we further evaluate the performance of the proposed methods in both low-dimensional and high-dimensional settings under CAR and NCAR through simulations. © 2016, The International Biometric Society.
Voulgarelis, Dimitrios; Velayudhan, Ajoy; Smith, Frank
2017-01-01
Agent-based models provide a formidable tool for exploring the complex and emergent behaviour of biological systems and can produce accurate results, but with the drawback of requiring substantial computational power and time for subsequent analysis. Equation-based models, on the other hand, can more easily be used for complex analysis on a much shorter timescale. This paper formulates an ordinary differential equation and stochastic differential equation model to capture the behaviour of an existing agent-based model of tumour cell reprogramming, and applies it to optimization of possible treatment as well as dosage sensitivity analysis. For certain values of the parameter space, a close match between the equation-based and agent-based models is achieved. The need for division of labour between the two approaches is explored. Copyright© Bentham Science Publishers; For any queries, please email epub@benthamscience.org.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shrivastava, Manish; Zhao, Chun; Easter, Richard C.
We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to 7 selected tunable model parameters: 4 involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semi-volatile and intermediate volatility organics (SIVOCs), and NOx; 2 involving dry deposition of SOA precursor gases; and 1 involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250-member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether particle-phase transformation of SOA from semi-volatile to non-volatile is on or off, is the dominant contributor to variance of simulated surface-level daytime SOA (65% domain average contribution). We also split the simulations into 2 subsets of 125 each, depending on whether the volatility transformation is turned on or off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to dominance of intermediate to high NOx conditions throughout the simulated domain. The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance. This study highlights the large sensitivity of SOA loadings to the particle-phase transformation of SOA volatility, which is neglected in most previous models.
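The variance-based decomposition used here can be illustrated with a toy stand-in for the ensemble. In the Python sketch below (parameter values and the response function are invented, not WRF-Chem output), a linear model is fit to the ensemble and each parameter's share of the output variance is approximated as coef² · Var(Xj), which is reasonable when the quasi-Monte Carlo design keeps the columns nearly orthogonal.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for the 250-member ensemble: X holds the 7 sampled parameters,
# y the simulated SOA loading (a toy response for illustration).
n, p = 250, 7
X = rng.uniform(0.0, 1.0, size=(n, p))
y = 3.0 * X[:, 6] + 1.5 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.2, n)

# Variance-based sensitivity via a (generalized) linear model: fit main
# effects by least squares, then apportion explained variance per parameter.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
contrib = (coef[1:] ** 2) * X.var(axis=0)   # variance explained per term
share = contrib / y.var()
for j, s in enumerate(share):
    print(f"parameter {j}: {100 * s:.1f}% of output variance")
```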
Uncertainty Quantification and Sensitivity Analysis in the CICE v5.1 Sea Ice Model
NASA Astrophysics Data System (ADS)
Urrego-Blanco, J. R.; Urban, N. M.
2015-12-01
Changes in the high latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with mid latitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. In this work we characterize parametric uncertainty in the Los Alamos Sea Ice Model (CICE) and quantify the sensitivity of sea ice area, extent and volume with respect to uncertainty in about 40 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol sequences are used to efficiently sample the full 40-dimensional parameter space. This approach requires a very large number of model evaluations, which are expensive to run. A more computationally efficient approach is implemented by training and cross-validating a surrogate (emulator) of the sea ice model with model output from 400 model runs. The emulator is used to make predictions of sea ice extent, area, and volume at several model configurations, which are then used to compute the Sobol sensitivity indices of the 40 parameters. A ranking based on the sensitivity indices indicates that model output is most sensitive to snow parameters such as conductivity and grain size, and to the drainage of melt ponds. The main effects and interactions among the most influential parameters are also estimated by a non-parametric regression technique based on generalized additive models. It is recommended that research be prioritized toward more accurately determining the values of these most influential parameters, whether through observational studies or by improving existing parameterizations in the sea ice model.
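The Sobol workflow (sample with a low-discrepancy design, run the emulator, compute first- and total-order indices) can be sketched with the SALib package. Everything below is illustrative: the three parameter names, their bounds, and the toy response surface standing in for the trained CICE emulator are assumptions, not values from the study.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Three of the ~40 parameters (names and bounds are illustrative).
problem = {
    "num_vars": 3,
    "names": ["snow_conductivity", "snow_grain_size", "pond_drainage"],
    "bounds": [[0.1, 0.5], [50.0, 500.0], [0.0, 1.0]],
}

def emulator(x):
    # Placeholder response surface for, e.g., September sea ice extent.
    return 10.0 - 8.0 * x[0] - 0.005 * x[1] + 1.0 * x[2] * x[0]

X = saltelli.sample(problem, 1024)           # Sobol/Saltelli design
Y = np.apply_along_axis(emulator, 1, X)
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], Si["S1"])))  # first-order indices
print(dict(zip(problem["names"], Si["ST"])))  # total-order indices
```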
NASA Astrophysics Data System (ADS)
Dasgupta, Sambarta
Transient stability and sensitivity analysis of power systems are problems of enormous academic and practical interest. These classical problems have received renewed interest because of the advancement in sensor technology in the form of phasor measurement units (PMUs). This advancement has provided a unique opportunity for the development of real-time stability monitoring and sensitivity analysis tools. The transient stability problem in power systems is inherently a problem of stability analysis of non-equilibrium dynamics, because for a short time period following a fault or disturbance the system trajectory moves away from the equilibrium point. The real-time stability decision has to be made over this short time period. However, the existing stability definitions, and hence analysis tools, for transient stability are asymptotic in nature. In this thesis, we develop theoretical foundations for the short-term transient stability analysis of power systems, based on the theory of normally hyperbolic invariant manifolds and finite-time Lyapunov exponents, adopted from the geometric theory of dynamical systems. The theory of normally hyperbolic surfaces allows us to characterize the rates of expansion and contraction of co-dimension one material surfaces in the phase space. The expansion and contraction rates of these material surfaces can be computed in finite time. We prove that the expansion and contraction rates can be used as finite-time transient stability certificates. Furthermore, material surfaces with maximum expansion and contraction rates are identified with the stability boundaries. These stability boundaries are used for computation of the stability margin. We have used this theoretical framework for the development of model-based and model-free real-time stability monitoring methods. Both the model-based and model-free approaches rely on the availability of high-resolution time series data from the PMUs for stability prediction. The problem of sensitivity analysis of power systems, subjected to changes or uncertainty in load parameters and network topology, is also studied using the theory of normally hyperbolic manifolds. The sensitivity analysis is used for the identification and rank ordering of the critical interactions and parameters in the power network. The sensitivity analysis is carried out both in finite time and asymptotically. One of the distinguishing features of the asymptotic sensitivity analysis is that the asymptotic dynamics of the system is assumed to be a periodic orbit. For the asymptotic sensitivity analysis we employ a combination of tools from ergodic theory and the geometric theory of dynamical systems.
Sampson, Maureen L; Gounden, Verena; van Deventer, Hendrik E; Remaley, Alan T
2016-02-01
The main drawback of the periodic analysis of quality control (QC) material is that test performance is not monitored in time periods between QC analyses, potentially leading to the reporting of faulty test results. The objective of this study was to develop a patient based QC procedure for the more timely detection of test errors. Results from a Chem-14 panel measured on the Beckman LX20 analyzer were used to develop the model. Each test result was predicted from the other 13 members of the panel by multiple regression, which resulted in correlation coefficients between the predicted and measured result of >0.7 for 8 of the 14 tests. A logistic regression model, which utilized the measured test result, the predicted test result, the day of the week and time of day, was then developed for predicting test errors. The output of the logistic regression was tallied by a daily CUSUM approach and used to predict test errors, with a fixed specificity of 90%. The mean average run length (ARL) before error detection by CUSUM-Logistic Regression (CSLR) was 20 with a mean sensitivity of 97%, which was considerably shorter than the mean ARL of 53 (sensitivity 87.5%) for a simple prediction model that only used the measured result for error detection. A CUSUM-Logistic Regression analysis of patient laboratory data can be an effective approach for the rapid and sensitive detection of clinical laboratory errors. Published by Elsevier Inc.
Wu, Xia; Li, Juan; Ayutyanont, Napatkamon; Protas, Hillary; Jagust, William; Fleisher, Adam; Reiman, Eric; Yao, Li; Chen, Kewei
2014-01-01
Given a single index, receiver operating characteristic (ROC) curve analysis is routinely utilized for characterizing performance in distinguishing two conditions/groups in terms of sensitivity and specificity. Given the availability of multiple data sources (referred to as multi-indices), such as multimodal neuroimaging data sets, cognitive tests, and clinical ratings and genomic data in Alzheimer’s disease (AD) studies, the single-index-based ROC underutilizes all available information. A number of algorithmic/analytic approaches for combining multiple indices have long been used to simultaneously incorporate multiple sources. In this study, we propose an alternative for combining multiple indices using logical operations, such as “AND,” “OR,” and “at least n” (where n is an integer), to construct multivariate ROC (multiV-ROC) curves and characterize the sensitivity and specificity statistically associated with the use of multiple indices. With and without “leave-one-out” cross-validation, we used two data sets from AD studies to showcase the potentially increased sensitivity/specificity of the multiV-ROC in comparison with the single-index ROC and linear discriminant analysis (an analytic way of combining multi-indices). We conclude that, for the data sets we investigated, the proposed multiV-ROC approach is capable of providing a natural and practical alternative with improved classification accuracy as compared to univariate ROC and linear discriminant analysis. PMID:23702553
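The logical-combination idea is easy to demonstrate. The Python sketch below uses simulated data for two hypothetical indices (not the AD data sets): each index gets a cutoff, and the "AND", "OR", and "at least n" rules are evaluated for sensitivity and specificity; in a full multiV-ROC the cutoffs would be scanned over a grid.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two hypothetical indices (e.g., an imaging measure and a cognitive score)
# for 100 patients and 100 controls.
n = 100
idx1 = np.r_[rng.normal(1.0, 1.0, n), rng.normal(0.0, 1.0, n)]
idx2 = np.r_[rng.normal(0.8, 1.0, n), rng.normal(0.0, 1.0, n)]
truth = np.r_[np.ones(n, bool), np.zeros(n, bool)]

def sens_spec(positive):
    sens = positive[truth].mean()
    spec = (~positive[~truth]).mean()
    return sens, spec

t1, t2 = 0.5, 0.4                    # per-index cutoffs (would be scanned)
p1, p2 = idx1 > t1, idx2 > t2
print("index 1 alone:", sens_spec(p1))
print("AND rule:     ", sens_spec(p1 & p2))  # stricter: higher specificity
print("OR rule:      ", sens_spec(p1 | p2))  # looser: higher sensitivity
# "at least n" generalizes to more indices via vote counting:
votes = p1.astype(int) + p2.astype(int)
print("at least 1:   ", sens_spec(votes >= 1))
```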
Design principles for water-sensitive settlement areas on river banks
NASA Astrophysics Data System (ADS)
Ryanti, E.; Hasriyanti, N.; Utami, W. D.
2018-03-01
This research formulates design principles for the settlement area along the Kapuas River in Pontianak using the water-sensitive urban design (WSUD) approach for densely populated settlements. A case study of a dense settlement area located on the banks of the river is combined with literature study techniques to formulate the aspects to be considered and the components to be set in the design, and with descriptive analysis under a rationalistic paradigm to identify the characteristics of settlements in riverbank areas in light of WSUD elements and to formulate the principles for designing water-sensitive settlement areas. This research is important because water management in the existing riverside settlements of Pontianak has not yet been adequately addressed. The research therefore pursues several objectives: to identify the characteristics of the riverside settlement area based on water-sensitive design aspects and the corresponding design principles, so that the existing problem structure can be related to the community’s need for infrastructure in the settlement environment; and to formulate and develop appropriate technology guidelines for integrated water management systems in riverside settlement areas, together with design techniques for water-sensitive settlements (WSUD).
System analysis in rotorcraft design: The past decade
NASA Technical Reports Server (NTRS)
Galloway, Thomas L.
1988-01-01
Rapid advances in the technology of electronic digital computers and the need for an integrated synthesis approach in developing future rotorcraft programs have led to increased emphasis on system analysis techniques in rotorcraft design. The task in systems analysis is to deal with complex, interdependent, and conflicting requirements in a structured manner so that rational and objective decisions can be made. Whether the results are wisdom or rubbish depends upon the validity and, sometimes more importantly, the consistency of the inputs, the correctness of the analysis, and a sensible choice of measures of effectiveness for drawing conclusions. In rotorcraft design this means combining design requirements, technology assessment, sensitivity analysis and review techniques currently in use by NASA and Army organizations in developing research programs and vehicle specifications for rotorcraft. These procedures span simple graphical approaches to comprehensive analyses on large mainframe computers. Examples of recent applications to military and civil missions are highlighted.
The Importance of Proving the Null
Gallistel, C. R.
2010-01-01
Null hypotheses are simple, precise, and theoretically important. Conventional statistical analysis cannot support them; Bayesian analysis can. The challenge in a Bayesian analysis is to formulate a suitably vague alternative, because the vaguer the alternative is (the more it spreads out the unit mass of prior probability), the more the null is favored. A general solution is a sensitivity analysis: Compute the odds for or against the null as a function of the limit(s) on the vagueness of the alternative. If the odds on the null approach 1 from above as the hypothesized maximum size of the possible effect approaches 0, then the data favor the null over any vaguer alternative to it. The simple computations and the intuitive graphic representation of the analysis are illustrated by the analysis of diverse examples from the current literature. They pose 3 common experimental questions: (a) Are 2 means the same? (b) Is performance at chance? (c) Are factors additive? PMID:19348549
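Gallistel's sensitivity analysis can be reproduced in a few lines. The Python sketch below is illustrative: the observed mean, its standard error, and the uniform form of the vague alternative are invented assumptions, and the odds on the null are computed as the ratio of marginal likelihoods while the alternative's maximum effect size is varied.

```python
import numpy as np
from scipy import stats

# Hypothetical data: observed mean effect and its standard error.
ybar, se = 0.05, 0.10

def null_odds(max_effect, grid=2000):
    """Odds of the null vs an alternative spreading its prior mass
    uniformly over [-max_effect, max_effect]."""
    like_null = stats.norm.pdf(ybar, loc=0.0, scale=se)
    mu = np.linspace(-max_effect, max_effect, grid)
    like_alt = stats.norm.pdf(ybar, loc=mu, scale=se).mean()  # marginal lik.
    return like_null / like_alt

for L in [0.05, 0.1, 0.2, 0.5, 1.0]:
    print(f"max effect {L:4.2f}: odds on the null = {null_odds(L):5.2f}")
# If the odds approach 1 from above as max_effect -> 0, the data favor the
# null over any vaguer alternative (Gallistel's criterion).
```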
Fused-data transrectal EIT for prostate cancer imaging.
Murphy, Ethan K; Wu, Xiaotian; Halter, Ryan J
2018-05-25
Prostate cancer is a significant problem affecting 1 in 7 men. Unfortunately, the diagnostic gold standard of ultrasound-guided biopsy misses 10%-30% of all cancers. The objective of this study was to develop an electrical impedance tomography (EIT) approach that has the potential to image the entire prostate using multiple impedance measurements recorded between electrodes integrated onto an end-fired transrectal ultrasound (TRUS) device and a biopsy probe (BP). Simulations and sensitivity analyses were used to investigate the best combination of electrodes, and measured tank experiments were used to evaluate a fused-data transrectal EIT (fd-TREIT) and BP approach. Simulations and sensitivity analysis revealed that (1) TREIT measurements are not sufficiently sensitive to image the whole prostate, (2) the combination of TREIT + BP measurements increases the sensitive region of TREIT-only measurements by 12×, and (3) the fusion of multiple TREIT + BP measurements collected during a routine or customized 12-core biopsy procedure can cover up to 76.1% or 94.1% of a nominal 50 cm³ prostate, respectively. Three measured tank experiments of the fd-TREIT + BP approach successfully and accurately recovered the positions of 2-3 metal or plastic inclusions. The measured tank experiments represent important steps in the development of an algorithm that can combine EIT from multiple locations and multiple probes, using data that could be collected during a routine TRUS-guided 12-core biopsy. Overall, this result is a step towards a clinically deployable impedance imaging approach to scanning the entire prostate, which could significantly help to improve prostate cancer diagnosis.
Hu, Zhixiong; Cheng, Peng; Guo, Mingli; Zhang, Weinong; Qi, Yutang
2013-07-10
A novel approach of periodate oxidation coupled with high-performance liquid chromatography (HPLC)-fluorescence detection (FLD) for the quantitative determination of 3-chloro-1,2-propanediol (3-MCPD) has been established. The essence of this approach lies in the production of chloroacetaldehyde by the oxidative cleavage of 3-MCPD with sodium periodate and the HPLC analysis of chloroacetaldehyde monitored by an FLD detector after fluorescence derivatization with adenine. The experimental parameters relating to the efficiency of the derivatization reaction, such as the concentration of adenine, reaction temperature, and time, were studied. Under the optimized conditions, the proposed method provides high sensitivity, good linearity (r² = 0.999), and repeatability (relative standard deviations between 2.57% and 3.44%); the limits of detection and quantification were 0.36 and 1.20 ng/mL, respectively, and the recoveries obtained for water samples were in the range 93.39-97.39%. This method has been successfully applied to the analysis of real water samples. The method has also been successfully used for the analysis of vegetable oil samples after pretreatment with liquid-liquid extraction; the recoveries obtained by a spiking experiment with soybean oil ranged from 96.27% to 102.42%. In comparison with gas chromatography or gas chromatography-mass spectrometry, the proposed method offers the advantages of simple instrumental requirements, easy operation, low cost, and high efficiency, making this approach another good choice for the sensitive determination of 3-MCPD.
2013-12-19
[Report front-matter residue; recoverable entries: "3.3 An Approach for Evaluating System-of-Systems Operational Benefits of a..."; "delay of a flight under IMC"; Figure 15: "Sensitivity of delay of each of the four segments to..."; Figure 43: "Generic SoS node behaviors".]
The Cluster Sensitivity Index: A Basic Measure of Classification Robustness
ERIC Educational Resources Information Center
Hom, Willard C.
2010-01-01
Analysts of institutional performance have occasionally used a peer grouping approach in which they compared institutions only to other institutions with similar characteristics. Because analysts historically have used cluster analysis to define peer groups (i.e., the group of comparable institutions), the author proposes and demonstrates with…
New Method for Analysis of Multiple Anthelmintic Residues in Animal Tissue
USDA-ARS?s Scientific Manuscript database
For the first time, 39 of the major anthelmintics can be detected in one rapid and sensitive LC-MS/MS method, including the flukicides, which have been generally overlooked in surveillance programs. Utilizing the QuEChERS approach, residues were extracted from liver and milk using acetonitrile, sod...
USDA-ARS?s Scientific Manuscript database
Current morphometric methods that comprehensively measure shape cannot compare the disparate leaf shapes found in flowering plants and are sensitive to processing artifacts. Here we describe a persistent homology approach to measuring shape. Persistent homology is a topological method (concerned wit...
Strategies and Approaches to TPS Design
NASA Technical Reports Server (NTRS)
Kolodziej, Paul
2005-01-01
Thermal protection systems (TPS) insulate planetary probes and Earth re-entry vehicles from the aerothermal heating experienced during hypersonic deceleration to the planet's surface. The systems are typically designed with some additional capability to compensate both for variations in the TPS material and for uncertainties in the heating environment. This additional capability, or robustness, also provides a surge capability for operating under abnormally severe conditions for a short period of time, and for unexpected events, such as meteoroid impact damage, that would detract from the nominal performance. Strategies and approaches to developing robust designs must also minimize mass because an extra kilogram of TPS displaces one kilogram of payload. Because aircraft structures must be optimized for minimum mass, reliability-based design approaches for mechanical components exist that minimize mass. Adapting these existing approaches to TPS component design takes advantage of the extensive work, knowledge, and experience from nearly fifty years of reliability-based design of mechanical components. A Non-Dimensional Load Interference (NDLI) method for calculating the thermal reliability of TPS components is presented in this lecture and applied to several examples. A sensitivity analysis from an existing numerical simulation of a carbon phenolic TPS provides insight into the effects of the various design parameters, and is used to demonstrate how sensitivity analysis may be used with NDLI to develop reliability-based designs of TPS components.
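The load-interference idea (a component survives when capability exceeds load) can be sketched generically. The Python snippet below is a stand-in under assumed normal distributions with invented means and coefficients of variation; it is not the paper's NDLI formulation, whose specific non-dimensionalization is not given here.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

# Generic stress-strength (load-interference) reliability sketch: a TPS
# component fails when the thermal load exceeds the material capability.
n = 1_000_000
capability = rng.normal(1.0, 0.08, n)   # non-dimensional allowable, CV = 8%
load       = rng.normal(0.8, 0.10, n)   # non-dimensional heating load

reliability = np.mean(capability > load)
print(f"thermal reliability ~ {reliability:.5f}")

# For normal load and capability the same result has a closed form:
beta = (1.0 - 0.8) / np.hypot(0.08, 0.10)   # safety margin in sigmas
print(f"analytic ~ {norm.cdf(beta):.5f}")
```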
Wang, Chao; Ye, Min; Cheng, Liang; Li, Rui; Zhu, Wenwen; Shi, Zhen; Fan, Chunhai; He, Jinkang; Liu, Jian; Liu, Zhuang
2015-06-01
The development of sensitive and convenient methods for the detection, enrichment, and analysis of circulating tumor cells (CTCs), which serve as an important diagnostic indicator for the metastatic progression of cancer, has received tremendous attention in recent years. In this work, a new approach featuring simultaneous CTC capture and detection is developed by integrating a microfluidic silicon nanowire (SiNW) array with multifunctional magnetic upconversion nanoparticles (MUNPs). The MUNPs were conjugated with anti-EpCAM antibody and are thus capable of specifically recognizing tumor cells in blood samples and pulling them down under an external magnetic field. The capture efficiency of CTCs was further improved by the integration with a microfluidic SiNW array. Due to the autofluorescence-free nature of upconversion luminescence (UCL) imaging, our approach allows for highly sensitive detection of small numbers of tumor cells, which afterward can be collected for further analysis and re-culturing. We have further demonstrated that this approach can be applied to detect CTCs in clinical blood samples from lung cancer patients, and obtained consistent results by analyzing the UCL signals and the clinical outcomes of lung cancer metastasis. Therefore our approach represents a promising platform for CTC capture and detection with potential clinical utilization in cancer diagnosis and prognosis. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Rieger, Vanessa S.; Dietmüller, Simone; Ponater, Michael
2017-10-01
Different strengths and types of radiative forcings cause variations in climate sensitivities and efficacies. To relate these changes to their physical origin, this study tests whether a feedback analysis is a suitable approach. To this end, we apply the partial radiative perturbation method. Combining the forward and backward calculations turns out to be indispensable to ensure the additivity of feedbacks and to yield a closed forcing-feedback balance at the top of the atmosphere. For a set of CO2-forced simulations, the climate sensitivity changes with increasing forcing. The albedo, cloud, and combined water vapour and lapse-rate feedbacks are found to be responsible for the variations in the climate sensitivity. An O3-forced simulation (induced by enhanced NOx and CO surface emissions) yields a smaller efficacy than a CO2-forced simulation with a forcing of similar magnitude. We find that the Planck, albedo and most likely the cloud feedback are responsible for this effect. Reducing the radiative forcing impedes the statistical separability of the feedbacks. We additionally discuss formal inconsistencies between the common ways of comparing climate sensitivities and feedbacks. Moreover, methodical recommendations for future work are given.
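For orientation, the forcing-feedback bookkeeping that the partial radiative perturbation method quantifies is usually written as follows (textbook form with generic symbols, not necessarily the paper's notation):

```latex
% TOA radiative imbalance = forcing + sum of feedback terms
\Delta R = F + \sum_i \lambda_i \, \Delta T_s
% At the new equilibrium \Delta R = 0, so the climate sensitivity
% parameter is
S = \frac{\Delta T_s}{F} = -\frac{1}{\sum_i \lambda_i}
% and the efficacy of a forcing agent is its S relative to that of a
% CO2 forcing of equal magnitude.
```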
Spectral degree of polarization uniformity for polarization-sensitive OCT
NASA Astrophysics Data System (ADS)
Baumann, Bernhard; Zotter, Stefan; Pircher, Michael; Götzinger, Erich; Rauscher, Sabine; Glösmann, Martin; Lammer, Jan; Schmidt-Erfurth, Ursula; Gröger, Marion; Hitzenberger, Christoph K.
2015-12-01
Depolarization of light can be measured by polarization-sensitive optical coherence tomography (PS-OCT) and has been used to improve tissue discrimination as well as segmentation of pigmented structures. Most approaches to depolarization assessment for PS-OCT, such as the degree of polarization uniformity (DOPU), rely on measuring the uniformity of polarization states using spatial evaluation kernels. In this article, we present a different approach that exploits the spectral dimension. We introduce the spectral DOPU for the pixelwise analysis of polarization state variations between sub-bands of the broadband light source spectrum. Alongside a comparison with conventional spatial and temporal DOPU algorithms, we demonstrate imaging in the healthy human retina, and apply the technique for contrasting hard exudates in diabetic retinopathy and investigating the pigment epithelium of the rat iris.
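For reference, the conventional DOPU is computed from intensity-normalized Stokes vector elements averaged over an evaluation kernel (the standard definition in the PS-OCT literature); the spectral variant proposed here replaces the spatial average with an average over sub-bands of the source spectrum:

```latex
% DOPU from intensity-normalized Stokes elements averaged over a kernel
% of N pixels (or, for the spectral DOPU, over N spectral sub-bands):
\mathrm{DOPU} = \sqrt{\overline{Q}^{\,2} + \overline{U}^{\,2} + \overline{V}^{\,2}},
\qquad
\overline{Q} = \frac{1}{N}\sum_{k=1}^{N} \frac{Q_k}{I_k},\quad
\overline{U} = \frac{1}{N}\sum_{k=1}^{N} \frac{U_k}{I_k},\quad
\overline{V} = \frac{1}{N}\sum_{k=1}^{N} \frac{V_k}{I_k}
% DOPU is 1 for a uniform polarization state and falls toward 0 with
% increasing depolarization.
```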
Mapping and analysis of phosphorylation sites: a quick guide for cell biologists
Dephoure, Noah; Gould, Kathleen L.; Gygi, Steven P.; Kellogg, Douglas R.
2013-01-01
A mechanistic understanding of signaling networks requires identification and analysis of phosphorylation sites. Mass spectrometry offers a rapid and highly sensitive approach to mapping phosphorylation sites. However, mass spectrometry has significant limitations that must be considered when planning to carry out phosphorylation-site mapping. Here we provide an overview of key information that should be taken into consideration before beginning phosphorylation-site analysis, as well as a step-by-step guide for carrying out successful experiments. PMID:23447708
Integrated Systems Biology Approach for Ovarian Cancer Biomarker Discovery — EDRN Public Portal
The overall objective is to validate serum protein markers for early diagnosis of ovarian cancer with the ultimate goal being to develop a multiparametric panel consisting of 2-4 novel markers with 10 known markers for phase 3 analysis. In phase 1, we will screen for markers able to pass a threshold of 98% specificity and 30% sensitivity in a cohort of 300 women. Markers that pass phase 1 validation will be investigated in a phase 2 PRoBE cohort with a 98% specificity and 70% sensitivity cut-off. Finally, markers that pass phase 2 validation will be evaluated in EDRN CVC laboratory specimens with a cut-off of > 98% specificity and 90% sensitivity.
Optimizing sensitivity to γ with B0→DK+π-, D→KS0π+π- double Dalitz plot analysis
NASA Astrophysics Data System (ADS)
Craik, D.; Gershon, T.; Poluektov, A.
2018-03-01
Two of the most powerful methods currently used to determine the angle γ of the CKM Unitarity Triangle exploit B+→DK+, D→KS0π+π- decays and B0→DK+π-, D→K+K-, π+π- decays. It is possible to combine the strengths of both approaches in a "double Dalitz plot" analysis of B0→DK+π-, D→KS0π+π- decays. The potential sensitivity of such an analysis is investigated in the light of recently published experimental information on the B0→DK+π- decay. The formalism is also expanded, compared to previous discussions in the literature, to allow B0→DK+π- with any subsequent D decay to be included.
Zhao, Liping; Zhang, Zefeng; Kolm, Paul; Jasper, Susan; Lewis, Cheryl; Klein, Allan; Weintraub, William
2008-02-01
The ACUTE II study demonstrated that transesophageal echocardiographically guided cardioversion with enoxaparin in patients with atrial fibrillation was associated with a shorter initial hospital stay, more normal sinus rhythm at 5 weeks, and no significant differences in stroke, bleeding, or death compared with unfractionated heparin (UFH). The present study evaluated resource use and costs for enoxaparin (n=76) and UFH (n=79) during 5-week follow-up. Resources included initial and subsequent hospitalizations, study drugs, outpatient services, and emergency room visits. Two costing approaches were employed for hospitalization costs. The first was based on the UB-92 formulation of the hospital bill and diagnosis-related groups. The second was based on UB-92 and imputation using multivariable linear regression. Costs for outpatient and emergency room visits were determined from the Medicare fee schedule. Sensitivity analysis was performed to assess the robustness of the results. A bootstrap resampling approach was used to obtain the confidence interval (CI) for the cost differences. Costs of initial and subsequent hospitalizations, outpatient procedures, and emergency room visits were lower in the enoxaparin group. Average total costs remained significantly lower for the enoxaparin group under both costing approaches ($5,800 vs $8,167, difference $2,367, 95% CI 855 to 4,388, for the first approach; $7,942 vs $10,076, difference $2,134, 95% CI 437 to 4,207, for the second approach). Sensitivity analysis showed that the cost differences between strategies are robust to variation in drug costs. In conclusion, the use of enoxaparin as a bridging therapy is a cost-saving strategy (similar clinical outcomes and lower costs) for atrial fibrillation.
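The bootstrap CI for a cost difference is straightforward to reproduce. The Python sketch below uses simulated lognormal costs with invented parameters (cost data are typically right-skewed), not the ACUTE II data, and applies a simple percentile bootstrap.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical per-patient 5-week total costs (right-skewed, as cost data are).
cost_enox = rng.lognormal(mean=8.4, sigma=0.6, size=76)   # enoxaparin arm
cost_ufh  = rng.lognormal(mean=8.8, sigma=0.6, size=79)   # UFH arm

def boot_ci(a, b, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap CI for the difference in mean costs (b - a)."""
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        diffs[i] = (rng.choice(b, b.size).mean()
                    - rng.choice(a, a.size).mean())
    lo, hi = np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return b.mean() - a.mean(), (lo, hi)

diff, ci = boot_ci(cost_enox, cost_ufh)
print(f"mean cost difference (UFH - enoxaparin): {diff:,.0f}, 95% CI {ci}")
```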
A Non-Stationary Approach for Estimating Future Hydroclimatic Extremes Using Monte-Carlo Simulation
NASA Astrophysics Data System (ADS)
Byun, K.; Hamlet, A. F.
2017-12-01
There is substantial evidence that observed hydrologic extremes (e.g. floods, extreme stormwater events, and low flows) are changing and that climate change will continue to alter the probability distributions of hydrologic extremes over time. These non-stationary risks imply that conventional approaches for designing hydrologic infrastructure (or making other climate-sensitive decisions) based on retrospective analysis and stationary statistics will become increasingly problematic through time. To develop a framework for assessing risks in a non-stationary environment our study develops a new approach using a super ensemble of simulated hydrologic extremes based on Monte Carlo (MC) methods. Specifically, using statistically downscaled future GCM projections from the CMIP5 archive (using the Hybrid Delta (HD) method), we extract daily precipitation (P) and temperature (T) at 1/16 degree resolution based on a group of moving 30-yr windows within a given design lifespan (e.g. 10, 25, 50-yr). Using these T and P scenarios we simulate daily streamflow using the Variable Infiltration Capacity (VIC) model for each year of the design lifespan and fit a Generalized Extreme Value (GEV) probability distribution to the simulated annual extremes. MC experiments are then used to construct a random series of 10,000 realizations of the design lifespan, estimating annual extremes using the estimated unique GEV parameters for each individual year of the design lifespan. Our preliminary results for two watersheds in Midwest show that there are considerable differences in the extreme values for a given percentile between conventional MC and non-stationary MC approach. Design standards based on our non-stationary approach are also directly dependent on the design lifespan of infrastructure, a sensitivity which is notably absent from conventional approaches based on retrospective analysis. The experimental approach can be applied to a wide range of hydroclimatic variables of interest.
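The super-ensemble construction can be sketched directly with scipy's GEV distribution. All parameter values below are invented (not the downscaled CMIP5 results): each year of the design lifespan has its own GEV, each realization draws one annual maximum per year, and lifespan maxima are tallied across realizations.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(7)

# Hypothetical non-stationary GEV parameters for each year of a 50-year
# design lifespan (location drifting upward under climate change).
years = 50
loc   = np.linspace(100.0, 130.0, years)   # mm/day, illustrative trend
scale = np.linspace(20.0, 24.0, years)
shape = -0.1 * np.ones(years)              # scipy's c = -xi convention

# Super ensemble: each realization draws one annual maximum per year from
# that year's own GEV, then takes the lifespan maximum.
n_real = 10_000
ann_max = genextreme.rvs(shape, loc=loc, scale=scale,
                         size=(n_real, years), random_state=rng)
lifespan_max = ann_max.max(axis=1)
print("non-stationary 99th percentile:", np.percentile(lifespan_max, 99))

# Conventional (stationary) comparison using year-1 parameters only:
stat_max = genextreme.rvs(shape[0], loc=loc[0], scale=scale[0],
                          size=(n_real, years), random_state=rng).max(axis=1)
print("stationary 99th percentile:   ", np.percentile(stat_max, 99))
```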
New technologies for advanced three-dimensional optimum shape design in aeronautics
NASA Astrophysics Data System (ADS)
Dervieux, Alain; Lanteri, Stéphane; Malé, Jean-Michel; Marco, Nathalie; Rostaing-Schmidt, Nicole; Stoufflet, Bruno
1999-05-01
The analysis of complex flows around realistic aircraft geometries is becoming more and more predictive. To obtain this result, the complexity of flow analysis codes has been constantly increasing, involving more refined fluid models and sophisticated numerical methods. These codes can only run on top computers, exhausting their memory and CPU capabilities. It is, therefore, difficult to introduce the best analysis codes into a shape optimization loop: most previous work in the optimum shape design field used only simplified analysis codes. Moreover, as the most popular optimization methods are gradient-based, the more complex the flow solver, the more difficult it is to compute the sensitivity code. However, emerging technologies are making such an ambitious project, of including a state-of-the-art flow analysis code in an optimization loop, feasible. Among those technologies, there are three important issues that this paper addresses: shape parametrization, automated differentiation and parallel computing. Shape parametrization allows faster optimization by reducing the number of design variables; in this work, it relies on a hierarchical multilevel approach. The sensitivity code can be obtained using automated differentiation. The automated approach is based on software manipulation tools, which allow the differentiation to be quick and the resulting differentiated code to be rather fast and reliable. In addition, the parallel algorithms implemented in this work allow the resulting optimization software to run on increasingly larger geometries.
Devpura, Suneetha; Pattamadilok, Bensachee; Syed, Zain U; Vemulapalli, Pranita; Henderson, Marsha; Rehse, Steven J; Hamzavi, Iltefat; Lim, Henry W; Naik, Ratna
2011-06-01
Quantification of skin changes due to acanthosis nigricans (AN), a disorder common among insulin-resistant diabetic and obese individuals, was investigated using two optical techniques: diffuse reflectance spectroscopy (DRS) and colorimetry. Measurements were obtained from AN lesions on the neck and two control sites of eight AN patients. A principal component/discriminant function analysis successfully differentiated between AN lesions and normal skin with 87.7% sensitivity and 94.8% specificity in DRS measurements, and 97.2% sensitivity and 96.4% specificity in colorimetry measurements.
Laser speckle tracking for monitoring and analysis of retinal photocoagulation
NASA Astrophysics Data System (ADS)
Seifert, Eric; Bliedtner, Katharina; Brinkmann, Ralf
2014-02-01
Laser coagulation of the retina is an established treatment for several retinal diseases. The absorbed laser energy, and thus the induced thermal damage, varies with the transmittance and scattering properties of the anterior eye media and with the pigmentation of the fundus. The temperature plays the most important role in the coagulation process. An established approach to measuring the mean retinal temperature rise is optoacoustics; however, it provides limited information on the coagulation. Phase-sensitive OCT potentially offers a three-dimensional, temporally resolved temperature distribution but is very sensitive to slightest movements, which are clinically hard to avoid. We develop an optical technique able to monitor and quantify thermally and coagulation-induced tissue movements (expansions and contractions) and changes in the tissue structure by dynamic laser speckle analysis (LSA), offering a 2D map of the affected area. A frequency-doubled Nd:YAG laser (532 nm) is used for photocoagulation. Enucleated porcine eyes are used as targets. The spot diameter is 100 μm. A Helium-Neon (HeNe) laser is used for illumination. The backscattered light of the HeNe laser is captured with a camera and the speckle pattern is analyzed. A Q-switched Nd:YLF laser is used for simultaneous temperature measurements with the optoacoustic approach. Radial tissue movements in the micrometer regime have been observed. Signal evaluation by optical flow algorithms and generalized differences turned out to be able to distinguish between regions with and without immediate cell damage. Both approaches have shown a sensitivity of 93% and a specificity above 99% at their optimal thresholds.
A special protection scheme utilizing trajectory sensitivity analysis in power transmission
NASA Astrophysics Data System (ADS)
Suriyamongkol, Dan
In recent years, new measurement techniques have provided opportunities to improve the observability, control and protection of the North American power system. This dissertation discusses the formulation and design of a special protection scheme based on a novel utilization of trajectory sensitivity techniques, with inputs consisting of system state variables and parameters. Trajectory sensitivity analysis (TSA) has been used in previous publications as a method for power system security and stability assessment, and the mathematical formulation of TSA lends itself well to some of the time-domain power system simulation techniques. Existing special protection schemes often have limited sets of goals and control actions. The proposed scheme aims to maintain stability while using as many control actions as possible. The approach here uses TSA in a novel way, employing the sensitivities of system state variables with respect to state parameter variations to determine the state parameter controls required to achieve the desired state variable movements. The initial application operates under the assumption that the modeled power system has full system observability; practical considerations are also discussed.
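Trajectory sensitivities are obtained by integrating the variational (sensitivity) equations alongside the system dynamics. The Python sketch below does this for a single-machine-infinite-bus swing equation with respect to the mechanical power Pm; all parameter values are illustrative, and the model is a textbook stand-in rather than the dissertation's system.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Swing equation: M*ddelta'' = Pm - Pmax*sin(delta) - D*delta'
M, D, Pmax, Pm = 0.1, 0.05, 1.8, 1.0

def rhs(t, z):
    # z = [delta, omega, d(delta)/dPm, d(omega)/dPm]
    d, w, sd, sw = z
    f1 = w
    f2 = (Pm - Pmax * np.sin(d) - D * w) / M
    # variational equation: s' = J(t) s + df/dPm
    sd_dot = sw
    sw_dot = (-Pmax * np.cos(d) * sd - D * sw + 1.0) / M
    return [f1, f2, sd_dot, sw_dot]

d0 = np.arcsin(Pm / Pmax)               # pre-disturbance equilibrium angle
sol = solve_ivp(rhs, (0.0, 5.0), [d0 + 0.5, 0.0, 0.0, 0.0],
                max_step=0.01, rtol=1e-8)

# d(delta)/dPm along the trajectory: large values flag operating conditions
# whose stability is most affected by a change in that parameter.
print("peak angle sensitivity to Pm:", np.abs(sol.y[2]).max())
```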
Kim, David M.; Zhang, Hairong; Zhou, Haiying; Du, Tommy; Wu, Qian; Mockler, Todd C.; Berezin, Mikhail Y.
2015-01-01
The optical signature of leaves is an important monitoring and predictive parameter for a variety of biotic and abiotic stresses, including drought. Such signatures derived from spectroscopic measurements provide vegetation indices, a quantitative method for assessing plant health. However, the commonly used metrics suffer from low sensitivity. Relatively small changes in the water content of moderately stressed plants demand high-contrast imaging to distinguish affected plants. We present a new approach to deriving sensitive indices using hyperspectral imaging in the short-wave infrared range from 800 nm to 1600 nm. Our method, based on high-spectral-resolution (1.56 nm) instrumentation and image processing algorithms (quantitative histogram analysis), enables us to distinguish a moderate water stress equivalent of 20% relative water content (RWC). The identified image-derived indices 15XX nm/14XX nm (i.e. 1529 nm/1416 nm) were superior to common vegetation indices, such as WBI, MSI, and NDWI, with significantly better sensitivity, enabling early diagnostics of plant health. PMID:26531782
A Quantitative Approach to Scar Analysis
Khorasani, Hooman; Zheng, Zhong; Nguyen, Calvin; Zara, Janette; Zhang, Xinli; Wang, Joyce; Ting, Kang; Soo, Chia
2011-01-01
Analysis of collagen architecture is essential to wound healing research. However, to date no consistent methodologies exist for quantitatively assessing dermal collagen architecture in scars. In this study, we developed a standardized approach for quantitative analysis of scar collagen morphology by confocal microscopy using fractal dimension and lacunarity analysis. Full-thickness wounds were created on adult mice, closed by primary intention, and harvested at 14 days after wounding for morphometrics and standard Fourier transform-based scar analysis as well as fractal dimension and lacunarity analysis. In addition, transmission electron microscopy was used to evaluate collagen ultrastructure. We demonstrated that fractal dimension and lacunarity analysis were superior to Fourier transform analysis in discriminating scar versus unwounded tissue in a wild-type mouse model. To fully test the robustness of this scar analysis approach, a fibromodulin-null mouse model that heals with increased scar was also used. Fractal dimension and lacunarity analysis effectively discriminated unwounded fibromodulin-null versus wild-type skin as well as healing fibromodulin-null versus wild-type wounds, whereas Fourier transform analysis failed to do so. Furthermore, fractal dimension and lacunarity data also correlated well with transmission electron microscopy collagen ultrastructure analysis, adding to their validity. These results demonstrate that fractal dimension and lacunarity are more sensitive than Fourier transform analysis for quantification of scar morphology. PMID:21281794
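Fractal dimension by box counting is compact to implement. The Python sketch below is a generic illustration (binary image, power-of-two cropping), not the authors' confocal pipeline; lacunarity, which measures gappiness via the variability of box mass, is omitted for brevity.

```python
import numpy as np

def box_counting_dimension(img, threshold=0.5):
    """Estimate the fractal dimension of a 2-D pattern by box counting:
    count occupied boxes N(s) at box sizes s, then fit log N ~ -D log s."""
    binary = img > threshold
    n = min(binary.shape)
    n = 2 ** int(np.log2(n))                 # crop to a power-of-two square
    binary = binary[:n, :n]
    sizes = 2 ** np.arange(1, int(np.log2(n)))
    counts = []
    for s in sizes:
        # tile the image into s-by-s blocks and count the non-empty ones
        blocks = binary.reshape(n // s, s, n // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    D, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -D

# Quick sanity check on a filled square (expected dimension ~ 2):
img = np.ones((256, 256))
print(box_counting_dimension(img))
```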
Computational simulation and aerodynamic sensitivity analysis of film-cooled turbines
NASA Astrophysics Data System (ADS)
Massa, Luca
A computational tool is developed for the time accurate sensitivity analysis of the stage performance of hot gas, unsteady turbine components. An existing turbomachinery internal flow solver is adapted to the high temperature environment typical of the hot section of jet engines. A real gas model and film cooling capabilities are successfully incorporated in the software. The modifications to the existing algorithm are described; both the theoretical model and the numerical implementation are validated. The accuracy of the code in evaluating turbine stage performance is tested using a turbine geometry typical of the last stage of aeronautical jet engines. The results of the performance analysis show that the predictions differ from the experimental data by less than 3%. A reliable grid generator, applicable to the domain discretization of the internal flow field of axial flow turbines, is developed. A sensitivity analysis capability is added to the flow solver, by rendering it able to accurately evaluate the derivatives of the time varying output functions. The complex Taylor's series expansion (CTSE) technique is reviewed. Two existing formulations are used to demonstrate the accuracy and time dependency of the differentiation process. The results are compared with finite difference (FD) approximations. The CTSE is more accurate than the FD, but less efficient. A "black box" differentiation of the source code, resulting from the automated application of the CTSE, generates high fidelity sensitivity algorithms, but with low computational efficiency and high memory requirements. New formulations of the CTSE are proposed and applied. Selective differentiation of the method for solving the non-linear implicit residual equation leads to sensitivity algorithms with the same accuracy but improved run time. The time dependent sensitivity derivatives are computed in run times comparable to the ones required by the FD approach.
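The complex Taylor series expansion evaluates the function with a small imaginary perturbation, so the first derivative is recovered from the imaginary part without the subtractive cancellation that limits finite differences. A minimal Python sketch on a toy function (not the turbomachinery solver) illustrates the idea:

```python
import numpy as np

def f(x):
    # Toy stand-in for an output functional of a flow solver
    return np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)

def complex_step(f, x, h=1e-30):
    """Complex-step (CTSE) derivative: f'(x) ~ Im(f(x + i*h)) / h, no cancellation error."""
    return np.imag(f(x + 1j * h)) / h

def finite_diff(f, x, h=1e-6):
    """Forward finite difference, limited by subtractive cancellation."""
    return (f(x + h) - f(x)) / h

x0 = 1.5
print("complex step :", complex_step(f, x0))
print("finite diff  :", finite_diff(f, x0))
```

Because the step can be made extremely small (here 1e-30) without loss of precision, the complex-step value matches the exact derivative to machine precision, consistent with the accuracy claim in the abstract.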
Pereira, Rui P A; Peplies, Jörg; Brettar, Ingrid; Höfle, Manfred G
2017-03-31
Next Generation Sequencing (NGS) has revolutionized the analysis of natural and man-made microbial communities by using universal primers for bacteria in a PCR based approach targeting the 16S rRNA gene. In our study we narrowed primer specificity to a single, monophyletic genus because for many questions in microbiology only a specific part of the whole microbiome is of interest. We have chosen the genus Legionella, comprising more than 20 pathogenic species, due to its high relevance for water-based respiratory infections. A new NGS-based approach was designed by sequencing 16S rRNA gene amplicons specific for the genus Legionella using the Illumina MiSeq technology. This approach was validated and applied to a set of representative freshwater samples. Our results revealed that the generated libraries presented a low average raw error rate per base (<0.5%) and substantiated the use of high-fidelity enzymes, such as KAPA HiFi, for increased sequence accuracy and quality. The approach also showed high in situ specificity (>95%) and very good repeatability. Only in samples in which the gammabacterial clade SAR86 was present were more than 1% non-Legionella sequences observed. Next-generation sequencing read counts did not reveal considerable amplification/sequencing biases and showed a sensitive as well as precise quantification of L. pneumophila along a dilution range using a spiked-in, certified genome standard. The genome standard and a mock community consisting of six different Legionella species demonstrated that the developed NGS approach was quantitative and specific at the level of individual species, including L. pneumophila. The sensitivity of our genus-specific approach was at least one order of magnitude higher compared to the universal NGS approach. Comparison of quantification by real-time PCR showed consistency with the NGS data. Overall, our NGS approach can determine the quantitative abundances of Legionella species, i.e. the complete Legionella microbiome, without the need for species-specific primers. The developed NGS approach provides a new molecular surveillance tool to monitor all Legionella species in qualitative and quantitative terms if a spiked-in genome standard is used to calibrate the method. Overall, the genus-specific NGS approach opens up a new avenue to massive parallel diagnostics in a quantitative, specific and sensitive way.
Sensitivity analysis of Repast computational ecology models with R/Repast.
Prestes García, Antonio; Rodríguez-Patón, Alfonso
2016-12-01
Computational ecology is an emerging interdisciplinary discipline founded mainly on modeling and simulation methods for studying ecological systems. Among the existing modeling formalisms, individual-based modeling is particularly well suited for capturing the complex temporal and spatial dynamics, as well as the nonlinearities arising in ecosystems, communities, or populations due to individual variability. In addition, being a bottom-up approach, it is useful for providing new insights into the local mechanisms that generate some observed global dynamics. Of course, no conclusions about model results can be taken seriously if they are based on a single model execution and are not analyzed carefully. Therefore, a sound methodology should always be used for underpinning the interpretation of model results. Sensitivity analysis is a methodology for quantitatively assessing the effect of input uncertainty on the simulation output, and it should be incorporated into every work based on an in-silico experimental setup. In this article, we present R/Repast, a GNU R package for running and analyzing Repast Simphony models, accompanied by two worked examples on how to perform global sensitivity analysis and how to interpret the results.
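The R/Repast package itself exposes its own R interface, which is not reproduced here. As a language-neutral illustration of the underlying idea, the following Python sketch performs a regression-based global sensitivity analysis (standardized regression coefficients over Monte Carlo samples) on an invented toy model; all parameter names and ranges are assumptions for the example.

```python
import numpy as np

def toy_model(params):
    """Stand-in for an individual-based simulation output, e.g. final population size."""
    birth, death, capacity = params
    return 100.0 * birth / (death + 0.01) + 0.05 * capacity

rng = np.random.default_rng(0)
n = 1000
# Sample the three uncertain inputs uniformly over their assumed ranges
X = np.column_stack([
    rng.uniform(0.1, 0.5, n),     # birth rate
    rng.uniform(0.05, 0.3, n),    # death rate
    rng.uniform(500, 2000, n),    # carrying capacity
])
y = np.array([toy_model(x) for x in X])

# Standardized regression coefficients (SRC) as a global sensitivity measure
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
for name, s in zip(["birth", "death", "capacity"], src):
    print(f"{name:8s} SRC = {s:+.3f}")
```

The magnitude of each standardized coefficient ranks how strongly the corresponding input drives output variability, which is the kind of summary a global sensitivity analysis of a simulation model produces.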
Ji, Xiaoyu; Liu, Xiaoqiang; Peng, Yuanxia; Zhan, Ruoting; Xu, Hui; Ge, Xijin
2017-12-09
Emodin has strong antibacterial activity, including against methicillin-resistant Staphylococcus aureus (MRSA). However, the mechanism by which emodin induces growth inhibition in MRSA remains unclear. In this study, the isobaric tags for relative and absolute quantitation (iTRAQ) proteomics approach was used to investigate the modes of action of emodin on an MRSA isolate and methicillin-sensitive S. aureus ATCC29213 (MSSA). Proteomic analysis showed that the expression levels of 145 and 122 proteins were changed significantly in MRSA and MSSA, respectively, after emodin treatment. Comparative analysis of the functions of differentially expressed proteins between the two strains was performed with the bioinformatics tool Blast2GO and the STRING database. Proteins related to pyruvate pathway imbalance induction, protein synthesis inhibition, and DNA synthesis suppression were found in both methicillin-sensitive and methicillin-resistant strains. Moreover, interference with proteins related to a membrane damage mechanism was also observed in MRSA. Our findings indicate that emodin is a potential antibacterial agent targeting MRSA via multiple mechanisms. Copyright © 2017 Elsevier Inc. All rights reserved.
Digital data processing system dynamic loading analysis
NASA Technical Reports Server (NTRS)
Lagas, J. J.; Peterka, J. J.; Tucker, A. E.
1976-01-01
Simulation and analysis of the Space Shuttle Orbiter Digital Data Processing System (DDPS) are reported. The mated flight and postseparation flight phases of the space shuttle's approach and landing test (ALT) configuration were modeled utilizing the Information Management System Interpretative Model (IMSIM) in a computerized simulation of the ALT hardware, software, and workload. System requirements simulated for the ALT configuration were defined. Sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and the sensitivity analyses, a test design is described for adapting, parameterizing, and executing the IMSIM. Varying load and stress conditions for the model execution are given. The analyses of the computer simulation runs were documented as results, conclusions, and recommendations for DDPS improvements.
Automatic differentiation as a tool in engineering design
NASA Technical Reports Server (NTRS)
Barthelemy, Jean-Francois; Hall, Laura E.
1992-01-01
Automatic Differentiation (AD) is a tool that systematically implements the chain rule of differentiation to obtain the derivatives of functions calculated by computer programs. AD is assessed as a tool for engineering design. The forward and reverse modes of AD, their computing requirements, as well as approaches to implementing AD are discussed. The application of two different tools to two medium-size structural analysis problems to generate sensitivity information typically necessary in an optimization or design situation is also discussed. The observation is made that AD is to be preferred to finite differencing in most cases, as long as sufficient computer storage is available; in some instances, AD may be the alternative to consider in lieu of analytical sensitivity analysis.
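The forward mode mentioned above can be illustrated with dual numbers, where each value carries its derivative and the chain rule is applied operation by operation. The sketch below is a minimal Python illustration on a toy function, not the structural analysis tools discussed in the report.

```python
import math

class Dual:
    """Minimal dual number: a value plus its derivative, propagated by the chain rule."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        o = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, other):
        o = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def dsin(x):
    # Elementary function together with its derivative rule
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

def f(x):
    # Toy response function: x*sin(x) + 3x
    return x * dsin(x) + 3 * x

x = Dual(2.0, 1.0)    # seed the derivative dx/dx = 1
y = f(x)
print(y.val, y.der)   # function value and exact forward-mode derivative
```

Reverse mode records the same elementary operations but propagates adjoints backwards, which is why its cost is roughly independent of the number of inputs; the storage requirement noted in the abstract comes from retaining that recorded tape.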
A sub-sampled approach to extremely low-dose STEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, A.; Luzi, L.; Yang, H.
The inpainting of randomly sub-sampled images acquired by scanning transmission electron microscopy (STEM) is an attractive method for imaging under low-dose conditions (≤ 1 e⁻/Å²) without changing either the operation of the microscope or the physics of the imaging process. We show that 1) adaptive sub-sampling increases acquisition speed, resolution, and sensitivity; and 2) random (non-adaptive) sub-sampling is equivalent to, but faster than, traditional low-dose techniques. Adaptive sub-sampling opens numerous possibilities for the analysis of beam-sensitive materials and in-situ dynamic processes at the resolution limit of the aberration corrected microscope and is demonstrated here for the analysis of the node distribution in metal-organic frameworks (MOFs).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chinthavali, Madhu Sudhan; Wang, Zhiqiang
This paper presents a detailed parametric sensitivity analysis for a wireless power transfer (WPT) system in electric vehicle application. Specifically, several key parameters for sensitivity analysis of a series-parallel (SP) WPT system are derived first based on an analytical modeling approach, which includes the equivalent input impedance, active/reactive power, and DC voltage gain. Based on the derivation, the impact of primary side compensation capacitance, coupling coefficient, transformer leakage inductance, and different load conditions on the DC voltage gain curve and power curve are studied and analyzed. It is shown that the desired power can be achieved by just changing frequency or voltage depending on the design value of the coupling coefficient. However, in some cases both have to be modified in order to achieve the required power transfer.
Co-acting gene networks predict TRAIL responsiveness of tumour cells with high accuracy.
O'Reilly, Paul; Ortutay, Csaba; Gernon, Grainne; O'Connell, Enda; Seoighe, Cathal; Boyce, Susan; Serrano, Luis; Szegezdi, Eva
2014-12-19
Identification of differentially expressed genes from transcriptomic studies is one of the most common mechanisms to identify tumor biomarkers. This approach however is not well suited to identify interaction between genes whose protein products potentially influence each other, which limits its power to identify molecular wiring of tumour cells dictating response to a drug. Due to the fact that signal transduction pathways are not linear and highly interlinked, the biological response they drive may be better described by the relative amount of their components and their functional relationships than by their individual, absolute expression. Gene expression microarray data for 109 tumor cell lines with known sensitivity to the death ligand cytokine tumor necrosis factor-related apoptosis-inducing ligand (TRAIL) was used to identify genes with potential functional relationships determining responsiveness to TRAIL-induced apoptosis. The machine learning technique Random Forest in the statistical environment "R" with backward elimination was used to identify the key predictors of TRAIL sensitivity and differentially expressed genes were identified using the software GeneSpring. Gene co-regulation and statistical interaction was assessed with q-order partial correlation analysis and non-rejection rate. Biological (functional) interactions amongst the co-acting genes were studied with Ingenuity network analysis. Prediction accuracy was assessed by calculating the area under the receiver operator curve using an independent dataset. We show that the gene panel identified could predict TRAIL-sensitivity with a very high degree of sensitivity and specificity (AUC=0·84). The genes in the panel are co-regulated and at least 40% of them functionally interact in signal transduction pathways that regulate cell death and cell survival, cellular differentiation and morphogenesis. Importantly, only 12% of the TRAIL-predictor genes were differentially expressed highlighting the importance of functional interactions in predicting the biological response. The advantage of co-acting gene clusters is that this analysis does not depend on differential expression and is able to incorporate direct- and indirect gene interactions as well as tissue- and cell-specific characteristics. This approach (1) identified a descriptor of TRAIL sensitivity which performs significantly better as a predictor of TRAIL sensitivity than any previously reported gene signatures, (2) identified potential novel regulators of TRAIL-responsiveness and (3) provided a systematic view highlighting fundamental differences between the molecular wiring of sensitive and resistant cell types.
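The core prediction step described above (a Random Forest classifier evaluated by the area under the ROC curve, with a crude feature-elimination pass) can be sketched with scikit-learn. The data below are synthetic placeholders, not the NCI/TRAIL expression matrix, and the panel size is illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Synthetic "expression matrix": 109 cell lines x 200 genes, binary TRAIL-sensitivity label
X = rng.normal(size=(109, 200))
y = (X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=109) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("AUC (all genes):", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

# Crude backward-elimination step: keep the most important genes and refit
keep = np.argsort(clf.feature_importances_)[-20:]
clf2 = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr[:, keep], y_tr)
print("AUC (20-gene panel):", roc_auc_score(y_te, clf2.predict_proba(X_te[:, keep])[:, 1]))
```

The study's additional steps (partial correlation, non-rejection rate, Ingenuity network analysis) operate on the selected genes and are not reproduced here.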
Nursing-sensitive indicators: a concept analysis
Heslop, Liza; Lu, Sai
2014-01-01
Aim: To report a concept analysis of nursing-sensitive indicators within the applied context of the acute care setting. Background: The concept of ‘nursing-sensitive indicators’ is valuable for elaborating nursing care performance. The conceptual foundation, theoretical role, meaning, use and interpretation of the concept tend to differ. The elusiveness of the concept and the ambiguity of its attributes may have hindered research efforts to advance its application in practice. Design: Concept analysis. Data sources: Using ‘clinical indicators’ or ‘quality of nursing care’ as subject headings and incorporating keyword combinations of ‘acute care’ and ‘nurs*’, CINAHL and MEDLINE with full text in EBSCOhost databases were searched for English language journal articles published between 2000 and 2012. Only primary research articles were selected. Methods: A hybrid approach was undertaken, incorporating traditional strategies as per Walker and Avant and a conceptual matrix based on Holzemer's Outcomes Model for Health Care Research. Results: The analysis revealed two main attributes of nursing-sensitive indicators. Structural attributes related to health service operation included: hours of nursing care per patient day, nurse staffing. Outcome attributes related to patient care included: the prevalence of pressure ulcer, falls and falls with injury, nosocomial selective infection and patient/family satisfaction with nursing care. Conclusion: This concept analysis may be used as a basis to advance understandings of the theoretical structures that underpin both research and practical application of quality dimensions of nursing care performance. PMID:25113388
Deciphering the Epigenetic Code: An Overview of DNA Methylation Analysis Methods
Umer, Muhammad
2013-01-01
Abstract Significance: Methylation of cytosine in DNA is linked with gene regulation, and this has profound implications in development, normal biology, and disease conditions in many eukaryotic organisms. A wide range of methods and approaches exist for its identification, quantification, and mapping within the genome. While the earliest approaches were nonspecific and were at best useful for quantification of total methylated cytosines in the chunk of DNA, this field has seen considerable progress and development over the past decades. Recent Advances: Methods for DNA methylation analysis differ in their coverage and sensitivity, and the method of choice depends on the intended application and desired level of information. Potential results include global methyl cytosine content, degree of methylation at specific loci, or genome-wide methylation maps. Introduction of more advanced approaches to DNA methylation analysis, such as microarray platforms and massively parallel sequencing, has brought us closer to unveiling the whole methylome. Critical Issues: Sensitive quantification of DNA methylation from degraded and minute quantities of DNA and high-throughput DNA methylation mapping of single cells still remain a challenge. Future Directions: Developments in DNA sequencing technologies as well as the methods for identification and mapping of 5-hydroxymethylcytosine are expected to augment our current understanding of epigenomics. Here we present an overview of methodologies available for DNA methylation analysis with special focus on recent developments in genome-wide and high-throughput methods. While the application focus relates to cancer research, the methods are equally relevant to broader issues of epigenetics and redox science in this special forum. Antioxid. Redox Signal. 18, 1972–1986. PMID:23121567
Burnum-Johnson, Kristin E.; Nie, Song; Casey, Cameron P.; Monroe, Matthew E.; Orton, Daniel J.; Ibrahim, Yehia M.; Gritsenko, Marina A.; Clauss, Therese R. W.; Shukla, Anil K.; Moore, Ronald J.; Purvine, Samuel O.; Shi, Tujin; Qian, Weijun; Liu, Tao; Baker, Erin S.; Smith, Richard D.
2016-01-01
Current proteomic approaches include both broad discovery measurements and quantitative targeted analyses. In many cases, discovery measurements are initially used to identify potentially important proteins (e.g. candidate biomarkers) and then targeted studies are employed to quantify a limited number of selected proteins. Both approaches, however, suffer from limitations. Discovery measurements aim to sample the whole proteome but have lower sensitivity, accuracy, and quantitation precision than targeted approaches, whereas targeted measurements are significantly more sensitive but only sample a limited portion of the proteome. Herein, we describe a new approach that performs both discovery and targeted monitoring (DTM) in a single analysis by combining liquid chromatography, ion mobility spectrometry and mass spectrometry (LC-IMS-MS). In DTM, heavy labeled target peptides are spiked into tryptic digests and both the labeled and unlabeled peptides are detected using LC-IMS-MS instrumentation. Compared with the broad LC-MS discovery measurements, DTM yields greater peptide/protein coverage and detects lower abundance species. DTM also achieved detection limits similar to selected reaction monitoring (SRM) indicating its potential for combined high quality discovery and targeted analyses, which is a significant step toward the convergence of discovery and targeted approaches. PMID:27670688
Analysis of airfoil leading edge separation bubbles
NASA Technical Reports Server (NTRS)
Carter, J. E.; Vatsa, V. N.
1982-01-01
A local inviscid-viscous interaction technique was developed for the analysis of low speed airfoil leading edge transitional separation bubbles. In this analysis an inverse boundary layer finite difference analysis is solved iteratively with a Cauchy integral representation of the inviscid flow which is assumed to be a linear perturbation to a known global viscous airfoil analysis. Favorable comparisons with data indicate the overall validity of the present localized interaction approach. In addition numerical tests were performed to test the sensitivity of the computed results to the mesh size, limits on the Cauchy integral, and the location of the transition region.
Williams, Claire; Lewsey, James D; Briggs, Andrew H; Mackay, Daniel F
2017-05-01
This tutorial provides a step-by-step guide to performing cost-effectiveness analysis using a multi-state modeling approach. Alongside the tutorial, we provide easy-to-use functions in the statistics package R. We argue that this multi-state modeling approach using a package such as R has advantages over approaches where models are built in a spreadsheet package. In particular, using a syntax-based approach means there is a written record of what was done and the calculations are transparent. Reproducing the analysis is straightforward as the syntax just needs to be run again. The approach can be thought of as an alternative way to build a Markov decision-analytic model, which also has the option to use a state-arrival extended approach. In the state-arrival extended multi-state model, a covariate that represents patients' history is included, allowing the Markov property to be tested. We illustrate the building of multi-state survival models, making predictions from the models and assessing fits. We then proceed to perform a cost-effectiveness analysis, including deterministic and probabilistic sensitivity analyses. Finally, we show how to create two common visualizations of the results, namely cost-effectiveness planes and cost-effectiveness acceptability curves. The analysis is implemented entirely within R. It is based on adaptations to functions in the existing R package mstate to accommodate parametric multi-state modeling that facilitates extrapolation of survival curves.
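The tutorial itself works in R with the mstate package; as a language-neutral illustration of the same building blocks, the Python sketch below runs a three-state Markov cohort model with a probabilistic sensitivity analysis and reports an incremental cost-effectiveness ratio. All transition probabilities, costs, and utilities are invented for the example.

```python
import numpy as np

def run_markov(p_prog, p_die, cost_state, utility, n_cycles=20):
    """Three-state cohort model (stable -> progressed -> dead). Returns (cost, QALYs)."""
    P = np.array([[1 - p_prog - p_die, p_prog, p_die],
                  [0.0, 1 - 2 * p_die, 2 * p_die],
                  [0.0, 0.0, 1.0]])
    state = np.array([1.0, 0.0, 0.0])
    cost = qaly = 0.0
    for _ in range(n_cycles):
        state = state @ P
        cost += state @ cost_state
        qaly += state @ utility
    return cost, qaly

rng = np.random.default_rng(0)
results = []
for _ in range(1000):                      # probabilistic sensitivity analysis
    p_prog = rng.beta(20, 80)              # uncertain transition probabilities
    p_die = rng.beta(5, 95)
    c_std = run_markov(p_prog, p_die, np.array([1000, 5000, 0]), np.array([0.80, 0.5, 0.0]))
    c_new = run_markov(0.7 * p_prog, p_die, np.array([3000, 5000, 0]), np.array([0.85, 0.5, 0.0]))
    results.append((c_new[0] - c_std[0], c_new[1] - c_std[1]))

d_cost, d_qaly = np.mean(results, axis=0)
print("ICER =", d_cost / d_qaly, "per QALY")
```

The per-replicate (cost, QALY) differences are exactly what gets plotted on a cost-effectiveness plane, and thresholding them at different willingness-to-pay values yields the acceptability curve mentioned in the abstract.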
NASA Astrophysics Data System (ADS)
Hanoca, P.; Ramakrishna, H. V.
2018-03-01
This work develops a methodology to model and simulate TEHD using the sequential application of CFD and CSD. The FSI analyses are carried out using ANSYS Workbench. In this analysis, steady state, 3D Navier-Stokes equations along with the energy equation are solved. Liquid properties are introduced in which the viscosity and density are functions of pressure and temperature. The cavitation phenomenon is adopted in the analysis. Numerical analysis has been carried out at different speeds and surface temperatures. During the analysis, it was found that as speed increases, hydrodynamic pressures also increase. The pressure profile obtained from the Roelands equation is more sensitive to the temperature as compared to the Barus equation. The stress distributions specify the significant positions in the bearing structure. The developed method is capable of giving new insight into the physics of elastohydrodynamic lubrication.
A Benefit-Risk Analysis Approach to Capture Regulatory Decision-Making: Multiple Myeloma.
Raju, G K; Gurumurthi, Karthik; Domike, Reuben; Kazandjian, Dickran; Landgren, Ola; Blumenthal, Gideon M; Farrell, Ann; Pazdur, Richard; Woodcock, Janet
2018-01-01
Drug regulators around the world make decisions about drug approvability based on qualitative benefit-risk analysis. In this work, a quantitative benefit-risk analysis approach captures regulatory decision-making about new drugs to treat multiple myeloma (MM). MM assessments have been based on endpoints such as time to progression (TTP), progression-free survival (PFS), and objective response rate (ORR), which differ from a benefit-risk analysis based on overall survival (OS). Twenty-three FDA decisions on MM drugs submitted to FDA between 2003 and 2016 were identified and analyzed. The benefits and risks were quantified relative to comparators (typically the control arm of the clinical trial) to estimate whether the median benefit-risk was positive or negative. A sensitivity analysis was demonstrated using ixazomib to explore the magnitude of uncertainty. FDA approval decision outcomes were consistent and logical using this benefit-risk framework. © 2017 American Society for Clinical Pharmacology and Therapeutics.
Esfahlani, Farnaz Zamani; Sayama, Hiroki; Visser, Katherine Frost; Strauss, Gregory P
2017-12-01
Objective: The Positive and Negative Syndrome Scale is a primary outcome measure in clinical trials examining the efficacy of antipsychotic medications. Although the Positive and Negative Syndrome Scale has demonstrated sensitivity as a measure of treatment change in studies using traditional univariate statistical approaches, its sensitivity to detecting network-level changes in dynamic relationships among symptoms has yet to be demonstrated using more sophisticated multivariate analyses. In the current study, we examined the sensitivity of the Positive and Negative Syndrome Scale to detecting antipsychotic treatment effects as revealed through network analysis. Design: Participants included 1,049 individuals diagnosed with psychotic disorders from the Phase I portion of the Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE) study. Of these participants, 733 were clinically determined to be treatment-responsive and 316 were found to be treatment-resistant. Item level data from the Positive and Negative Syndrome Scale were submitted to network analysis, and macroscopic, mesoscopic, and microscopic network properties were evaluated for the treatment-responsive and treatment-resistant groups at baseline and post-phase I antipsychotic treatment. Results: Network analysis indicated that treatment-responsive patients had more densely connected symptom networks after antipsychotic treatment than did treatment-responsive patients at baseline, and that symptom centralities increased following treatment. In contrast, symptom networks of treatment-resistant patients behaved more randomly before and after treatment. Conclusions: These results suggest that the Positive and Negative Syndrome Scale is sensitive to detecting treatment effects as revealed through network analysis. Its findings also provide compelling new evidence that strongly interconnected symptom networks confer an overall greater probability of treatment responsiveness in patients with psychosis, suggesting that antipsychotics achieve their effect by enhancing a number of central symptoms, which then facilitate reduction of other highly coupled symptoms in a network-like fashion.
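The macroscopic quantities referred to above (network density, symptom centrality) are straightforward to compute once item-level data are turned into a graph. The sketch below is an illustrative Python/networkx example that thresholds an item correlation matrix into a network on synthetic data; it is a generic stand-in, not the network estimation procedure used in the study.

```python
import numpy as np
import networkx as nx

def symptom_network(item_scores, threshold=0.3):
    """Build a graph linking symptom items whose |correlation| exceeds threshold."""
    corr = np.corrcoef(item_scores, rowvar=False)
    g = nx.Graph()
    n_items = corr.shape[0]
    g.add_nodes_from(range(n_items))
    for i in range(n_items):
        for j in range(i + 1, n_items):
            if abs(corr[i, j]) >= threshold:
                g.add_edge(i, j, weight=abs(corr[i, j]))
    return g

rng = np.random.default_rng(0)
baseline = rng.normal(size=(733, 30))                 # 30 symptom items, synthetic scores
post = baseline + 0.5 * rng.normal(size=(733, 1))     # a shared factor produces a denser network

for label, data in [("baseline", baseline), ("post-treatment", post)]:
    g = symptom_network(data)
    cent = nx.degree_centrality(g)
    print(label, "density:", round(nx.density(g), 3),
          "mean centrality:", round(np.mean(list(cent.values())), 3))
```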
Using global sensitivity analysis of demographic models for ecological impact assessment.
Aiello-Lammens, Matthew E; Akçakaya, H Resit
2017-02-01
Population viability analysis (PVA) is widely used to assess population-level impacts of environmental changes on species. When combined with sensitivity analysis, PVA yields insights into the effects of parameter and model structure uncertainty. This helps researchers prioritize efforts for further data collection so that model improvements are efficient and helps managers prioritize conservation and management actions. Usually, sensitivity is analyzed by varying one input parameter at a time and observing the influence that variation has over model outcomes. This approach does not account for interactions among parameters. Global sensitivity analysis (GSA) overcomes this limitation by varying several model inputs simultaneously. Then, regression techniques allow measuring the importance of input-parameter uncertainties. In many conservation applications, the goal of demographic modeling is to assess how different scenarios of impact or management cause changes in a population. This is challenging because the uncertainty of input-parameter values can be confounded with the effect of impacts and management actions. We developed a GSA method that separates model outcome uncertainty resulting from parameter uncertainty from that resulting from projected ecological impacts or simulated management actions, effectively separating the 2 main questions that sensitivity analysis asks. We applied this method to assess the effects of predicted sea-level rise on Snowy Plover (Charadrius nivosus). A relatively small number of replicate models (approximately 100) resulted in consistent measures of variable importance when not trying to separate the effects of ecological impacts from parameter uncertainty. However, many more replicate models (approximately 500) were required to separate these effects. These differences are important to consider when using demographic models to estimate ecological impacts of management actions. © 2016 Society for Conservation Biology.
NASA Astrophysics Data System (ADS)
Judson, Richard S.; Rabitz, Herschel
1987-04-01
The relationship between structure in the potential surface and classical mechanical observables is examined by means of functional sensitivity analysis. Functional sensitivities provide maps of the potential surface, highlighting those regions that play the greatest role in determining the behavior of observables. A set of differential equations for the sensitivities of the trajectory components are derived. These are then solved using a Green's function method. It is found that the sensitivities become singular at the trajectory turning points with the singularities going as η^(-3/2), with η being the distance from the nearest turning point. The sensitivities are zero outside of the energetically and dynamically allowed region of phase space. A second set of equations is derived from which the sensitivities of observables can be directly calculated. An adjoint Green's function technique is employed, providing an efficient method for numerically calculating these quantities. Sensitivity maps are presented for a simple collinear atom-diatom inelastic scattering problem and for two Henon-Heiles type Hamiltonians modeling intramolecular processes. It is found that the positions of the trajectory caustics in the bound state problem determine regions of the highest potential surface sensitivities. In the scattering problem (which is impulsive, so that "sticky" collisions did not occur), the positions of the turning points of the individual trajectory components determine the regions of high sensitivity. In both cases, these lines of singularities are superimposed on a rich background structure. Most interesting is the appearance of classical interference effects. The interference features in the sensitivity maps occur most noticeably where two or more lines of turning points cross. The important practical motivation for calculating the sensitivities derives from the fact that the potential is a function, implying that any direct attempt to understand how local potential regions affect the behavior of the observables by repeatedly and systematically altering the potential will be prohibitively expensive. The functional sensitivity method enables one to perform this analysis at a fraction of the computational labor required for the direct method.
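The functional sensitivities above are infinite-dimensional and require the Green's-function machinery of the paper; a simpler, related idea that can be sketched directly is the finite-dimensional trajectory sensitivity dx/dθ obtained by integrating the variational equations alongside the trajectory. The Python example below does this for a toy harmonic oscillator with parameter k, which is an assumption for illustration and not one of the paper's Hamiltonians.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Harmonic oscillator x'' = -k x, augmented with sensitivities s = d(x, v)/dk.
def rhs(t, y, k):
    x, v, sx, sv = y
    return [v,            # dx/dt
            -k * x,       # dv/dt
            sv,           # d(sx)/dt  (variational equation)
            -k * sx - x]  # d(sv)/dt = d/dk(-k x) + (-k) * sx

k = 2.0
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0, 0.0], args=(k,))
x_T, _, sx_T, _ = sol.y[:, -1]
print("x(T) =", x_T, "  dx(T)/dk =", sx_T)

# Check against the analytic solution x(t) = cos(sqrt(k) t), dx/dk = -t sin(sqrt(k) t)/(2 sqrt(k))
T = sol.t[-1]
print("analytic dx/dk =", -T * np.sin(np.sqrt(k) * T) / (2 * np.sqrt(k)))
```

The singular growth of sensitivities near turning points discussed in the abstract shows up in such calculations as sensitivity components that grow sharply where the velocity passes through zero.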
Trigueros, José Antonio; Piñero, David P; Ismail, Mahmoud M
2016-01-01
AIM: To define the financial and management conditions required to introduce a femtosecond laser system for cataract surgery in a clinic using a fuzzy logic approach. METHODS: In the simulation performed in the current study, the costs associated with the acquisition and use of a commercially available femtosecond laser platform for cataract surgery (VICTUS, TECHNOLAS Perfect Vision GmbH, Bausch & Lomb, Munich, Germany) during a period of 5y were considered. A sensitivity analysis was performed considering such costs and the countable amortization of the system during this 5y period. Furthermore, a fuzzy logic analysis was used to obtain an estimation of the money income associated with each femtosecond laser-assisted cataract surgery (G). RESULTS: According to the sensitivity analysis, the femtosecond laser system under evaluation can be profitable if 1400 cataract surgeries are performed per year and if each surgery can be invoiced at more than $500. In contrast, the fuzzy logic analysis confirmed that the patient had to pay more, between $661.8 and $667.4 per surgery, without considering the cost of the intraocular lens (IOL). CONCLUSION: Profitability of femtosecond laser systems for cataract surgery can be obtained after a detailed financial analysis, especially in those centers with large volumes of patients. The cost of the surgery for patients should be adapted to the real flow of patients with the ability of paying a reasonable range of cost. PMID:27500115
Sulcal depth-based cortical shape analysis in normal healthy control and schizophrenia groups
NASA Astrophysics Data System (ADS)
Lyu, Ilwoo; Kang, Hakmook; Woodward, Neil D.; Landman, Bennett A.
2018-03-01
Sulcal depth is an important marker of brain anatomy in neuroscience/neurological function. Previously, sulcal depth has been explored at the region-of-interest (ROI) level to increase statistical sensitivity to group differences. In this paper, we present a fully automated method that enables inferences of ROI properties from a sulcal region-focused perspective consisting of two main components: 1) sulcal depth computation and 2) sulcal curve-based refined ROIs. In conventional statistical analysis, the average sulcal depth measurements are employed in several ROIs of the cortical surface. However, taking the average sulcal depth over the full ROI blurs overall sulcal depth measurements, which may result in reduced sensitivity to detect sulcal depth changes in neurological and psychiatric disorders. To overcome such a blurring effect, we focus on sulcal fundic regions in each ROI by filtering out other gyral regions. Consequently, the proposed method is more sensitive to group differences than a traditional ROI approach. In the experiment, we focused on a cortical morphological analysis of sulcal depth reduction in schizophrenia with a comparison to the normal healthy control group. We show that the proposed method is more sensitive to abnormalities of sulcal depth in schizophrenia; sulcal depth is significantly smaller in most cortical lobes in schizophrenia compared to healthy controls (p < 0.05).
Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines
Kurç, Tahsin M.; Taveira, Luís F. R.; Melo, Alba C. M. A.; Gao, Yi; Kong, Jun; Saltz, Joel H.
2017-01-01
Abstract Motivation: Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. Results: The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Conclusions: Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Availability and Implementation: Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28062445
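The parameter-study loop described above can be illustrated compactly: score a (toy) segmentation against a reference with the Dice coefficient while randomly sampling the parameter space. The Python sketch below is a generic illustration; the segmenter, parameter names, and data are invented and it does not use the authors' Region Templates framework.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-9)

def toy_segmenter(image, threshold, min_blob):
    """Stand-in for a segmentation workflow with two tunable parameters."""
    mask = image > threshold
    return mask if mask.sum() >= min_blob else np.zeros_like(mask)

rng = np.random.default_rng(0)
image = rng.normal(size=(128, 128)) + 2.0 * (rng.random((128, 128)) > 0.8)
reference = image > 1.5                     # pretend this is the ground-truth mask

best = (-1.0, None)
for _ in range(100):                        # random search over the parameter space
    params = (rng.uniform(0.5, 3.0), int(rng.integers(0, 500)))
    score = dice(toy_segmenter(image, *params), reference)
    if score > best[0]:
        best = (score, params)
print("best Dice:", round(best[0], 3), "at (threshold, min_blob) =", best[1])
```

Pruning non-influential parameters first, as the paper proposes, shrinks the space such a search has to cover.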
Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines.
Teodoro, George; Kurç, Tahsin M; Taveira, Luís F R; Melo, Alba C M A; Gao, Yi; Kong, Jun; Saltz, Joel H
2017-04-01
Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Source code: https://github.com/SBU-BMI/region-templates/ . teodoro@unb.br. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
Wang, Heye; Dou, Peng; Lü, Chenchen; Liu, Zhen
2012-07-13
Erythropoietin (EPO) is an important glycoprotein hormone. Recombinant human EPO (rhEPO) is an important therapeutic drug and can also be used as a doping reagent in sports. The analysis of EPO glycoforms in pharmaceutical and sports areas greatly challenges analytical scientists from several aspects, among which sensitive detection and effective and facile sample preparation are two essential issues. Herein, we investigated new possibilities for these two aspects. Deep UV laser-induced fluorescence detection (deep UV-LIF) was established to detect the intrinsic fluorescence of EPO, while an immuno-magnetic beads-based extraction (IMBE) was developed to specifically extract EPO glycoforms. Combined with capillary zone electrophoresis (CZE), CZE-deep UV-LIF allows high resolution glycoform profiling with improved sensitivity. The detection sensitivity was improved by one order of magnitude as compared with UV absorbance detection. An additional advantage is that the original glycoform distribution can be completely preserved because no fluorescent labeling is needed. By combining IMBE with CZE-deep UV-LIF, the overall detection sensitivity was 1.5 × 10⁻⁸ mol/L, which was enhanced by two orders of magnitude relative to conventional CZE with UV absorbance detection. It is applicable to the analysis of pharmaceutical preparations of EPO, but the sensitivity is insufficient for the anti-doping analysis of EPO in blood and urine. IMBE can be a straightforward and effective approach for sample preparation. However, antibodies with high specificity were the key for application to urine samples because some urinary proteins can severely interfere with the immuno-extraction. Copyright © 2012 Elsevier B.V. All rights reserved.
Iima, Mami; Kataoka, Masako; Kanao, Shotaro; Onishi, Natsuko; Kawai, Makiko; Ohashi, Akane; Sakaguchi, Rena; Toi, Masakazu; Togashi, Kaori
2018-05-01
Purpose: To investigate the performance of integrated approaches that combined intravoxel incoherent motion (IVIM) and non-Gaussian diffusion parameters compared with the Breast Imaging and Reporting Data System (BI-RADS) to establish multiparameter threshold scores or probabilities by using Bayesian analysis to distinguish malignant from benign breast lesions and their correlation with molecular prognostic factors. Materials and Methods: Between May 2013 and March 2015, 411 patients were prospectively enrolled and 199 patients (allocated to training [n = 99] and validation [n = 100] sets) were included in this study. IVIM parameters (flowing blood volume fraction [fIVIM] and pseudodiffusion coefficient [D*]) and non-Gaussian diffusion parameters (theoretical apparent diffusion coefficient [ADC] at b value of 0 sec/mm² [ADC0] and kurtosis [K]) by using IVIM and kurtosis models were estimated from diffusion-weighted image series (16 b values up to 2500 sec/mm²), as well as a synthetic ADC (sADC) calculated by using b values of 200 and 1500 (sADC200-1500) and a standard ADC calculated by using b values of 0 and 800 sec/mm² (ADC0-800). The performance of two diagnostic approaches (combined parameter thresholds and Bayesian analysis) combining IVIM and diffusion parameters was evaluated and compared with BI-RADS performance. The Mann-Whitney U test and a nonparametric multiple comparison test were used to compare their performance to determine benignity or malignancy and as molecular prognostic biomarkers and subtypes of breast cancer. Results: Significant differences were found between malignant and benign breast lesions for IVIM and non-Gaussian diffusion parameters (ADC0, K, fIVIM, fIVIM · D*, sADC200-1500, and ADC0-800; P < .05). Sensitivity and specificity for the validation set by radiologists A and B were as follows: sensitivity, 94.7% and 89.5%, and specificity, 75.0% and 79.2%, for sADC200-1500, respectively; sensitivity, 94.7% and 96.1%, and specificity, 75.0% and 66.7%, for the combined thresholds approach, respectively; sensitivity, 92.1% and 92.1%, and specificity, 83.3% and 66.7%, for Bayesian analysis, respectively; and sensitivity and specificity, 100% and 79.2%, for BI-RADS, respectively. A significant difference in values of sADC200-1500 by progesterone receptor status (P = .002) was noted. sADC200-1500 was significantly different between histologic subtypes (P = .006). Conclusion: Approaches that combined various IVIM and non-Gaussian diffusion MR imaging parameters may provide BI-RADS-equivalent scores almost comparable to BI-RADS categories without the use of contrast agents. Non-Gaussian diffusion parameters also differed by biologic prognostic factors. © RSNA, 2017. Online supplemental material is available for this article.
Behavior sensitivities for control augmented structures
NASA Technical Reports Server (NTRS)
Manning, R. A.; Lust, R. V.; Schmit, L. A.
1987-01-01
During the past few years it has been recognized that combining passive structural design methods with active control techniques offers the prospect of being able to find substantially improved designs. These developments have stimulated interest in augmenting structural synthesis by adding active control system design variables to those usually considered in structural optimization. An essential step in extending the approximation concepts approach to control augmented structural synthesis is the development of a behavior sensitivity analysis capability for determining rates of change of dynamic response quantities with respect to changes in structural and control system design variables. Behavior sensitivity information is also useful for man-machine interactive design as well as in the context of system identification studies. Behavior sensitivity formulations for both steady state and transient response are presented and the quality of the resulting derivative information is evaluated.
Evaluation of a toxicogenomic approach to the local lymph node assay (LLNA).
Boverhof, Darrell R; Gollapudi, B Bhaskar; Hotchkiss, Jon A; Osterloh-Quiroz, Mandy; Woolhiser, Michael R
2009-02-01
Genomic technologies have the potential to enhance and complement existing toxicology endpoints; however, assessment of these approaches requires a systematic evaluation including a robust experimental design with genomic endpoints anchored to traditional toxicology endpoints. The present study was conducted to assess the sensitivity of genomic responses when compared with the traditional local lymph node assay (LLNA) endpoint of lymph node cell proliferation and to evaluate the responses for their ability to provide insights into mode of action. Female BALB/c mice were treated with the sensitizer trimellitic anhydride (TMA), following the standard LLNA dosing regimen, at doses of 0.1, 1, or 10%, and traditional tritiated thymidine (³HTdR) incorporation and gene expression responses were monitored in the auricular lymph nodes. Additional mice, dosed with either vehicle or 10% TMA and sacrificed on day 4 or 10, were also included to examine temporal effects on gene expression. Analysis of ³HTdR incorporation revealed TMA-induced stimulation indices of 2.8, 22.9, and 61.0 relative to vehicle with an EC3 of 0.11%. Examination of the dose-response gene expression responses identified 9, 833, and 2122 differentially expressed genes relative to vehicle for the 0.1, 1, and 10% TMA dose groups, respectively. Calculation of EC3 values for differentially expressed genes did not identify a response that was more sensitive than the ³HTdR value, although a number of genes displayed comparable sensitivity. Examination of temporal responses revealed 1760, 1870, and 953 differentially expressed genes at the 4-, 6-, and 10-day time points, respectively. Functional analysis revealed that many responses displayed dose- and time-specific induction patterns within the functional categories of cellular proliferation and immune response, including numerous immunoglobulin genes which were highly induced at the day 10 time point. Overall, these experiments have systematically illustrated the potential utility of genomic endpoints to enhance the LLNA and support further exploration of this approach through examination of a more diverse array of chemicals.
USDA-ARS?s Scientific Manuscript database
Numerical modeling is an economical and feasible approach for quantifying the effects of best management practices on phosphorus (P) loadings from agricultural fields. However, tools that simulate both surface and subsurface P pathways are limited and have not been robustly evaluated in tile-drained...
An earlier paper (Hattis et al., 2003) developed a quantitative likelihood-based statistical analysis of the differences in apparent sensitivity of rodents to mutagenic carcinogens across three life stages (fetal, birth-weaning, and weaning-60 days) relative to exposures in adult...
A Culturally Sensitive Analysis of Culture in the Context of Context: When Is Enough Enough?
ERIC Educational Resources Information Center
Kahn, Peter H., Jr.
Cultural context is not the sole source of human knowledge. Postmodern theory, in both its deconstructionist and affirmative approaches, offers an incomplete basis by which to study race, class, and gender, and undermines ethical interaction. Deconstructionism calls for the abandonment of generalizable research findings, asserting that the concept…
Cavill, Rachel; Kamburov, Atanas; Ellis, James K; Athersuch, Toby J; Blagrove, Marcus S C; Herwig, Ralf; Ebbels, Timothy M D; Keun, Hector C
2011-03-01
Using transcriptomic and metabolomic measurements from the NCI60 cell line panel, together with a novel approach to integration of molecular profile data, we show that the biochemical pathways associated with tumour cell chemosensitivity to platinum-based drugs are highly coincident, i.e. they describe a consensus phenotype. Direct integration of metabolome and transcriptome data at the point of pathway analysis improved the detection of consensus pathways by 76%, and revealed associations between platinum sensitivity and several metabolic pathways that were not visible from transcriptome analysis alone. These pathways included the TCA cycle and pyruvate metabolism, lipoprotein uptake and nucleotide synthesis by both salvage and de novo pathways. Extending the approach across a wide panel of chemotherapeutics, we confirmed the specificity of the metabolic pathway associations to platinum sensitivity. We conclude that metabolic phenotyping could play a role in predicting response to platinum chemotherapy and that consensus-phenotype integration of molecular profiling data is a powerful and versatile tool for both biomarker discovery and for exploring the complex relationships between biological pathways and drug response.
Using Data Mining for Wine Quality Assessment
NASA Astrophysics Data System (ADS)
Cortez, Paulo; Teixeira, Juliana; Cerdeira, António; Almeida, Fernando; Matos, Telmo; Reis, José
Certification and quality assessment are crucial issues within the wine industry. Currently, wine quality is mostly assessed by physicochemical (e.g. alcohol levels) and sensory (e.g. human expert evaluation) tests. In this paper, we propose a data mining approach to predict wine preferences that is based on easily available analytical tests at the certification step. A large dataset is considered with white vinho verde samples from the Minho region of Portugal. Wine quality is modeled under a regression approach, which preserves the order of the grades. Explanatory knowledge is given in terms of a sensitivity analysis, which measures the response changes when a given input variable is varied through its domain. Three regression techniques were applied, under a computationally efficient procedure that performs simultaneous variable and model selection and that is guided by the sensitivity analysis. The support vector machine achieved promising results, outperforming the multiple regression and neural network methods. Such a model is useful for understanding how physicochemical tests affect the sensory preferences. Moreover, it can support the wine expert evaluations and ultimately improve the production.
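The sensitivity analysis referred to above amounts to fitting a regression model and then sweeping one input through its domain while holding the others fixed. The Python sketch below illustrates this with scikit-learn's SVR on synthetic data; the feature names and coefficients are placeholders, not the vinho verde dataset.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Synthetic stand-ins for physicochemical tests: alcohol, volatile acidity, sulphates
X = np.column_stack([rng.uniform(8, 14, 500),
                     rng.uniform(0.1, 1.2, 500),
                     rng.uniform(0.3, 2.0, 500)])
quality = 3 + 0.4 * X[:, 0] - 2.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.3, 500)

model = make_pipeline(StandardScaler(), SVR(C=10.0)).fit(X, quality)

# One-dimensional sensitivity analysis: vary alcohol, hold the other inputs at their medians
grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 25)
probe = np.tile(np.median(X, axis=0), (grid.size, 1))
probe[:, 0] = grid
response = model.predict(probe)
print("predicted quality range as alcohol varies:",
      round(response.min(), 2), "to", round(response.max(), 2))
```

The spread of the predicted response over such a sweep is the variable-importance signal used to guide the simultaneous variable and model selection described in the abstract.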
Liu, Yunhao; Mwapasa, Victor; Khairallah, Carole; Thwai, Kyaw L; Kalilani-Phiri, Linda; Ter Kuile, Feiko O; Meshnick, Steven R; Taylor, Steve M
2016-10-05
Placental malaria causes low birth weight and neonatal mortality in malaria-endemic areas. The diagnosis of placental malaria is important for program evaluation and clinical care, but is compromised by the suboptimal performance of current diagnostics. Using placental and peripheral blood specimens collected from delivering women in Malawi, we compared estimation of the operating characteristics of microscopy, rapid diagnostic test (RDT), polymerase chain reaction, and histopathology using both a traditional contingency table and a latent class analysis (LCA) approach. The prevalence of placental malaria by histopathology was 13.8%; concordance between tests was generally poor. Relative to histopathology, RDT sensitivity was 79.5% in peripheral and 66.2% in placental blood; using LCA, RDT sensitivities increased to 93.7% and 80.2%, respectively. Our results, if replicated in other cohorts, indicate that RDT testing of peripheral or placental blood may be suitable approaches to detect placental malaria for surveillance programs, including areas where intermittent preventive therapy in pregnancy is not used. © The American Society of Tropical Medicine and Hygiene.
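Latent class analysis estimates test sensitivities and specificities without treating any single test as a gold standard. The following is a compact Python EM sketch for a two-class model with conditionally independent binary tests on synthetic data; the test names, prevalence, and operating characteristics are invented for illustration and do not come from the Malawi cohort.

```python
import numpy as np

rng = np.random.default_rng(0)
n, prev = 2000, 0.15
truth = rng.random(n) < prev
# Three imperfect binary tests (e.g. microscopy, RDT, PCR) with assumed characteristics
sens_true, spec_true = np.array([0.6, 0.8, 0.95]), np.array([0.98, 0.95, 0.99])
tests = np.where(truth[:, None],
                 rng.random((n, 3)) < sens_true,
                 rng.random((n, 3)) < 1 - spec_true).astype(float)

# EM for a two-class latent class model with conditional independence
pi, sens, spec = 0.5, np.full(3, 0.7), np.full(3, 0.9)
for _ in range(200):
    # E-step: posterior probability that each subject is a true positive
    like_pos = pi * np.prod(sens ** tests * (1 - sens) ** (1 - tests), axis=1)
    like_neg = (1 - pi) * np.prod((1 - spec) ** tests * spec ** (1 - tests), axis=1)
    w = like_pos / (like_pos + like_neg)
    # M-step: update prevalence, sensitivities and specificities
    pi = w.mean()
    sens = (w[:, None] * tests).sum(axis=0) / w.sum()
    spec = ((1 - w)[:, None] * (1 - tests)).sum(axis=0) / (1 - w).sum()

print("estimated prevalence:", round(pi, 3))
print("estimated sensitivities:", sens.round(3), "specificities:", spec.round(3))
```

Because no test is assumed perfect, the LCA estimates of RDT sensitivity can exceed those computed against histopathology alone, which is the pattern reported in the abstract.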
Building a maintenance policy through a multi-criterion decision-making model
NASA Astrophysics Data System (ADS)
Faghihinia, Elahe; Mollaverdi, Naser
2012-08-01
A major competitive advantage of production and service systems is establishing a proper maintenance policy. Therefore, maintenance managers should make maintenance decisions that best fit their systems. Multi-criterion decision-making methods can take into account a number of aspects associated with the competitiveness factors of a system. This paper presents a multi-criterion decision-aided maintenance model with three criteria that have more influence on decision making: reliability, maintenance cost, and maintenance downtime. The Bayesian approach has been applied to confront maintenance failure data shortage. Therefore, the model seeks to make the best compromise between these three criteria and establish replacement intervals using Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE II), integrating the Bayesian approach with regard to the preference of the decision maker to the problem. Finally, using a numerical application, the model has been illustrated, and for a visual realization and an illustrative sensitivity analysis, PROMETHEE GAIA (the visual interactive module) has been used. Use of PROMETHEE II and PROMETHEE GAIA has been made with Decision Lab software. A sensitivity analysis has been made to verify the robustness of certain parameters of the model.
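The PROMETHEE II ranking itself reduces to pairwise preference degrees aggregated into net outranking flows. The Python sketch below is an illustrative implementation with a linear preference function over made-up maintenance alternatives, criteria weights, and thresholds; the Bayesian treatment of failure data and the GAIA visual module are not reproduced.

```python
import numpy as np

# Rows: maintenance policies; columns: criteria (reliability, -cost, -downtime), oriented so
# that larger is better (costs and downtime entered as negatives).
scores = np.array([[0.95, -120.0, -4.0],    # policy A
                   [0.90, -100.0, -6.0],    # policy B
                   [0.85,  -80.0, -8.0]])   # policy C
weights = np.array([0.5, 0.3, 0.2])
p = np.array([0.05, 30.0, 3.0])             # preference thresholds (linear preference function)

n_alt = scores.shape[0]
pref = np.zeros((n_alt, n_alt))
for i in range(n_alt):
    for j in range(n_alt):
        d = scores[i] - scores[j]
        pi_ij = np.clip(d / p, 0.0, 1.0)    # 0 for no advantage, linear up to threshold p
        pref[i, j] = weights @ pi_ij

phi_plus = pref.sum(axis=1) / (n_alt - 1)   # positive outranking flow
phi_minus = pref.sum(axis=0) / (n_alt - 1)  # negative outranking flow
net_flow = phi_plus - phi_minus             # PROMETHEE II complete ranking
for name, phi in zip("ABC", net_flow):
    print(f"policy {name}: net flow = {phi:+.3f}")
```

Re-running the computation while perturbing the weights or thresholds is one simple way to reproduce the kind of sensitivity analysis the paper performs on the model parameters.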
You, Joyce H S; Lui, Grace; Kam, Kai Man; Lee, Nelson L S
2015-04-01
We examined, from a Hong Kong healthcare providers' perspective, the cost-effectiveness of rapid diagnosis with Xpert in patients hospitalized for suspected active pulmonary tuberculosis (PTB). A decision tree was designed to simulate outcomes of three diagnostic assessment strategies in adult patients hospitalized for suspected active PTB: conventional approach, sputum smear plus Xpert for acid-fast bacilli (AFB) smear-negative, and a single sputum Xpert test. Model inputs were derived from the literature. Outcome measures were direct medical cost, one-year mortality rate, quality-adjusted life-years (QALYs) and incremental cost per QALY (ICER). In the base-case analysis, Xpert was more effective with higher QALYs gained and a lower mortality rate when compared with smear plus Xpert by an ICER of USD99. A conventional diagnostic approach was the least preferred option with the highest cost, lowest QALYs gained and highest mortality rate. Sensitivity analysis showed that Xpert would be the most cost-effective option if the sensitivity of sputum AFB smear microscopy was ≤74%. The probabilities of Xpert, smear plus Xpert and a conventional approach to be cost-effective were 94.5%, 5.5% and 0%, respectively, in 10,000 Monte Carlo simulations. The Xpert sputum test appears to be a highly cost-effective diagnostic strategy for patients with suspected active PTB in an intermediate burden area like Hong Kong. Copyright © 2015 The British Infection Association. Published by Elsevier Ltd. All rights reserved.
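A decision tree of this kind boils down to weighting branch costs and QALYs by their probabilities and comparing strategies through an ICER. The tiny Python sketch below shows that expected-value calculation; every probability, cost, and utility is a placeholder, not an input from the study.

```python
# Expected cost and QALYs of two diagnostic strategies for suspected PTB (illustrative numbers only).
def strategy(p_tb, sens, spec, c_test, c_treat, c_missed, q_treated, q_missed, q_no_tb):
    """Weight branch outcomes by their probabilities; return (expected cost, expected QALYs)."""
    tp, fn = p_tb * sens, p_tb * (1 - sens)
    tn, fp = (1 - p_tb) * spec, (1 - p_tb) * (1 - spec)
    cost = c_test + (tp + fp) * c_treat + fn * c_missed
    qaly = tp * q_treated + fn * q_missed + (tn + fp) * q_no_tb
    return cost, qaly

conventional = strategy(0.2, 0.65, 0.98, 40, 500, 1000, 0.90, 0.60, 0.95)
xpert        = strategy(0.2, 0.89, 0.99, 90, 500, 1000, 0.90, 0.60, 0.95)

d_cost = xpert[0] - conventional[0]
d_qaly = xpert[1] - conventional[1]
print("ICER = USD", round(d_cost / d_qaly), "per QALY gained")
```

Sweeping the assumed smear sensitivity in such a model is how a threshold like the 74% reported in the abstract is identified.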
Kovac, Jason Ronald; Fantus, Jake; Lipshultz, Larry I; Fischer, Marc Anthony; Klinghoffer, Zachery
2014-09-01
Varicoceles are a common cause of male infertility; repair can be accomplished using either surgical or radiological means. We compare the cost-effectiveness of the gold standard, the microsurgical varicocele repair (MV), to the options of a nonmicrosurgical approach (NMV) and percutaneous embolization (PE) to manage varicocele-associated infertility. A Markov decision-analysis model was developed to estimate costs and pregnancy rates. Within the model, recurrences following MV and NMV were re-treated with PE and recurrences following PE were treated with repeat PE, MV or NMV. Pregnancy and recurrence rates were based on the literature, while costs were obtained from institutional and government supplied data. Univariate and probabilistic sensitivity-analyses were performed to determine the effects of the various parameters on model outcomes. Primary treatment with MV was the most cost-effective strategy at $5402 CAD (Canadian)/pregnancy. Primary treatment with NMV was the least costly approach, but it also yielded the fewest pregnancies. Primary treatment with PE was the least cost-effective strategy costing about $7300 CAD/pregnancy. Probabilistic sensitivity analysis reinforced MV as the most cost-effective strategy at a willingness-to-pay threshold of >$4100 CAD/pregnancy. MV yielded the most pregnancies at acceptable levels of incremental costs. As such, it is the preferred primary treatment strategy for varicocele-associated infertility. Treatment with PE was the least cost-effective approach and, as such, is best used only in cases of surgical failure.
A PDE Sensitivity Equation Method for Optimal Aerodynamic Design
NASA Technical Reports Server (NTRS)
Borggaard, Jeff; Burns, John
1996-01-01
The use of gradient based optimization algorithms in inverse design is well established as a practical approach to aerodynamic design. A typical procedure uses a simulation scheme to evaluate the objective function (from the approximate states) and its gradient, then passes this information to an optimization algorithm. Once the simulation scheme (CFD flow solver) has been selected and used to provide approximate function evaluations, there are several possible approaches to the problem of computing gradients. One popular method is to differentiate the simulation scheme and compute design sensitivities that are then used to obtain gradients. Although this black-box approach has many advantages in shape optimization problems, one must compute mesh sensitivities in order to compute the design sensitivity. In this paper, we present an alternative approach using the PDE sensitivity equation to develop algorithms for computing gradients. This approach has the advantage that mesh sensitivities need not be computed. Moreover, when it is possible to use the CFD scheme for both the forward problem and the sensitivity equation, then there are computational advantages. An apparent disadvantage of this approach is that it does not always produce consistent derivatives. However, for a proper combination of discretization schemes, one can show asymptotic consistency under mesh refinement, which is often sufficient to guarantee convergence of the optimal design algorithm. In particular, we show that when asymptotically consistent schemes are combined with a trust-region optimization algorithm, the resulting optimal design method converges. We denote this approach as the sensitivity equation method. The sensitivity equation method is presented, convergence results are given and the approach is illustrated on two optimal design problems involving shocks.
A design solution to increasing the sensitivity of pMOS dosimeters: The stacked RADFET approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelleher, A.; Lane, W.; Adams, L.
1995-02-01
pMOS Radiation Sensitive Field Effect Transistors (RADFETs) have applications as integrating dosimeters in laboratories and medicine to measure the amount of radiation dose absorbed. The suitability of these dosimeters for a given application depends on the sensitivity of the RADFET being used. To date, this sensitivity has been limited by the sensitivity of the gate oxide to radiation. The aim of this paper is to introduce a new design approach which allows greater sensitivities to be achieved than is currently possible. An additional attractive feature of this design approach is that the sensitivity of the dosimeter may be changed depending on the total dose which is to be measured; essentially, a dosimeter with auto-scaling may be achieved. This study introduces this autoscaling concept along with the optimum RADFET device requirements for this new design approach.
Multiscale Modeling and Uncertainty Quantification for Nuclear Fuel Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estep, Donald; El-Azab, Anter; Pernice, Michael
2017-03-23
In this project, we will address the challenges associated with constructing high-fidelity multiscale models of nuclear fuel performance. We (i) propose a novel approach for coupling mesoscale and macroscale models, (ii) devise efficient numerical methods for simulating the coupled system, and (iii) devise and analyze effective numerical approaches for error and uncertainty quantification for the coupled multiscale system. As an integral part of the project, we will carry out analysis of the effects of upscaling and downscaling, investigate efficient methods for stochastic sensitivity analysis of the individual macroscale and mesoscale models, and carry out a posteriori error analysis for computed results. We will pursue development and implementation of solutions in software used at Idaho National Laboratory on models of interest to the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program.
Low-hazard metallography of moisture-sensitive electrochemical cells.
Wesolowski, D E; Rodriguez, M A; McKenzie, B B; Papenguth, H W
2011-08-01
A low-hazard approach is presented to prepare metallographic cross-sections of moisture-sensitive battery components. The approach is tailored for evaluation of thermal (molten salt) batteries composed of thin pressed-powder pellets, but has general applicability to other battery electrochemistries. Solution-cast polystyrene is used to encapsulate cells before embedding in epoxy. Nonaqueous grinding and polishing are performed in an industrial dry room to increase throughput. Lapping oil is used as a lubricant throughout grinding. Hexane is used as the solvent throughout processing; occupational exposure levels are well below the limits. Light optical and scanning electron microscopy on cross-sections are used to analyse a thermal battery cell. Spatially resolved X-ray diffraction on oblique-angle cut cells complements the metallographic analysis.
Sensitivity of global terrestrial ecosystems to climate variability.
Seddon, Alistair W R; Macias-Fauria, Marc; Long, Peter R; Benz, David; Willis, Kathy J
2016-03-10
The identification of properties that contribute to the persistence and resilience of ecosystems despite climate change constitutes a research priority of global relevance. Here we present a novel, empirical approach to assess the relative sensitivity of ecosystems to climate variability, one property of resilience that builds on theoretical modelling work recognizing that systems closer to critical thresholds respond more sensitively to external perturbations. We develop a new metric, the vegetation sensitivity index, that identifies areas sensitive to climate variability over the past 14 years. The metric uses time series data derived from the moderate-resolution imaging spectroradiometer (MODIS) enhanced vegetation index, and three climatic variables that drive vegetation productivity (air temperature, water availability and cloud cover). Underlying the analysis is an autoregressive modelling approach used to identify climate drivers of vegetation productivity on monthly timescales, in addition to regions with memory effects and reduced response rates to external forcing. We find ecologically sensitive regions with amplified responses to climate variability in the Arctic tundra, parts of the boreal forest belt, the tropical rainforest, alpine regions worldwide, steppe and prairie regions of central Asia and North and South America, the Caatinga deciduous forest in eastern South America, and eastern areas of Australia. Our study provides a quantitative methodology for assessing the relative response rate of ecosystems--be they natural or with a strong anthropogenic signature--to environmental variability, which is the first step towards addressing why some regions appear to be more sensitive than others, and what impact this has on the resilience of ecosystem service provision and human well-being.
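The attribution step underlying such an index can be sketched as an autoregressive regression of a vegetation series on its own lag (the memory effect) and on the climate drivers. The snippet below uses synthetic monthly data in place of MODIS EVI and real climate products; the recovered lag coefficient and climate weights are the quantities a sensitivity index would be built from.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 168                                  # 14 years of monthly data
    climate = rng.standard_normal((n, 3))    # temperature, water, cloud cover
    evi = np.zeros(n)
    for t in range(1, n):                    # AR(1) response, known coefficients
        evi[t] = (0.5 * evi[t - 1]
                  + climate[t] @ np.array([0.8, 0.3, -0.1])
                  + 0.1 * rng.standard_normal())

    # Design matrix: intercept, lagged EVI (memory), climate drivers
    X = np.column_stack([np.ones(n - 1), evi[:-1], climate[1:]])
    coef, *_ = np.linalg.lstsq(X, evi[1:], rcond=None)
    print("memory (AR coefficient):", round(coef[1], 2))
    print("climate sensitivities:  ", np.round(coef[2:], 2))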
Sensitivity of global terrestrial ecosystems to climate variability
NASA Astrophysics Data System (ADS)
Seddon, Alistair W. R.; Macias-Fauria, Marc; Long, Peter R.; Benz, David; Willis, Kathy J.
2016-03-01
The identification of properties that contribute to the persistence and resilience of ecosystems despite climate change constitutes a research priority of global relevance. Here we present a novel, empirical approach to assess the relative sensitivity of ecosystems to climate variability, one property of resilience that builds on theoretical modelling work recognizing that systems closer to critical thresholds respond more sensitively to external perturbations. We develop a new metric, the vegetation sensitivity index, that identifies areas sensitive to climate variability over the past 14 years. The metric uses time series data derived from the moderate-resolution imaging spectroradiometer (MODIS) enhanced vegetation index, and three climatic variables that drive vegetation productivity (air temperature, water availability and cloud cover). Underlying the analysis is an autoregressive modelling approach used to identify climate drivers of vegetation productivity on monthly timescales, in addition to regions with memory effects and reduced response rates to external forcing. We find ecologically sensitive regions with amplified responses to climate variability in the Arctic tundra, parts of the boreal forest belt, the tropical rainforest, alpine regions worldwide, steppe and prairie regions of central Asia and North and South America, the Caatinga deciduous forest in eastern South America, and eastern areas of Australia. Our study provides a quantitative methodology for assessing the relative response rate of ecosystems—be they natural or with a strong anthropogenic signature—to environmental variability, which is the first step towards addressing why some regions appear to be more sensitive than others, and what impact this has on the resilience of ecosystem service provision and human well-being.
NASA Astrophysics Data System (ADS)
Dhiman, R.; Kalbar, P.; Inamdar, A. B.
2017-12-01
Coastal area classification in India is a challenge for federal and state government agencies due to a fragile institutional framework, unclear directions in the implementation of coastal regulations, and violations occurring at both private and government levels. This work is an attempt to improve the objectivity of existing classification methods so as to harmonize ecological systems and socioeconomic development in coastal cities. We developed a Geographic Information System coupled Multi-Criteria Decision Making (GIS-MCDM) approach to classify urban coastal areas, in which utility functions transform the coastal features into quantitative membership values after assessing the sensitivity of the urban coastal ecosystem. These membership values are then combined under different weighting schemes to derive a Coastal Area Index (CAI), which classifies coastal areas into four distinct categories, viz. 1) No Development Zone, 2) Highly Sensitive Zone, 3) Moderately Sensitive Zone and 4) Low Sensitive Zone, based on the sensitivity of the urban coastal ecosystem. Mumbai, a coastal megacity in India, is used as a case study to demonstrate the proposed method. Finally, an uncertainty analysis using a Monte Carlo approach is carried out to validate the sensitivity of the CAI under multiple scenarios. Results of the CAI method show a clear demarcation of coastal areas in the GIS environment based on ecological sensitivity. The CAI provides better decision support for federal and state agencies to classify urban coastal areas according to the regional requirements of coastal resources, considering resilience and sustainable development. The CAI method will strengthen the existing institutional framework for decision making in the classification of urban coastal areas where the most effective coastal management options can be proposed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belov, Mikhail E.; Prasad, Satendra; Prior, David C.
2011-02-23
Liquid chromatography (LC)-triple quadrupole mass spectrometers operating in a Multiple Reaction Monitoring (MRM) mode are increasingly used for quantitative analysis of low-abundance analytes in highly complex biochemical matrices. After development and selection of the optimum MRM transitions, sensitivity and data-quality limitations are largely related to mass spectral peak interferences from sample or matrix constituents and to statistical limitations at low numbers of ions reaching the detector. Herein, we report a new approach to enhancing MRM sensitivity by converting the continuous stream of ions from the ion source into a pulsed ion beam through the use of an Ion Funnel Trap (IFT). Evaluation of the pulsed MRM approach was performed with a tryptic digest of Shewanella oneidensis strain MR-1 spiked with several reference peptides. The sensitivity improvement observed with the IFT coupled to the triple quadrupole instrument is based on several unique features. First, ion accumulation in the radio frequency (RF) ion trap facilitates improved droplet desolvation, which is manifested in reduced background ion noise at the detector. Second, the signal amplitude for a given transition is enhanced because of an order-of-magnitude increase in the ion charge density per unit time compared to a continuous mode of operation. Third, signal detection at the full duty cycle is obtained, as the trap eliminates dead times between transitions, which are inevitable with continuous ion streams. In comparison with the conventional approach, the pulsed MRM signals showed up to 5-fold enhanced peak amplitude and 2-3 fold reduced chemical background, resulting in an improvement in the limit of detection (LOD) by a factor of ~4 to ~8.
Targeted Quantitation of Proteins by Mass Spectrometry
2013-01-01
Quantitative measurement of proteins is one of the most fundamental analytical tasks in a biochemistry laboratory, but widely used immunochemical methods often have limited specificity and high measurement variation. In this review, we discuss applications of multiple-reaction monitoring (MRM) mass spectrometry, which allows sensitive, precise quantitative analyses of peptides and the proteins from which they are derived. Systematic development of MRM assays is permitted by databases of peptide mass spectra and sequences, software tools for analysis design and data analysis, and rapid evolution of tandem mass spectrometer technology. Key advantages of MRM assays are the ability to target specific peptide sequences, including variants and modified forms, and the capacity for multiplexing that allows analysis of dozens to hundreds of peptides. Different quantitative standardization methods provide options that balance precision, sensitivity, and assay cost. Targeted protein quantitation by MRM and related mass spectrometry methods can advance biochemistry by transforming approaches to protein measurement. PMID:23517332
Targeted quantitation of proteins by mass spectrometry.
Liebler, Daniel C; Zimmerman, Lisa J
2013-06-04
Quantitative measurement of proteins is one of the most fundamental analytical tasks in a biochemistry laboratory, but widely used immunochemical methods often have limited specificity and high measurement variation. In this review, we discuss applications of multiple-reaction monitoring (MRM) mass spectrometry, which allows sensitive, precise quantitative analyses of peptides and the proteins from which they are derived. Systematic development of MRM assays is permitted by databases of peptide mass spectra and sequences, software tools for analysis design and data analysis, and rapid evolution of tandem mass spectrometer technology. Key advantages of MRM assays are the ability to target specific peptide sequences, including variants and modified forms, and the capacity for multiplexing that allows analysis of dozens to hundreds of peptides. Different quantitative standardization methods provide options that balance precision, sensitivity, and assay cost. Targeted protein quantitation by MRM and related mass spectrometry methods can advance biochemistry by transforming approaches to protein measurement.
An approach to improving the signal-to-optical-noise ratio of pulsed magnetic field photonic sensors
NASA Astrophysics Data System (ADS)
Wang, Jiang-ping; Li, Yu-quan
2008-12-01
In recent years, interest in pulsed magnetic field sensors has increased considerably; magnetic field measurement plays a critical role in many scientific and technical areas. Research on pulsed magnetic field characteristics, and on the corresponding measurement and protection methods, requires a sensor with high immunity to electrical noise, high sensitivity, high accuracy and a wide dynamic range. Conventional magnetic field measurement systems use active metallic probes, which can disturb the field being measured and make the sensor very sensitive to electromagnetic noise. Photonic magnetic field sensors exhibit clear advantages over electronic ones: very good galvanic insulation, high sensitivity and very wide bandwidth, making photonic sensing technology well suited to pulsed magnetic field measurement. A pulsed magnetic field photonic sensor has been designed, analyzed, and tested. In this paper, the effect of the cross-polarization angle on the sensor's signal-to-optical-noise ratio is analyzed theoretically, and a novel approach for improving the signal-to-optical-noise ratio of pulsed magnetic field sensors is proposed. Experiments show that this approach is practical, and the theoretical analysis and simulation results show that the signal-to-optical-noise ratio can be improved considerably by choosing a suitable cross-polarization angle.
Liwarska-Bizukojc, Ewa; Biernacki, Rafal
2010-10-01
In order to simulate biological wastewater treatment processes, data concerning wastewater and sludge composition, process kinetics and stoichiometry are required. Selection of the most sensitive parameters is an important step in model calibration. The aim of this work is to verify the predictive ability of the activated sludge model implemented in BioWin software and to select its most influential kinetic and stoichiometric parameters with the help of a sensitivity analysis approach. Two different measures of sensitivity are applied: the normalised sensitivity coefficient (S(i,j)) and the mean square sensitivity measure (delta(j)(msqr)). The analysis shows that 17 kinetic and stoichiometric parameters of the BioWin activated sludge (AS) model can be regarded as influential on the basis of the S(i,j) calculations. Half of the influential parameters are associated with growth and decay of phosphorus-accumulating organisms (PAOs). The identification of the set of most sensitive parameters should support users of this model and initiate the elaboration of determination procedures for those parameters for which this has not yet been done.
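Both measures are straightforward to compute by finite differences once a model evaluation is available. The sketch below does this for a generic two-output toy function standing in for the BioWin model: S(i,j) scales the raw derivative by p_j/y_i so parameters become comparable, and the mean-square measure aggregates S(i,j) over outputs.

    import numpy as np

    def f(p):
        # Toy two-output stand-in for the activated sludge model
        k, Y = p
        return np.array([k * Y, k / (1.0 + Y)])

    p0 = np.array([0.6, 0.4])
    y0 = f(p0)
    S = np.zeros((y0.size, p0.size))
    for j in range(p0.size):
        dp = 1e-6 * p0[j]
        hi, lo = p0.copy(), p0.copy()
        hi[j] += dp
        lo[j] -= dp
        S[:, j] = (f(hi) - f(lo)) / (2 * dp) * p0[j] / y0   # S(i,j)

    delta_msqr = np.sqrt((S ** 2).mean(axis=0))   # one value per parameter
    print("S(i,j):\n", np.round(S, 3))
    print("delta_msqr:", np.round(delta_msqr, 3))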
The FAQUIRE Approach: FAst, QUantitative, hIghly Resolved and sEnsitivity Enhanced 1H, 13C Data.
Farjon, Jonathan; Milande, Clément; Martineau, Estelle; Akoka, Serge; Giraudeau, Patrick
2018-02-06
The targeted analysis of metabolites in complex mixtures is a challenging issue. NMR is one of the major tools in this field, but there is a strong need for more sensitive, better-resolved, and faster quantitative methods. In this framework, we introduce the concept of FAst, QUantitative, hIghly Resolved and sEnsitivity enhanced (FAQUIRE) NMR to push forward the limits of metabolite NMR analysis. Quantitative 2D 1H, 13C maps are promising alternatives for enhancing the spectral resolution but are highly time-consuming because of (i) the intrinsic nature of 2D acquisition, (ii) the longer recycling times required for quantitative conditions, and (iii) the higher number of scans needed to lower the limits of detection/quantification and access low-concentration metabolites. To this end, speeding up the recently developed QUantItative Perfected and pUre shifted HSQC (QUIPU HSQC) is a promising route toward the FAQUIRE concept. Thanks to the combination of spectral aliasing, non-uniform sampling, and variable repetition time, the acquisition time of 2D quantitative maps is reduced by a factor of 6 to 9 while a high spectral resolution is conserved thanks to a pure-shift approach. The analytical potential of the new Quick QUIPU HSQC (Q QUIPU HSQC) is evaluated on a model metabolite sample, and its potential is shown on breast-cell extracts embedding metabolites at millimolar to submillimolar concentrations.
Advancing Cell Biology Through Proteomics in Space and Time (PROSPECTS)*
Lamond, Angus I.; Uhlen, Mathias; Horning, Stevan; Makarov, Alexander; Robinson, Carol V.; Serrano, Luis; Hartl, F. Ulrich; Baumeister, Wolfgang; Werenskiold, Anne Katrin; Andersen, Jens S.; Vorm, Ole; Linial, Michal; Aebersold, Ruedi; Mann, Matthias
2012-01-01
The term “proteomics” encompasses the large-scale detection and analysis of proteins and their post-translational modifications. Driven by major improvements in mass spectrometric instrumentation, methodology, and data analysis, the proteomics field has burgeoned in recent years. It now provides a range of sensitive and quantitative approaches for measuring protein structures and dynamics that promise to revolutionize our understanding of cell biology and molecular mechanisms in both human cells and model organisms. The Proteomics Specification in Time and Space (PROSPECTS) Network is a unique EU-funded project that brings together leading European research groups, spanning from instrumentation to biomedicine, in a collaborative five-year initiative to develop new methods and applications for the functional analysis of cellular proteins. This special issue of Molecular and Cellular Proteomics presents 16 research papers reporting major recent progress by the PROSPECTS groups, including improvements to the resolution and sensitivity of the Orbitrap family of mass spectrometers, systematic detection of proteins using highly characterized antibody collections, and new methods for absolute as well as relative quantification of protein levels. Manuscripts in this issue exemplify approaches for performing quantitative measurements of cell proteomes and for studying their dynamic responses to perturbation, both during normal cellular responses and in disease mechanisms. Here we present a perspective on how the proteomics field is moving beyond simply identifying proteins with high sensitivity toward providing a powerful and versatile set of assay systems for characterizing proteome dynamics and thereby creating a new “third generation” proteomics strategy that offers an indispensable tool for cell biology and molecular medicine. PMID:22311636
High-frequency phase shift measurement greatly enhances the sensitivity of QCM immunosensors.
March, Carmen; García, José V; Sánchez, Ángel; Arnau, Antonio; Jiménez, Yolanda; García, Pablo; Manclús, Juan J; Montoya, Ángel
2015-03-15
In spite of being widely used for in-liquid biosensing applications, sensitivity improvement of conventional (5-20 MHz) quartz crystal microbalance (QCM) sensors remains an unsolved, challenging task. With the help of a new electronic characterization approach based on phase-change measurements at a constant fixed frequency, a highly sensitive and versatile high fundamental frequency (HFF) QCM immunosensor has been successfully developed and tested for use in pesticide (carbaryl and thiabendazole) analysis. The analytical performance of several immunosensors was compared in competitive immunoassays taking the carbaryl insecticide as the model analyte. The highest sensitivity was exhibited by the 100 MHz HFF-QCM carbaryl immunosensor. When results were compared with those reported for 9 MHz QCM, the analytical parameters clearly showed an improvement of one order of magnitude in sensitivity (estimated as the I50 value) and two orders of magnitude in the limit of detection (LOD): 30 μg L(-1) vs 0.66 μg L(-1) for the I50 value and 11 μg L(-1) vs 0.14 μg L(-1) for the LOD, for 9 and 100 MHz, respectively. For the fungicide thiabendazole, the I50 value was roughly the same as that previously reported for SPR under the same biochemical conditions, whereas the LOD improved by a factor of 2. The analytical performance achieved by high-frequency QCM immunosensors surpasses that of conventional QCM and SPR, closely approaching the most sensitive ELISAs. The developed 100 MHz QCM immunosensor strongly improves sensitivity in biosensing and can therefore be considered a very promising new analytical tool for in-liquid applications where highly sensitive detection is required.
The art of spacecraft design: A multidisciplinary challenge
NASA Technical Reports Server (NTRS)
Abdi, F.; Ide, H.; Levine, M.; Austel, L.
1989-01-01
Actual design turn-around time has become shorter due to optimization techniques introduced into the design process. What, how and when to use these optimization techniques may be the key factor for future aircraft engineering operations. Another important aspect of this technique is that complex physical phenomena can be modeled by a simple mathematical equation. The new powerful multilevel methodology reduces time-consuming analysis significantly while maintaining the coupling effects. This simultaneous analysis method stems from the implicit function theorem and system sensitivity derivatives of input variables. Use of Taylor series expansion and finite differencing for sensitivity derivatives in each discipline makes this approach well suited for screening dominant variables from nondominant ones. In this study, current Computational Fluid Dynamics (CFD) aerodynamic and sensitivity derivative/optimization techniques are applied to a simple cone-type forebody of a high-speed vehicle configuration to understand the basic aerodynamic/structure interaction in a hypersonic flight condition.
Analysis of world terror networks from the reduced Google matrix of Wikipedia
NASA Astrophysics Data System (ADS)
El Zant, Samer; Frahm, Klaus M.; Jaffrès-Runser, Katia; Shepelyansky, Dima L.
2018-01-01
We apply the reduced Google matrix method to analyze interactions between 95 terrorist groups and determine their relationships and influence on 64 world countries. This is done on the basis of the Google matrix of the English Wikipedia (2017), composed of 5 416 537 articles which accumulate a great part of global human knowledge. The reduced Google matrix takes into account the direct and hidden links between a selection of 159 nodes (articles) appearing due to all paths of a random surfer moving over the whole network. As a result we obtain the network structure of terrorist groups and their relations with selected countries, including hidden indirect links. Using the sensitivity of PageRank to a weight variation of specific links, we determine the geopolitical sensitivity and influence of specific terrorist groups on world countries. World maps of the sensitivity of various countries to the influence of specific terrorist groups are obtained. We argue that this approach can find useful application for more extensive and detailed database analysis.
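The link-variation idea can be reproduced in miniature: compute PageRank on a small weighted network, nudge one link's weight, and take a finite difference. The four-node network below is a toy stand-in for the reduced Google matrix of Wikipedia.

    import numpy as np

    def pagerank(A, alpha=0.85, iters=200):
        # Column-stochastic Google matrix from adjacency A[i, j] = w(j -> i);
        # power iteration with uniform teleportation
        S = A / A.sum(axis=0, keepdims=True)
        n = A.shape[0]
        p = np.full(n, 1.0 / n)
        for _ in range(iters):
            p = alpha * S @ p + (1.0 - alpha) / n
        return p

    A = np.array([[0, 1, 1, 0],
                  [1, 0, 0, 1],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)

    base = pagerank(A)
    eps = 1e-6
    A2 = A.copy()
    A2[2, 0] *= 1.0 + eps                 # vary the weight of link 0 -> 2
    sens = (pagerank(A2) - base) / eps    # sensitivity of all PageRanks
    print(np.round(sens, 4))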
B-value and slip rate sensitivity analysis for PGA value in Lembang fault and Cimandiri fault area
NASA Astrophysics Data System (ADS)
Pratama, Cecep; Ito, Takeo; Meilano, Irwan; Nugraha, Andri Dian
2017-07-01
We examine the contributions of slip rate and b-value to Peak Ground Acceleration (PGA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years, i.e., a 500-year return period). Hazard curves of PGA have been investigated for Sukabumi and Bandung using PSHA (Probabilistic Seismic Hazard Analysis). We observe that the largest influence on the hazard estimate comes from crustal faults. A Monte Carlo approach has been developed to assess the sensitivity. Uncertainty and coefficient of variation from slip rate and b-value in the Lembang and Cimandiri Fault areas have been calculated. We observe that seismic hazard estimates are sensitive to fault slip rate and b-value, with uncertainties of 0.25 g and 0.1-0.2 g, respectively. For specific sites, we found seismic hazard estimates of 0.49 ± 0.13 g with COV 27% and 0.39 ± 0.05 g with COV 13% for Sukabumi and Bandung, respectively.
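A minimal version of the Monte Carlo sensitivity assessment is sketched below: slip rate and b-value are sampled from assumed distributions and pushed through a made-up monotone surrogate for the PGA calculation (not a real ground-motion model), from which the uncertainty and coefficient of variation of the hazard estimate follow.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    slip = rng.normal(2.0, 0.4, n)          # slip rate (mm/yr), hypothetical
    b = rng.normal(1.0, 0.1, n)             # Gutenberg-Richter b-value

    def pga(slip_rate, b_value):
        # Made-up monotone surrogate: hazard grows with slip rate and
        # falls with b-value; NOT a real ground-motion model
        return (0.3 * np.sqrt(np.clip(slip_rate, 0.0, None))
                * np.exp(-0.5 * (b_value - 1.0)))

    g = pga(slip, b)
    cov = g.std() / g.mean()
    print(f"PGA = {g.mean():.2f} +/- {g.std():.2f} g (COV = {cov:.0%})")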
Monte Carlo simulation for slip rate sensitivity analysis in Cimandiri fault area
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pratama, Cecep, E-mail: great.pratama@gmail.com; Meilano, Irwan; Nugraha, Andri Dian
Slip rate is used to estimate the earthquake recurrence relationship, which has the greatest influence on the hazard level. We examine the slip rate contribution to Peak Ground Acceleration (PGA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years, i.e., a 500-year return period). Hazard curves of PGA have been investigated for Sukabumi using PSHA (Probabilistic Seismic Hazard Analysis). We observe that the largest influence on the hazard estimate comes from crustal faults. A Monte Carlo approach has been developed to assess the sensitivity, and the properties of the Monte Carlo simulations have been assessed. Uncertainty and coefficient of variation from slip rate for the Cimandiri Fault area have been calculated. We observe that seismic hazard estimates are sensitive to fault slip rate, with a seismic hazard uncertainty of about 0.25 g. For a specific site, we found the seismic hazard estimate for Sukabumi to be between 0.4904 and 0.8465 g, with uncertainty between 0.0847 and 0.2389 g and COV between 17.7% and 29.8%.
Balsam, Joshua; Bruck, Hugh Alan; Kostov, Yordan; Rasooly, Avraham
2012-01-01
Optical technologies are important for biological analysis. Current biomedical optical analyses rely on high-cost, high-sensitivity optical detectors such as photomultipliers, avalanche photodiodes or cooled CCD cameras. In contrast, Webcams, mobile phones and other popular consumer electronics use lower-sensitivity, lower-cost optical components such as photodiodes or CMOS sensors. In order for consumer electronics devices, such as webcams, to be useful for biomedical analysis, they must have increased sensitivity. We combined two strategies to increase the sensitivity of a CMOS-based fluorescence detector. We captured hundreds of low-sensitivity images using a Webcam in video mode, instead of the single image typically used in cooled CCD devices. We then used a computational approach consisting of an image-stacking algorithm that removes noise by combining all of the images into a single image. While video mode is widely used for dynamic scene imaging (e.g., movies or time-lapse photography), it is not normally used to capture a single static image; here it removes noise and increases sensitivity by more than thirtyfold. The portable, battery-operated Webcam-based fluorometer system developed here consists of five modules: (1) a low-cost CMOS Webcam to monitor light emission, (2) a plate to perform assays, (3) filters and a multi-wavelength LED illuminator for fluorophore excitation, (4) a portable computer to acquire and analyze images, and (5) image-stacking software for image enhancement. The samples consisted of various concentrations of fluorescein, ranging from 30 μM to 1000 μM, in a 36-well miniature plate. In single-frame mode, the fluorometer's limit of detection (LOD) for fluorescein is ∼1000 μM, which is relatively insensitive. However, when used in video mode combined with image-stacking enhancement, the LOD is dramatically reduced to 30 μM, a sensitivity similar to that of state-of-the-art ELISA plate photomultiplier-based readers. Numerous medical diagnostic assays rely on optical and fluorescence readers. Our novel combination of detection technologies, which is new to biodetection, may enable the development of new low-cost optical detectors based on an inexpensive Webcam (<$10). It has the potential to form the basis for high-sensitivity, low-cost medical diagnostics in resource-poor settings.
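The gain from stacking follows from averaging zero-mean noise: the noise standard deviation falls as 1/sqrt(N) over N frames while the signal is preserved. The sketch below demonstrates this on synthetic frames standing in for Webcam video, comparing the signal-to-noise ratio of a single frame against a 300-frame stack.

    import numpy as np

    rng = np.random.default_rng(42)
    signal = np.zeros((64, 64))
    signal[24:40, 24:40] = 2.0                      # faint fluorescent "well"
    frames = signal + rng.normal(0.0, 20.0, (300, 64, 64))  # noisy video

    def snr(img):
        fg = img[24:40, 24:40].mean()               # well region
        bg = img[:16, :16]                          # background corner
        return (fg - bg.mean()) / bg.std()

    single = frames[0]
    stacked = frames.mean(axis=0)                   # the stacking step
    print(f"SNR single: {snr(single):.2f}  stacked: {snr(stacked):.2f}")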
Balsam, Joshua; Bruck, Hugh Alan; Kostov, Yordan; Rasooly, Avraham
2013-01-01
Optical technologies are important for biological analysis. Current biomedical optical analyses rely on high-cost, high-sensitivity optical detectors such as photomultipliers, avalanche photodiodes or cooled CCD cameras. In contrast, Webcams, mobile phones and other popular consumer electronics use lower-sensitivity, lower-cost optical components such as photodiodes or CMOS sensors. In order for consumer electronics devices, such as webcams, to be useful for biomedical analysis, they must have increased sensitivity. We combined two strategies to increase the sensitivity of a CMOS-based fluorescence detector. We captured hundreds of low-sensitivity images using a Webcam in video mode, instead of the single image typically used in cooled CCD devices. We then used a computational approach consisting of an image-stacking algorithm that removes noise by combining all of the images into a single image. While video mode is widely used for dynamic scene imaging (e.g., movies or time-lapse photography), it is not normally used to capture a single static image; here it removes noise and increases sensitivity by more than thirtyfold. The portable, battery-operated Webcam-based fluorometer system developed here consists of five modules: (1) a low-cost CMOS Webcam to monitor light emission, (2) a plate to perform assays, (3) filters and a multi-wavelength LED illuminator for fluorophore excitation, (4) a portable computer to acquire and analyze images, and (5) image-stacking software for image enhancement. The samples consisted of various concentrations of fluorescein, ranging from 30 μM to 1000 μM, in a 36-well miniature plate. In single-frame mode, the fluorometer's limit of detection (LOD) for fluorescein is ∼1000 μM, which is relatively insensitive. However, when used in video mode combined with image-stacking enhancement, the LOD is dramatically reduced to 30 μM, a sensitivity similar to that of state-of-the-art ELISA plate photomultiplier-based readers. Numerous medical diagnostic assays rely on optical and fluorescence readers. Our novel combination of detection technologies, which is new to biodetection, may enable the development of new low-cost optical detectors based on an inexpensive Webcam (<$10). It has the potential to form the basis for high-sensitivity, low-cost medical diagnostics in resource-poor settings. PMID:23990697
System and method for high precision isotope ratio destructive analysis
Bushaw, Bruce A; Anheier, Norman C; Phillips, Jon R
2013-07-02
A system and process are disclosed that provide high accuracy and high precision destructive analysis measurements for isotope ratio determination of relative isotope abundance distributions in liquids, solids, and particulate samples. The invention utilizes a collinear probe beam to interrogate a laser ablated plume. This invention provides enhanced single-shot detection sensitivity approaching the femtogram range, and isotope ratios that can be determined at approximately 1% or better precision and accuracy (relative standard deviation).
Analyzing Responses of Chemical Sensor Arrays
NASA Technical Reports Server (NTRS)
Zhou, Hanying
2007-01-01
NASA is developing a third-generation electronic nose (ENose) capable of continuous monitoring of the International Space Station's cabin atmosphere for specific, harmful airborne contaminants. Previous generations of the ENose have been described in prior NASA Tech Briefs issues. Sensor selection is critical in both (prefabrication) sensor material selection and (post-fabrication) data analysis of the ENose, which detects several analytes that are difficult to detect, or that are at very low concentration ranges. Existing sensor selection approaches usually include limited statistical measures, where selectivity is more important but reliability and sensitivity are not of concern. When reliability and sensitivity can be major limiting factors in detecting target compounds reliably, the existing approach is not able to provide meaningful selection that will actually improve data analysis results. The approach and software reported here consider more statistical measures (factors) than existing approaches for a similar purpose. The result is a more balanced and robust sensor selection from a less-than-ideal sensor array. The software offers quick, flexible, optimal sensor selection and weighting for a variety of purposes without a time-consuming, iterative search, by performing sensor calibrations against a known linear or nonlinear model, evaluating each sensor's statistics, scoring each sensor's overall performance, finding the best sensor array size to maximize class separation, finding optimal weights for the remaining sensor array, estimating limits of detection for the target compounds, evaluating fingerprint distance between group pairs, and finding the best event-detecting sensors.
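One of the simpler statistical measures such a tool can score, class separation per unit noise, is sketched below: each sensor receives a Fisher-style score from "clean air" versus "analyte" response samples, and the top-scoring sensors are kept and weighted. The synthetic data and the single score are illustrative simplifications of the multi-factor scoring the software performs.

    import numpy as np

    rng = np.random.default_rng(7)
    n_sensors = 16
    clean = rng.normal(0.0, 1.0, (50, n_sensors))   # baseline responses
    analyte = rng.normal(0.0, 1.0, (50, n_sensors))
    analyte[:, :5] += rng.uniform(1.0, 3.0, 5)      # only 5 sensors truly respond

    # Fisher-style score: class separation per unit pooled noise
    sep = analyte.mean(axis=0) - clean.mean(axis=0)
    pooled = np.sqrt(0.5 * (analyte.var(axis=0) + clean.var(axis=0)))
    score = np.abs(sep) / pooled

    k = 5
    selected = np.argsort(score)[::-1][:k]          # best array of size k
    weights = score[selected] / score[selected].sum()
    print("selected:", selected)
    print("weights:", np.round(weights, 2))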
An Evaluation of Transplacental Carcinogenesis for Human ...
Risk assessments take into account the sensitivity of the postnatal period to carcinogens through the application of age-dependent adjustment factors (ADAFs) (Barton et al. 2005). The prenatal period is also recognized to be sensitive but is typically not included in risk assessments (NRC, 2009). An analysis by California OEHHA (2008) contrasted prenatal, postnatal and adult sensitivity to 23 different carcinogens across 37 studies. That analysis found a wide range of transplacental sensitivity, with some agents nearly 100-fold more potent in utero than in adults while others had an in utero/adult ratio below 1 (i.e., lower potency than with adult-only exposure). Five carcinogens had more modest ratios to adult potency in both pre- and postnatal testing (vinyl chloride, ethylnitroso biuret, 3-methylcholanthrene, urethane, diethylnitrosamine; 3-10 fold). Only one chemical showed a pre- vs postnatal divergence (butylnitrosourea, prenatal < adult). Based upon this limited set of genotoxic carcinogens, it appears that the prenatal period often has a sensitivity that approximates what has been found for the postnatal period, and the maternal system does not offer substantial protection against transplacental carcinogenesis in most cases. This suggests that the system of ADAFs developed for postnatal exposure may be considered for prenatal exposures as well. An alternative approach may be to calculate cancer risk for the period of pregnancy rather than blend this risk into the calculation of lifetime risk.
Comparative peptidomics analysis of neural adaptations in rats repeatedly exposed to amphetamine.
Romanova, Elena V; Lee, Ji Eun; Kelleher, Neil L; Sweedler, Jonathan V; Gulley, Joshua M
2012-10-01
Repeated exposure to amphetamine (AMPH) induces long-lasting behavioral changes, referred to as sensitization, that are accompanied by various neuroadaptations in the brain. To investigate the chemical changes that occur during behavioral sensitization, we applied a comparative proteomics approach to screen for neuropeptide changes in a rodent model of AMPH-induced sensitization. By measuring peptide profiles with matrix-assisted laser desorption/ionization time-of-flight mass spectrometry and comparing signal intensities using principal component analysis and variance statistics, subsets of peptides are found with significant differences in the dorsal striatum, nucleus accumbens, and medial prefrontal cortex of AMPH-sensitized male Sprague-Dawley rats. These biomarker peptides, identified in follow-up analyses using liquid chromatography and tandem mass spectrometry, suggest that behavioral sensitization to AMPH is associated with complex chemical adaptations that regulate energy/metabolism, neurotransmission, apoptosis, neuroprotection, and neuritogenesis, as well as cytoskeleton integrity and neuronal morphology. Our data contribute to a growing number of reports showing that in addition to the mesolimbic dopamine system, which is the best known signaling pathway involved with reinforcing the effect of psychostimulants, concomitant chemical changes in other pathways and in neuronal organization may play a part in the overall effect of chronic AMPH exposure on behavior.
Global Sensitivity Analysis with Small Sample Sizes: Ordinary Least Squares Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Michael J.; Liu, Wei; Sivaramakrishnan, Raghu
2016-12-21
A new version of global sensitivity analysis is developed in this paper. This new version, coupled with tools from statistics, machine learning, and optimization, can devise small sample sizes that allow for the accurate ordering of sensitivity coefficients for the first 10-30 most sensitive chemical reactions in complex chemical-kinetic mechanisms, and it is particularly useful for studying the chemistry in realistic devices. A key part of the paper is the calibration of these small samples. Because these small sample sizes are developed for use in realistic combustion devices, the calibration is done over the ranges of conditions in such devices, with a test case being the operating conditions of a compression ignition engine studied earlier. Compression ignition engines operate under low-temperature combustion conditions with quite complicated chemistry, making this calibration difficult and leading to the possibility of false positives and false negatives in the ordering of the reactions. An important aspect of the paper is therefore showing how to handle the trade-off between false positives and false negatives using ideas from the multiobjective optimization literature. The combination of the new global sensitivity method and the calibration yields sample sizes approximately a factor of 10 smaller than were available with our previous algorithm.
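The OLS step itself is compact: sample parameter perturbations, evaluate the model, regress the output on the inputs, and order reactions by coefficient magnitude. The toy response below, with three truly active parameters out of twenty, stands in for a kinetic mechanism; in this small-sample regime the stability of the leading-coefficient ordering is exactly what the calibration has to protect against false positives and negatives.

    import numpy as np

    rng = np.random.default_rng(3)
    n_params, n_samples = 20, 60                        # small-sample regime
    X = rng.uniform(-1.0, 1.0, (n_samples, n_params))   # rate perturbations

    true = np.zeros(n_params)
    true[[2, 5, 11]] = [3.0, -2.0, 1.0]                 # 3 "reactions" matter
    y = X @ true + 0.3 * rng.standard_normal(n_samples) # toy model output

    A = np.column_stack([np.ones(n_samples), X])        # intercept + inputs
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    ranking = np.argsort(np.abs(coef[1:]))[::-1]
    print("top parameters:", ranking[:5])               # should lead with 2, 5, 11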
Luo, Chuan; Li, Zhaofu; Li, Hengpeng; Chen, Xiaomin
2015-09-02
The application of hydrological and water quality models is an efficient approach to better understand the processes of environmental deterioration. This study evaluated the ability of the Annualized Agricultural Non-Point Source (AnnAGNPS) model to predict runoff, total nitrogen (TN) and total phosphorus (TP) loading in a typical small watershed of a hilly region near Taihu Lake, China. Runoff was calibrated and validated at both annual and monthly scales, and parameter sensitivity analysis was performed for TN and TP before the two water quality components were calibrated. The results showed that the model satisfactorily simulated runoff at annual and monthly scales, during both the calibration and validation processes. Additionally, the sensitivity analysis showed that TN output was most sensitive to the parameters Fertilizer rate, Fertilizer organic, Canopy cover and Fertilizer inorganic. In terms of TP, the parameters Residue mass ratio, Fertilizer rate, Fertilizer inorganic and Canopy cover were the most influential. Based on these sensitive parameters, calibration was performed. TN loading produced satisfactory results for both the calibration and validation processes, whereas the performance for TP loading was slightly poorer. The simulation results showed that AnnAGNPS has the potential to be used as a valuable tool for the planning and management of watersheds.
Culture-Independent Analysis of Probiotic Products by Denaturing Gradient Gel Electrophoresis
Temmerman, R.; Scheirlinck, I.; Huys, G.; Swings, J.
2003-01-01
In order to obtain functional and safe probiotic products for human consumption, fast and reliable quality control of these products is crucial. Currently, analysis of most probiotics is still based on culture-dependent methods involving the use of specific isolation media and identification of a limited number of isolates, which makes this approach relatively insensitive, laborious, and time-consuming. In this study, a collection of 10 probiotic products, including four dairy products, one fruit drink, and five freeze-dried products, were subjected to microbial analysis by using a culture-independent approach, and the results were compared with the results of a conventional culture-dependent analysis. The culture-independent approach involved extraction of total bacterial DNA directly from the product, PCR amplification of the V3 region of the 16S ribosomal DNA, and separation of the amplicons on a denaturing gradient gel. Digital capturing and processing of denaturing gradient gel electrophoresis (DGGE) band patterns allowed direct identification of the amplicons at the species level. This whole culture-independent approach can be performed in less than 30 h. Compared with culture-dependent analysis, the DGGE approach was found to have a much higher sensitivity for detection of microbial strains in probiotic products in a fast, reliable, and reproducible manner. Unfortunately, as reported in previous studies in which the culture-dependent approach was used, a rather high percentage of probiotic products suffered from incorrect labeling and yielded low bacterial counts, which may decrease their probiotic potential. PMID:12513998
Lu, Xin; Soto, Marcelo A; Thévenaz, Luc
2017-07-10
A method based on coherent Rayleigh scattering that distinctly evaluates temperature and strain is proposed and experimentally demonstrated for distributed optical fiber sensing. Combining conventional phase-sensitive optical time-domain reflectometry (ϕOTDR) and ϕOTDR-based birefringence measurements, independent distributed temperature and strain profiles are obtained along a polarization-maintaining fiber. A theoretical analysis, supported by experimental data, indicates that the proposed system for temperature-strain discrimination is intrinsically better conditioned than an equivalent existing approach that combines classical Brillouin sensing with Brillouin dynamic gratings. This is due to the higher sensitivity of coherent Rayleigh scattering compared to Brillouin scattering, thus offering better performance and lower temperature-strain uncertainties in the discrimination. Compared to the Brillouin-based approach, the ϕOTDR-based system proposed here requires access to only one fiber end and a much simpler experimental layout. Experimental results validate the full discrimination of temperature and strain along a 100 m-long elliptical-core polarization-maintaining fiber with measurement uncertainties of ~40 mK and ~0.5 με, respectively. These values agree very well with the theoretically expected measurand resolutions.
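Discrimination of the two measurands amounts to inverting a 2x2 sensitivity matrix that maps (temperature, strain) to the two measured shifts; the conditioning of that matrix governs how measurement noise maps into measurand uncertainty. The coefficients below are hypothetical round numbers, not the calibrated sensitivities from the paper.

    import numpy as np

    # Hypothetical sensitivity matrix: rows are the two measurements
    # (phi-OTDR spectral shift, birefringence shift), columns the measurands
    # (temperature in K, strain in microstrain). Illustrative values only.
    K = np.array([[-1.50, -0.15],
                  [55.00, -0.90]])

    shifts = np.array([0.9, 10.0])          # measured shifts (arbitrary units)
    dT, strain = np.linalg.solve(K, shifts) # invert the 2x2 system
    print(f"dT = {dT:.3f} K, strain = {strain:.3f} ue")

    # Lower condition number -> noise inflates the measurand errors less
    print("condition number:", round(np.linalg.cond(K), 1))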
NASA Technical Reports Server (NTRS)
Brown, James L.
2014-01-01
The sensitivity of separation extent, wall pressure and heating to variation of primary input flow parameters, such as Mach and Reynolds numbers and shock strength, is examined for 2D and axisymmetric hypersonic shock-wave/turbulent-boundary-layer interactions obtained by Navier-Stokes methods using the SST turbulence model. Baseline parametric sensitivity response is provided in part by comparison with vetted experiments, and in part through updated correlations based on free-interaction-theory concepts. A recent database compilation of hypersonic 2D shock-wave/turbulent-boundary-layer experiments, used extensively in a prior related uncertainty analysis, provides the foundation for this updated correlation approach as well as for more conventional validation. The primary CFD method for this work is DPLR, one of NASA's real-gas aerothermodynamic production RANS codes. Comparisons are also made with CFL3D, one of NASA's mature perfect-gas RANS codes. Deficiencies in the predicted separation response of RANS/SST solutions to parametric variations of test conditions are summarized, along with recommendations as to future turbulence modeling approaches.
Fluorescence-based assay as a new screening tool for toxic chemicals
Moczko, Ewa; Mirkes, Evgeny M.; Cáceres, César; Gorban, Alexander N.; Piletsky, Sergey
2016-01-01
Our study involves the development of a fluorescent cell-based diagnostic assay as a new approach to high-throughput screening. This highly sensitive optical assay operates similarly to e-noses and e-tongues, which combine semi-specific sensors and multivariate data analysis for monitoring biochemical processes. The optical assay consists of a mixture of environment-sensitive fluorescent dyes and human skin cells that generate fluorescence spectra patterns distinctive of particular physico-chemical and physiological conditions. Using chemometric techniques, the optical signal is processed, providing qualitative information about the analytical characteristics of the samples. This integrated approach has been successfully applied (with a sensitivity of 93% and a specificity of 97%) in assessing whether particular chemical agents are irritating to human skin. It has several advantages over traditional biochemical or biological assays and could change the way high-throughput screening and the study of cell activity are approached. It can also provide a reliable and reproducible method for assessing the risk of exposing people to harmful substances, for identifying active compounds in toxicity screening, and for the safety assessment of drugs, cosmetics or their specific ingredients. PMID:27653274
Fluorescence-based assay as a new screening tool for toxic chemicals.
Moczko, Ewa; Mirkes, Evgeny M; Cáceres, César; Gorban, Alexander N; Piletsky, Sergey
2016-09-22
Our study involves the development of a fluorescent cell-based diagnostic assay as a new approach to high-throughput screening. This highly sensitive optical assay operates similarly to e-noses and e-tongues, which combine semi-specific sensors and multivariate data analysis for monitoring biochemical processes. The optical assay consists of a mixture of environment-sensitive fluorescent dyes and human skin cells that generate fluorescence spectra patterns distinctive of particular physico-chemical and physiological conditions. Using chemometric techniques, the optical signal is processed, providing qualitative information about the analytical characteristics of the samples. This integrated approach has been successfully applied (with a sensitivity of 93% and a specificity of 97%) in assessing whether particular chemical agents are irritating to human skin. It has several advantages over traditional biochemical or biological assays and could change the way high-throughput screening and the study of cell activity are approached. It can also provide a reliable and reproducible method for assessing the risk of exposing people to harmful substances, for identifying active compounds in toxicity screening, and for the safety assessment of drugs, cosmetics or their specific ingredients.
Fluorescence-based assay as a new screening tool for toxic chemicals
NASA Astrophysics Data System (ADS)
Moczko, Ewa; Mirkes, Evgeny M.; Cáceres, César; Gorban, Alexander N.; Piletsky, Sergey
2016-09-01
Our study involves the development of a fluorescent cell-based diagnostic assay as a new approach to high-throughput screening. This highly sensitive optical assay operates similarly to e-noses and e-tongues, which combine semi-specific sensors and multivariate data analysis for monitoring biochemical processes. The optical assay consists of a mixture of environment-sensitive fluorescent dyes and human skin cells that generate fluorescence spectra patterns distinctive of particular physico-chemical and physiological conditions. Using chemometric techniques, the optical signal is processed, providing qualitative information about the analytical characteristics of the samples. This integrated approach has been successfully applied (with a sensitivity of 93% and a specificity of 97%) in assessing whether particular chemical agents are irritating to human skin. It has several advantages over traditional biochemical or biological assays and could change the way high-throughput screening and the study of cell activity are approached. It can also provide a reliable and reproducible method for assessing the risk of exposing people to harmful substances, for identifying active compounds in toxicity screening, and for the safety assessment of drugs, cosmetics or their specific ingredients.
Measuring Road Network Vulnerability with Sensitivity Analysis
Jun-qiang, Leng; Long-hai, Yang; Liu, Wei-yi; Zhao, Lin
2017-01-01
This paper focuses on the development of a method for road network vulnerability analysis, from the perspective of capacity degradation, which seeks to identify the critical infrastructures in the road network and the operational performance of the whole traffic system. This research involves defining the traffic utility index and modeling the vulnerability of road segments, routes, OD (Origin-Destination) pairs and the road network. Meanwhile, a sensitivity analysis method is utilized to calculate the change of the traffic utility index due to capacity degradation. This method, compared to traditional traffic assignment, improves calculation efficiency and makes the application of vulnerability analysis to large actual road networks possible. Finally, all of the above models and the calculation method are applied to an actual road network evaluation to verify their efficiency and utility. This approach can be used as a decision-support tool for evaluating the performance of road networks and identifying critical infrastructures in transportation planning and management, especially in resource allocation for mitigation and recovery. PMID:28125706
Motif-based analysis of large nucleotide data sets using MEME-ChIP
Ma, Wenxiu; Noble, William S; Bailey, Timothy L
2014-01-01
MEME-ChIP is a web-based tool for analyzing motifs in large DNA or RNA data sets. It can analyze peak regions identified by ChIP-seq, cross-linking sites identified by CLIP-seq and related assays, as well as sets of genomic regions selected using other criteria. MEME-ChIP performs de novo motif discovery, motif enrichment analysis, motif location analysis and motif clustering, providing a comprehensive picture of the DNA or RNA motifs that are enriched in the input sequences. MEME-ChIP performs two complementary types of de novo motif discovery: weight-matrix-based discovery for high accuracy, and word-based discovery for high sensitivity. Motif enrichment analysis using DNA or RNA motifs from human, mouse, worm, fly and other model organisms provides even greater sensitivity. MEME-ChIP's interactive HTML output groups and aligns significant motifs to ease interpretation. This protocol takes less than 3 h and provides motif discovery approaches that are distinct from and complementary to other online methods. PMID:24853928
Safta, C.; Ricciuto, Daniel M.; Sargsyan, Khachik; ...
2015-07-01
In this paper we propose a probabilistic framework for an uncertainty quantification (UQ) study of a carbon cycle model and focus on the comparison between steady-state and transient simulation setups. A global sensitivity analysis (GSA) study indicates the parameters and parameter couplings that are important at different times of the year for quantities of interest (QoIs) obtained with the data assimilation linked ecosystem carbon (DALEC) model. We then employ a Bayesian approach and a statistical model error term to calibrate the parameters of DALEC using net ecosystem exchange (NEE) observations at the Harvard Forest site. The calibration results are employed in the second part of the paper to assess the predictive skill of the model via posterior predictive checks.
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.; Cohen, Gerald A.; Mroz, Zenon
1990-01-01
A uniform variational approach to sensitivity analysis of vibration frequencies and bifurcation loads of nonlinear structures is developed. Two methods of calculating the sensitivities of bifurcation buckling loads and vibration frequencies of nonlinear structures, with respect to stiffness and initial strain parameters, are presented. A direct method requires calculation of derivatives of the prebuckling state with respect to these parameters. An adjoint method bypasses the need for these derivatives by using instead the strain field associated with the second-order postbuckling state. An operator notation is used and the derivation is based on the principle of virtual work. The derivative computations are easily implemented in structural analysis programs. This is demonstrated by examples using a general purpose, finite element program and a shell-of-revolution program.
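For the symmetric generalized eigenproblem K(p)x = λMx that underlies frequency and buckling calculations, the eigenvalue sensitivity has the closed form dλ/dp = xᵀ(dK/dp)x / (xᵀMx) when M does not depend on the parameter. The sketch below verifies this against a finite difference on a 3-DOF toy system; it illustrates the linear-algebra kernel of such derivative computations, not the paper's variational shell formulation.

    import numpy as np
    from scipy.linalg import eigh

    M = np.diag([2.0, 1.0, 1.0])                  # mass matrix (constant)
    K0 = np.array([[ 4.0, -2.0,  0.0],
                   [-2.0,  4.0, -2.0],
                   [ 0.0, -2.0,  2.0]])
    dK = np.diag([1.0, 0.0, 0.0])                 # dK/dp: stiffens DOF 1

    def lowest(p):
        lam, X = eigh(K0 + p * dK, M)             # generalized eigenproblem
        return lam[0], X[:, 0]

    p = 0.5
    lam, x = lowest(p)
    dlam_dp = (x @ dK @ x) / (x @ M @ x)          # analytic sensitivity
    fd = (lowest(p + 1e-6)[0] - lam) / 1e-6       # finite-difference check
    print(dlam_dp, fd)                            # should agree to ~6 digits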
Reduction and Uncertainty Analysis of Chemical Mechanisms Based on Local and Global Sensitivities
NASA Astrophysics Data System (ADS)
Esposito, Gaetano
Numerical simulations of critical reacting flow phenomena in hypersonic propulsion devices require accurate representation of finite-rate chemical kinetics. The chemical kinetic models available for hydrocarbon fuel combustion are rather large, involving hundreds of species and thousands of reactions. As a consequence, they cannot be used in multi-dimensional computational fluid dynamic calculations in the foreseeable future due to the prohibitive computational cost. In addition to the computational difficulties, it is also known that some fundamental chemical kinetic parameters of detailed models have significant level of uncertainty due to limited experimental data available and to poor understanding of interactions among kinetic parameters. In the present investigation, local and global sensitivity analysis techniques are employed to develop a systematic approach of reducing and analyzing detailed chemical kinetic models. Unlike previous studies in which skeletal model reduction was based on the separate analysis of simple cases, in this work a novel strategy based on Principal Component Analysis of local sensitivity values is presented. This new approach is capable of simultaneously taking into account all the relevant canonical combustion configurations over different composition, temperature and pressure conditions. Moreover, the procedure developed in this work represents the first documented inclusion of non-premixed extinction phenomena, which is of great relevance in hypersonic combustors, in an automated reduction algorithm. The application of the skeletal reduction to a detailed kinetic model consisting of 111 species in 784 reactions is demonstrated. The resulting reduced skeletal model of 37-38 species showed that the global ignition/propagation/extinction phenomena of ethylene-air mixtures can be predicted within an accuracy of 2% of the full detailed model. The problems of both understanding non-linear interactions between kinetic parameters and identifying sources of uncertainty affecting relevant reaction pathways are usually addressed by resorting to Global Sensitivity Analysis (GSA) techniques. In particular, the most sensitive reactions controlling combustion phenomena are first identified using the Morris Method and then analyzed under the Random Sampling-High Dimensional Model Representation (RS-HDMR) framework. The HDMR decomposition shows that 10% of the variance seen in the extinction strain rate of non-premixed flames is due to second-order effects between parameters, whereas the maximum concentration of acetylene, a key soot precursor, is affected by mostly only first-order contributions. Moreover, the analysis of the global sensitivity indices demonstrates that improving the accuracy of the reaction rates including the vinyl radical, C2H3, can drastically reduce the uncertainty of predicting targeted flame properties. Finally, the back-propagation of the experimental uncertainty of the extinction strain rate to the parameter space is also performed. This exercise, achieved by recycling the numerical solutions of the RS-HDMR, shows that some regions of the parameter space have a high probability of reproducing the experimental value of the extinction strain rate between its own uncertainty bounds. Therefore this study demonstrates that the uncertainty analysis of bulk flame properties can effectively provide information on relevant chemical reactions.
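The Morris screening stage can be written in a few lines: build one-at-a-time trajectories through the parameter hypercube, collect elementary effects for each parameter, and summarize them by mu* (mean absolute effect, overall influence) and sigma (spread, indicating nonlinearity or interactions). The toy response below stands in for an expensive kinetics target such as an ignition delay or extinction strain rate.

    import numpy as np

    rng = np.random.default_rng(5)
    k = 8                                   # number of parameters

    def model(x):
        # Toy response: params 0, 3 and 5 matter; 0 and 5 interact
        return 4 * x[0] + 2 * x[3] + 3 * x[0] * x[5] + 0.1 * x.sum()

    r, delta = 40, 0.25                     # trajectories and step size
    ee = [[] for _ in range(k)]
    for _ in range(r):
        x = rng.uniform(0, 1 - delta, k)    # random base point
        for j in rng.permutation(k):        # one-at-a-time moves
            x_new = x.copy()
            x_new[j] += delta
            ee[j].append((model(x_new) - model(x)) / delta)
            x = x_new

    mu_star = np.array([np.mean(np.abs(e)) for e in ee])
    sigma = np.array([np.std(e) for e in ee])
    print("mu*  :", np.round(mu_star, 2))   # large -> influential
    print("sigma:", np.round(sigma, 2))     # large -> nonlinear/interacting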
Harper, Marc; Gronenberg, Luisa; Liao, James; Lee, Christopher
2014-01-01
Discovering all the genetic causes of a phenotype is an important goal in functional genomics. We combine an experimental design for detecting independent genetic causes of a phenotype with a high-throughput sequencing analysis that maximizes sensitivity for comprehensively identifying them. Testing this approach on a set of 24 mutant strains generated for a metabolic phenotype with many known genetic causes, we show that this pathway-based phenotype sequencing analysis greatly improves sensitivity of detection compared with previous methods, and reveals a wide range of pathways that can cause this phenotype. We demonstrate our approach on a metabolic re-engineering phenotype, the PEP/OAA metabolic node in E. coli, which is crucial to a substantial number of metabolic pathways and under renewed interest for biofuel research. Out of 2157 mutations in these strains, pathway-phenoseq discriminated just five gene groups (12 genes) as statistically significant causes of the phenotype. Experimentally, these five gene groups, and the next two high-scoring pathway-phenoseq groups, either have a clear connection to the PEP metabolite level or offer an alternative path of producing oxaloacetate (OAA), and thus clearly explain the phenotype. These high-scoring gene groups also show strong evidence of positive selection pressure, compared with strictly neutral selection in the rest of the genome.
NASA Astrophysics Data System (ADS)
Morschheuser, Lena; Wessels, Hauke; Pille, Christina; Fischer, Judith; Hünniger, Tim; Fischer, Markus; Paschke-Kratzin, Angelika; Rohn, Sascha
2016-05-01
Protein analysis using high-performance thin-layer chromatography (HPTLC) is not commonly used but can complement traditional electrophoretic and mass spectrometric approaches in a unique way. Due to various detection protocols and possibilities for hyphenation, HPTLC protein analysis is a promising alternative for, e.g., investigating posttranslational modifications. This study focused, as an example, on the investigation of lysozyme, an enzyme that occurs in eggs and is technologically added to foods and beverages such as wine. The detection of lysozyme is mandatory, as it might trigger allergenic reactions in sensitive individuals. To underline the advantages of HPTLC in protein analysis, the development of innovative, highly specific staining protocols can improve the sensitivity of protein detection on HPTLC plates in comparison with universal protein derivatization reagents. This study aimed at developing a detection methodology for HPTLC-separated proteins using aptamers. Due to their affinity and specificity towards a wide range of targets, an aptamer-based staining procedure on HPTLC (HPTLC-aptastaining) will enable manifold analytical possibilities. Besides demonstrating its applicability for the very first time, we show that (i) aptamer-based staining of proteins is applicable to different stationary phase materials and (ii) it can be used as an approach for a semi-quantitative estimation of protein concentrations.
Devpura, Suneetha; Pattamadilok, Bensachee; Syed, Zain U.; Vemulapalli, Pranita; Henderson, Marsha; Rehse, Steven J.; Hamzavi, Iltefat; Lim, Henry W.; Naik, Ratna
2011-01-01
Quantification of skin changes due to acanthosis nigricans (AN), a disorder common among insulin-resistant diabetic and obese individuals, was investigated using two optical techniques: diffuse reflectance spectroscopy (DRS) and colorimetry. Measurements were obtained from AN lesions on the neck and two control sites of eight AN patients. A principal component/discriminant function analysis successfully differentiated between AN lesion and normal skin with 87.7% sensitivity and 94.8% specificity in DRS measurements and 97.2% sensitivity and 96.4% specificity in colorimetry measurements. PMID:21698027
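The classification step reported above (principal components feeding a discriminant function, summarized by sensitivity and specificity) can be sketched as follows, with synthetic spectra standing in for the DRS/colorimetry data; the component count, class sizes, and cross-validation scheme are illustrative assumptions, not the study's settings.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

# Hypothetical spectra: rows are diffuse-reflectance measurements, columns are
# wavelengths; y = 1 for AN lesion sites, 0 for normal-skin control sites.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (40, 200)),
               rng.normal(0.6, 1.0, (40, 200))])
y = np.r_[np.zeros(40, dtype=int), np.ones(40, dtype=int)]

# Principal components feed a discriminant function, mirroring the paper's
# principal component / discriminant function analysis.
model = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
y_pred = cross_val_predict(model, X, y, cv=5)

tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
print(f"sensitivity = {tp / (tp + fn):.1%}, specificity = {tn / (tn + fp):.1%}")
```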
Zhang, Yong; Zhou, An; Xie, Xiao-Mei
2013-03-01
A simple and sensitive method has been developed to simultaneously determine betulinic acid, oleanolic acid and ursolic acid in the fruits of Ziziphus jujuba from different regions by HPLC-MS. The HPLC assay was performed on a PAH polymeric C18-bonded stationary phase column with a mobile phase of acetonitrile-water (90:10) and negative ESI detection. The developed approach was characterized by a short chromatographic separation time, high sensitivity and good reliability, meeting the requirements for rapid analysis of large batches of Z. jujuba fruits from different habitats.
Neuroscience and approach/avoidance personality traits: a two stage (valuation-motivation) approach.
Corr, Philip J; McNaughton, Neil
2012-11-01
Many personality theories link specific traits to the sensitivities of the neural systems that control approach and avoidance. But there is no consensus on the nature of these systems. Here we combine recent advances in economics and neuroscience to provide a more solid foundation for a neuroscience of approach/avoidance personality. We propose a two-stage integration of valuation (loss/gain) sensitivities with motivational (approach/avoidance/conflict) sensitivities. Our key conclusions are: (1) that valuation of appetitive and aversive events (e.g. gain and loss as studied by behavioural economists) is an independent perceptual input stage--with the economic phenomenon of loss aversion resulting from greater negative valuation sensitivity compared to positive valuation sensitivity; (2) that valuation of an appetitive stimulus then interacts with a contingency of presentation or omission to generate a motivational 'attractor' or 'repulsor', respectively (vice versa for an aversive stimulus); (3) the resultant behavioural tendencies to approach or avoid have distinct sensitivities to those of the valuation systems; (4) while attractors and repulsors can reinforce new responses they also, more usually, elicit innate or previously conditioned responses and so the perception/valuation-motivation/action complex is best characterised as acting as a 'reinforcer' not a 'reinforcement'; and (5) approach-avoidance conflict must be viewed as activating a third motivation system that is distinct from the basic approach and avoidance systems. We provide examples of methods of assessing each of the constructs within approach-avoidance theories and of linking these constructs to personality measures. We sketch a preliminary five-element reinforcer sensitivity theory (RST-5) as a first step in the integration of existing specific approach-avoidance theories into a coherent neuroscience of personality. Copyright © 2012 Elsevier Ltd. All rights reserved.
On selecting a prior for the precision parameter of Dirichlet process mixture models
Dorazio, R.M.
2009-01-01
In hierarchical mixture models the Dirichlet process is used to specify latent patterns of heterogeneity, particularly when the distribution of latent parameters is thought to be clustered (multimodal). The parameters of a Dirichlet process include a precision parameter α and a base probability measure G0. In problems where α is unknown and must be estimated, inferences about the level of clustering can be sensitive to the choice of prior assumed for α. In this paper an approach is developed for computing a prior for the precision parameter α that can be used in the presence or absence of prior information about the level of clustering. This approach is illustrated in an analysis of counts of stream fishes. The results of this fully Bayesian analysis are compared with an empirical Bayes analysis of the same data and with a Bayesian analysis based on an alternative commonly used prior.
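One way to see why inference is sensitive to the prior on α is to look at the clustering behaviour that a given α implies. The sketch below simulates the Chinese restaurant process induced by a Dirichlet process for a few values of α; it illustrates the sensitivity only and is not the prior-construction procedure developed in the paper.

```python
import numpy as np

def crp_num_clusters(alpha, n, n_sims=2000, rng=None):
    """Simulate the Chinese restaurant process implied by a Dirichlet process
    with precision alpha and return the number of clusters among n draws."""
    if rng is None:
        rng = np.random.default_rng(0)
    counts = np.empty(n_sims, dtype=int)
    for s in range(n_sims):
        sizes = []                       # current cluster sizes
        for i in range(n):
            p_new = alpha / (alpha + i)  # probability of opening a new cluster
            if rng.random() < p_new:
                sizes.append(1)
            else:
                j = rng.choice(len(sizes), p=np.array(sizes) / i)
                sizes[j] += 1
        counts[s] = len(sizes)
    return counts

# The induced prior on the level of clustering depends strongly on alpha,
# which is why inferences can be sensitive to the prior placed on it.
n = 50
for alpha in (0.1, 1.0, 5.0):
    k = crp_num_clusters(alpha, n)
    print(f"alpha = {alpha:4.1f}: E[#clusters] ~= {k.mean():5.2f}")
```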
Zervantonakis, Ioannis K; Iavarone, Claudia; Chen, Hsing-Yu; Selfors, Laura M; Palakurthi, Sangeetha; Liu, Joyce F; Drapkin, Ronny; Matulonis, Ursula; Leverson, Joel D; Sampath, Deepak; Mills, Gordon B; Brugge, Joan S
2017-08-28
The lack of effective chemotherapies for high-grade serous ovarian cancers (HGS-OvCa) has motivated a search for alternative treatment strategies. Here, we present an unbiased systems-approach to interrogate a panel of 14 well-annotated HGS-OvCa patient-derived xenografts for sensitivity to PI3K and PI3K/mTOR inhibitors and uncover cell death vulnerabilities. Proteomic analysis reveals that PI3K/mTOR inhibition in HGS-OvCa patient-derived xenografts induces both pro-apoptotic and anti-apoptotic signaling responses that limit cell killing, but also primes cells for inhibitors of anti-apoptotic proteins. In-depth quantitative analysis of BCL-2 family proteins and other apoptotic regulators, together with computational modeling and selective anti-apoptotic protein inhibitors, uncovers new mechanistic details about apoptotic regulators that are predictive of drug sensitivity (BIM, caspase-3, BCL-XL) and resistance (MCL-1, XIAP). Our systems-approach presents a strategy for systematic analysis of the mechanisms that limit effective tumor cell killing and the identification of apoptotic vulnerabilities to overcome drug resistance in ovarian and other cancers. High-grade serous ovarian cancers (HGS-OvCa) frequently develop chemotherapy resistance. Here, the authors through a systematic analysis of proteomic and drug response data of 14 HGS-OvCa PDXs demonstrate that targeting apoptosis regulators can improve response of these tumors to inhibitors of the PI3K/mTOR pathway.
Saingam, Prakit; Li, Bo; Yan, Tao
2018-06-01
DNA-based molecular detection of microbial pathogens in complex environments is still plagued by sensitivity, specificity and robustness issues. We propose to address these issues by viewing them as inadvertent consequences of requiring specific and adequate amplification (SAA) of target DNA molecules by current PCR methods. Using the invA gene of Salmonella as the model system, we investigated if next generation sequencing (NGS) can be used to directly detect target sequences in false-negative PCR reactions (PCR-NGS) in order to remove the SAA requirement from PCR. False-negative PCR and qPCR reactions were first created using serial dilutions of laboratory-prepared Salmonella genomic DNA and then analyzed directly by NGS. Target invA sequences were detected in all false-negative PCR and qPCR reactions, which lowered the method detection limits to near the theoretical minimum of single gene copy detection. The capability of the PCR-NGS approach in correcting false negativity was further tested and confirmed under more environmentally relevant conditions using Salmonella-spiked stream water and sediment samples. Finally, the PCR-NGS approach was applied to ten urban stream water samples and detected invA sequences in eight samples that would otherwise be deemed Salmonella negative. Analysis of the non-target sequences in the false-negative reactions helped to identify primer dimer-like short sequences as the main cause of the false negativity. Together, the results demonstrated that the PCR-NGS approach can significantly improve method sensitivity, correct false-negative detections, and enable sequence-based analysis for failure diagnostics in complex environmental samples. Copyright © 2018 Elsevier B.V. All rights reserved.
Estimating the safety benefits of context sensitive solutions.
DOT National Transportation Integrated Search
2011-11-01
Context Sensitive Solutions (CSS), also commonly known by the original name Context Sensitive Design : (CSD), is an alternative approach to the conventional transportation-oriented decision-making and design : processes. The CSS approach can be used ...
Marschner, C B; Kokla, M; Amigo, J M; Rozanski, E A; Wiinberg, B; McEvoy, F J
2017-07-11
Diagnosis of pulmonary thromboembolism (PTE) in dogs relies on computed tomography pulmonary angiography (CTPA), but detailed interpretation of CTPA images is demanding for the radiologist and only large vessels may be evaluated. New approaches for better detection of smaller thrombi include dual energy computed tomography (DECT) as well as computer assisted diagnosis (CAD) techniques. The purpose of this study was to investigate the performance of quantitative texture analysis for detecting dogs with PTE using grey-level co-occurrence matrices (GLCM) and multivariate statistical classification analyses. CT images from healthy (n = 6) and diseased (n = 29) dogs with and without PTE confirmed on CTPA were segmented so that only tissue with CT numbers between -1024 and -250 Hounsfield Units (HU) was preserved. GLCM analysis and subsequent multivariate classification analyses were performed on texture parameters extracted from these images. Leave-one-dog-out cross validation and receiver operating characteristic (ROC) analysis showed that the models generated from the texture analysis were able to predict healthy dogs with optimal levels of performance. Partial Least Square Discriminant Analysis (PLS-DA) obtained a sensitivity of 94% and a specificity of 96%, while Support Vector Machines (SVM) yielded a sensitivity of 99% and a specificity of 100%. The models, however, performed worse in classifying the type of disease in the diseased dog group: In diseased dogs with PTE sensitivities were 30% (PLS-DA) and 38% (SVM), and specificities were 80% (PLS-DA) and 89% (SVM). In diseased dogs without PTE the sensitivities of the models were 59% (PLS-DA) and 79% (SVM) and specificities were 79% (PLS-DA) and 82% (SVM). The results indicate that texture analysis of CTPA images using GLCM is an effective tool for distinguishing healthy from abnormal lung. Furthermore the texture of pulmonary parenchyma in dogs with PTE is altered, when compared to the texture of pulmonary parenchyma of healthy dogs. The models' poorer performance in classifying dogs within the diseased group may be related to the low number of dogs relative to the number of texture variables, the unbalanced number of dogs within each group, or a real lack of difference in the texture features among the diseased dogs.
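A minimal sketch of the texture-classification pipeline (grey-level co-occurrence features followed by a classifier evaluated with leave-one-dog-out cross-validation) is shown below using scikit-image and scikit-learn; the image patches, grey-level quantization, feature set, and SVM settings are illustrative assumptions rather than the study's protocol.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict
from sklearn.metrics import confusion_matrix

def glcm_features(img_uint8):
    """Grey-level co-occurrence features for one segmented lung image patch."""
    glcm = graycomatrix(img_uint8, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=64, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Hypothetical data: a few 64-grey-level patches per dog, labelled
# healthy (0) vs abnormal (1), with the dog id used for leave-one-dog-out CV.
rng = np.random.default_rng(2)
images, labels, dogs = [], [], []
for dog_id in range(10):
    healthy = dog_id < 5
    for _ in range(4):
        patch = rng.integers(0, 32 if healthy else 64, size=(32, 32), dtype=np.uint8)
        images.append(patch)
        labels.append(0 if healthy else 1)
        dogs.append(dog_id)

X = np.array([glcm_features(im) for im in images])
y, groups = np.array(labels), np.array(dogs)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
y_pred = cross_val_predict(clf, X, y, cv=LeaveOneGroupOut(), groups=groups)
tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
print(f"sensitivity = {tp / (tp + fn):.0%}, specificity = {tn / (tn + fp):.0%}")
```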
A multimodal spectral approach to characterize rhythm in natural speech.
Alexandrou, Anna Maria; Saarinen, Timo; Kujala, Jan; Salmelin, Riitta
2016-01-01
Human utterances demonstrate temporal patterning, also referred to as rhythm. While simple oromotor behaviors (e.g., chewing) feature a salient periodical structure, conversational speech displays a time-varying quasi-rhythmic pattern. Quantification of periodicity in speech is challenging. Unimodal spectral approaches have highlighted rhythmic aspects of speech. However, speech is a complex multimodal phenomenon that arises from the interplay of articulatory, respiratory, and vocal systems. The present study addressed the question of whether a multimodal spectral approach, in the form of coherence analysis between electromyographic (EMG) and acoustic signals, would allow one to characterize rhythm in natural speech more efficiently than a unimodal analysis. The main experimental task consisted of speech production at three speaking rates; a simple oromotor task served as control. The EMG-acoustic coherence emerged as a sensitive means of tracking speech rhythm, whereas spectral analysis of either EMG or acoustic amplitude envelope alone was less informative. Coherence metrics seem to distinguish and highlight rhythmic structure in natural speech.
The importance of proving the null.
Gallistel, C R
2009-04-01
Null hypotheses are simple, precise, and theoretically important. Conventional statistical analysis cannot support them; Bayesian analysis can. The challenge in a Bayesian analysis is to formulate a suitably vague alternative, because the vaguer the alternative is (the more it spreads out the unit mass of prior probability), the more the null is favored. A general solution is a sensitivity analysis: Compute the odds for or against the null as a function of the limit(s) on the vagueness of the alternative. If the odds on the null approach 1 from above as the hypothesized maximum size of the possible effect approaches 0, then the data favor the null over any vaguer alternative to it. The simple computations and the intuitive graphic representation of the analysis are illustrated by the analysis of diverse examples from the current literature. They pose 3 common experimental questions: (a) Are 2 means the same? (b) Is performance at chance? (c) Are factors additive? (c) 2009 APA, all rights reserved
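For the "are two means the same?" case, the proposed sensitivity analysis can be written out directly: with a normal likelihood and a uniform alternative of half-width L on the true difference, the odds favouring the null are the ratio of the two marginal likelihoods. The numbers below are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def bf_null_vs_uniform_alt(diff, se, half_width):
    """Bayes factor (null / alternative) for 'are two means the same?'.
    Null: true difference = 0.  Alternative: difference ~ Uniform(-L, L)."""
    like_null = norm.pdf(diff, loc=0.0, scale=se)
    # Marginal likelihood under the alternative: average the normal likelihood
    # over the uniform prior on the difference (analytic integral).
    like_alt = (norm.cdf((half_width - diff) / se)
                - norm.cdf((-half_width - diff) / se)) / (2 * half_width)
    return like_null / like_alt

# Hypothetical data: observed mean difference 0.2 with standard error 0.5.
diff, se = 0.2, 0.5
for L in (0.5, 1.0, 2.0, 5.0, 10.0):
    print(f"max effect size L = {L:5.1f}: odds in favour of the null = "
          f"{bf_null_vs_uniform_alt(diff, se, L):6.2f}")
# The vaguer the alternative (larger L), the more the data favour the null.
```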
Roberts, David W; Patlewicz, Grace
2018-01-01
There is an expectation that to meet regulatory requirements, and avoid or minimize animal testing, integrated approaches to testing and assessment will be needed that rely on assays representing key events (KEs) in the skin sensitization adverse outcome pathway. Three non-animal assays have been formally validated and adopted for regulatory use: the direct peptide reactivity assay (DPRA), the KeratinoSens™ assay and the human cell line activation test (h-CLAT). There have been many efforts to develop integrated approaches to testing and assessment with the "two out of three" approach attracting much attention. Here a set of 271 chemicals with mouse, human and non-animal sensitization test data was evaluated to compare the predictive performances of the three individual non-animal assays, their binary combinations and the "two out of three" approach in predicting skin sensitization potential. The most predictive approach was to use both the DPRA and h-CLAT as follows: (1) perform DPRA - if positive, classify as sensitizing, and (2) if negative, perform h-CLAT - a positive outcome denotes a sensitizer, a negative, a non-sensitizer. With this approach, 85% (local lymph node assay) and 93% (human) of non-sensitizer predictions were correct, whereas the "two out of three" approach had 69% (local lymph node assay) and 79% (human) of non-sensitizer predictions correct. The findings are consistent with the argument, supported by published quantitative mechanistic models, that only the first KE needs to be modeled. All three assays model this KE to an extent. The value of using more than one assay depends on how the different assays compensate for each other's technical limitations. Copyright © 2017 John Wiley & Sons, Ltd.
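The decision logic of the best-performing sequential strategy, and of the "two out of three" approach it is compared against, can be written compactly; the example chemical at the end is hypothetical.

```python
from typing import Optional

def sequential_call(dpra_positive: bool, hclat_positive: Optional[bool]) -> bool:
    """Most predictive strategy reported: run the DPRA first; if positive,
    call the chemical a sensitizer, otherwise defer to the h-CLAT result."""
    if dpra_positive:
        return True
    return bool(hclat_positive)

def two_out_of_three(dpra: bool, keratinosens: bool, hclat: bool) -> bool:
    """'Two out of three' approach: a sensitizer if any two assays are positive."""
    return (dpra + keratinosens + hclat) >= 2

# Hypothetical example chemical: DPRA negative, KeratinoSens positive, h-CLAT negative.
print(sequential_call(dpra_positive=False, hclat_positive=False))    # non-sensitizer
print(two_out_of_three(dpra=False, keratinosens=True, hclat=False))  # non-sensitizer
```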
Hanck, Sarah E; Blankenship, Kim M; Irwin, Kevin S; West, Brooke S; Kershaw, Trace
2008-05-01
The accuracy of behavioral data related to risk for HIV and other sexually transmitted infections is prone to misreporting because of social desirability effects. Because computer-assisted approaches are not always feasible, a noncomputerized interview method for reducing social desirability effects is needed. The previous performance of alternative methods has been limited to aggregate data or constrained by the simplicity of dichotomous-only responses. We designed and tested a "polling box" method for case-attributable, multiple-response survey items in a low literacy population. A cross-sectional survey was conducted with 812 female sex workers in Andhra Pradesh, India. For a subset of questions embedded in a face-to-face survey questionnaire, every third participant was provided graphical response cards upon which to mark their answer and place in a polling box outside the view of the interviewer. Multiple logistic regression analysis was used to test for response differences to questions about socially undesirable, socially desirable, or sensitivity-neutral behaviors in the 2 interview methods. Polling box participants demonstrated higher reporting of risky sexual behaviors and lower reporting of condom use, with no conclusive response patterns among sensitivity-neutral items. Our findings suggest that the polling box approach provides a promising technique for improving the accurate reporting of sensitive behaviors among a low-literacy population in a resource poor setting. Additional research is needed to test logistical adaptations of the polling box approach.
Parametric sensitivity analysis of an agro-economic model of management of irrigation water
NASA Astrophysics Data System (ADS)
El Ouadi, Ihssan; Ouazar, Driss; El Menyari, Younesse
2015-04-01
The current work aims to build an analysis and decision support tool for policy options concerning the optimal allocation of water resources, while allowing a better reflection on the issue of valuation of water by the agricultural sector in particular. Thus, a model disaggregated by farm type was developed for the rural town of Ait Ben Yacoub located in eastern Morocco. This model integrates economic, agronomic and hydraulic data and simulates agricultural gross margin in this area, taking into consideration changes in public policy and climatic conditions as well as the competition for collective resources. To identify the model input parameters that influence the results of the model, a parametric sensitivity analysis is performed by the "One-Factor-At-A-Time" approach within the "Screening Designs" method. Preliminary results of this analysis show that among the 10 parameters analyzed, 6 parameters significantly affect the objective function of the model; in order of influence these are: i) coefficient of crop yield response to water, ii) average daily gain in weight of livestock, iii) exchange of livestock reproduction, iv) maximum yield of crops, v) supply of irrigation water and vi) precipitation. These 6 parameters register sensitivity indexes ranging between 0.22 and 1.28. These results indicate high uncertainties in these parameters that can dramatically skew the results of the model, and the need to pay particular attention to their estimates. Keywords: water, agriculture, modeling, optimal allocation, parametric sensitivity analysis, Screening Designs, One-Factor-At-A-Time, agricultural policy, climate change.
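A generic One-Factor-At-A-Time screening of the kind described can be sketched as below; the toy gross-margin model and its parameters are placeholders, not the Ait Ben Yacoub model, and the index is a simple central-difference elasticity.

```python
import numpy as np

def oat_sensitivity(model, nominal, rel_step=0.10):
    """One-Factor-At-A-Time screening: perturb each parameter by +/- rel_step
    around its nominal value and return a normalized sensitivity index
    |relative change in output| / |relative change in parameter|."""
    f0 = model(nominal)
    indices = {}
    for name, value in nominal.items():
        up, down = dict(nominal), dict(nominal)
        up[name] = value * (1 + rel_step)
        down[name] = value * (1 - rel_step)
        d_out = (model(up) - model(down)) / (2 * f0)
        indices[name] = abs(d_out) / rel_step
    return indices

# Hypothetical toy gross-margin model (not the model used in the study):
# margin responds to crop yield, water supply and a yield-response coefficient.
def gross_margin(p):
    yield_eff = p["max_yield"] * min(1.0, p["water_supply"] / 100.0) ** p["ky"]
    return p["price"] * yield_eff - p["cost"]

nominal = {"max_yield": 8.0, "water_supply": 80.0, "ky": 1.2,
           "price": 200.0, "cost": 600.0}
for name, s in sorted(oat_sensitivity(gross_margin, nominal).items(),
                      key=lambda kv: -kv[1]):
    print(f"{name:12s} sensitivity index = {s:.2f}")
```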
Robot-assisted versus open sacrocolpopexy: a cost-minimization analysis.
Elliott, Christopher S; Hsieh, Michael H; Sokol, Eric R; Comiter, Craig V; Payne, Christopher K; Chen, Bertha
2012-02-01
Abdominal sacrocolpopexy is considered a standard of care operation for apical vaginal vault prolapse repair. Using outcomes at our center we evaluated whether the robotic approach to sacrocolpopexy is as cost-effective as the open approach. After obtaining institutional review board approval we performed cost-minimization analysis in a retrospective cohort of patients who underwent sacrocolpopexy at our institution between 2006 and 2010. Threshold values, that is model variable values at which the most cost effective approach crosses over to an alternative approach, were determined by testing model variables over realistic ranges using sensitivity analysis. Hospital billing data were also evaluated to confirm our findings. Operative time was similar for robotic and open surgery (226 vs 221 minutes) but postoperative length of stay differed significantly (1.0 vs 3.3 days, p <0.001). Base case analysis revealed an overall 10% cost savings for robot-assisted vs open sacrocolpopexy ($10,178 vs $11,307). Tornado analysis suggested that the number of institutional robotic cases done annually, length of stay and cost per hospitalization day in the postoperative period were the largest drivers of cost. Analysis of our hospital billing data showed a similar trend with robotic surgery costing 4.2% less than open surgery. A robot-assisted approach to sacrocolpopexy can be equally or less costly than an open approach. This depends on a sufficient institutional robotic case volume and a shorter postoperative stay for patients who undergo the robot-assisted procedure. Copyright © 2012 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
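The threshold analysis can be illustrated with a deliberately simplified cost model: operative times and lengths of stay are taken from the abstract, but every unit cost, the robot purchase and maintenance figures, and the disposables charge are hypothetical, so the crossover volume below is purely illustrative.

```python
def cost_per_case(or_time_min, los_days, amortized_robot_cost=0.0,
                  or_cost_per_min=40.0, ward_cost_per_day=1500.0,
                  disposables=0.0):
    """Very simplified per-case cost model (all unit costs hypothetical)."""
    return (or_time_min * or_cost_per_min + los_days * ward_cost_per_day
            + amortized_robot_cost + disposables)

def robot_amortization(annual_robot_cases, purchase=1_500_000,
                       maintenance=120_000, lifetime_years=7):
    """Purchase plus maintenance spread over the institution's annual robotic volume."""
    return (purchase / lifetime_years + maintenance) / annual_robot_cases

# Threshold analysis: at what annual robotic case volume does the robot-assisted
# approach become cheaper than the open approach?  Operative times (226 vs 221
# minutes) and lengths of stay (1.0 vs 3.3 days) follow the reported values.
for volume in (50, 100, 200, 400):
    robotic = cost_per_case(226, 1.0, robot_amortization(volume), disposables=1800)
    open_ = cost_per_case(221, 3.3)
    winner = "robotic" if robotic < open_ else "open"
    print(f"annual robotic volume {volume:4d}: robotic ${robotic:,.0f} "
          f"vs open ${open_:,.0f} -> {winner}")
```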
Walmsley, Christopher W; McCurry, Matthew R; Clausen, Phillip D; McHenry, Colin R
2013-01-01
Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be 'reasonable' are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results.
McCurry, Matthew R.; Clausen, Phillip D.; McHenry, Colin R.
2013-01-01
Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be ‘reasonable’ are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results. PMID:24255817
A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models
NASA Astrophysics Data System (ADS)
Brugnach, M.; Neilson, R.; Bolte, J.
2001-12-01
The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One of the problems with this approach is that it considers the model as a "black box", and it focuses on explaining model behavior by analyzing the input-output relationship. Since these models have a high degree of non-linearity, understanding how the input affects an output can be an extremely difficult task. Operationally, the application of this technique may constitute a challenging task because complex process-based models are generally characterized by a large parameter space. In order to overcome some of these difficulties, we propose a sensitivity analysis method applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and it aims to determine how sensitive the model output is to variations in the processes. Once the processes that exert the major influence on the output are identified, the causes of its variability can be found. Some of the advantages of this approach are that it reduces the dimensionality of the search space, it facilitates the interpretation of the results and it provides information that allows exploration of uncertainty at the process level, and how it might affect model output. We present an example using the vegetation model BIOME-BGC.
Ebrahim, Shanil; Johnston, Bradley C; Akl, Elie A; Mustafa, Reem A; Sun, Xin; Walter, Stephen D; Heels-Ansdell, Diane; Alonso-Coello, Pablo; Guyatt, Gordon H
2014-05-01
We previously developed an approach to address the impact of missing participant data in meta-analyses of continuous variables in trials that used the same measurement instrument. We extend this approach to meta-analyses including trials that use different instruments to measure the same construct. We reviewed the available literature, conducted an iterative consultative process, and developed an approach involving a complete-case analysis complemented by sensitivity analyses that apply a series of increasingly stringent assumptions about results in patients with missing continuous outcome data. Our approach involves choosing the reference measurement instrument; converting scores from different instruments to the units of the reference instrument; developing four successively more stringent imputation strategies for addressing missing participant data; calculating a pooled mean difference for the complete-case analysis and imputation strategies; calculating the proportion of patients who experienced an important treatment effect; and judging the impact of the imputation strategies on the confidence in the estimate of effect. We applied our approach to an example systematic review of respiratory rehabilitation for chronic obstructive pulmonary disease. Our extended approach provides quantitative guidance for addressing missing participant data in systematic reviews of trials using different instruments to measure the same construct. Copyright © 2014 Elsevier Inc. All rights reserved.
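A heavily simplified, single-trial illustration of the "increasingly stringent assumptions" idea is sketched below; it is not the authors' four imputation strategies, and all counts, scores, and the conversion to reference-instrument units are invented for the example.

```python
# Hypothetical two-arm trial after conversion to reference-instrument units
# (lower scores = better outcome); counts and summaries are illustrative only.
intervention = {"mean": -6.0, "n_obs": 80, "n_miss": 20}
control      = {"mean": -1.0, "n_obs": 85, "n_miss": 15}

def arm_mean_with_imputation(arm, imputed_value):
    """Arm mean after assigning imputed_value to the arm's missing participants."""
    n = arm["n_obs"] + arm["n_miss"]
    return (arm["mean"] * arm["n_obs"] + imputed_value * arm["n_miss"]) / n

# Increasingly stringent assumptions about missing intervention participants:
# they did as well as observed intervention completers, only as well as control
# completers, or worse than control completers (here by 2.5 reference units).
scenarios = {
    "complete-case-like": intervention["mean"],
    "as control mean":    control["mean"],
    "worse than control": control["mean"] + 2.5,
}
for label, imputed in scenarios.items():
    md = arm_mean_with_imputation(intervention, imputed) - control["mean"]
    print(f"{label:18s}: mean difference = {md:5.2f}")
```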
Xiang, Mei-Hao; Liu, Jin-Wen; Li, Na; Tang, Hao; Yu, Ru-Qin; Jiang, Jian-Hui
2016-02-28
Graphitic C3N4 (g-C3N4) nanosheets provide an attractive option for bioprobes and bioimaging applications. Utilizing highly fluorescent and water-dispersible ultrathin g-C3N4 nanosheets, a highly sensitive, selective and label-free biosensor has been developed for ALP detection for the first time. The developed approach utilizes a natural substrate of ALP in biological systems and thus affords very high catalytic efficiency. This novel biosensor is demonstrated to enable quantitative analysis of ALP in a wide range from 0.1 to 1000 U L(-1) with a low detection limit of 0.08 U L(-1), which is among the most sensitive assays for ALP. It is expected that the developed method may provide a low-cost, convenient, rapid and highly sensitive platform for ALP-based clinical diagnostics and biomedical applications.
Savas, Jeffrey N.; De Wit, Joris; Comoletti, Davide; Zemla, Roland; Ghosh, Anirvan
2015-01-01
Ligand-receptor interactions represent essential biological triggers which regulate many diverse and important cellular processes. We have developed a discovery-based proteomic biochemical protocol which couples affinity purification with multidimensional liquid chromatographic tandem mass spectrometry (LCLC-MS/MS) and bioinformatic analysis. Compared to previous approaches, our analysis increases sensitivity, shortens analysis duration, and boosts comprehensiveness. In this protocol, receptor extracellular domains are fused with the Fc region of IgG to generate fusion proteins that are purified from transfected HEK293T cells. These “ecto-Fcs” are coupled to protein A beads and serve as baits for binding assays with prey proteins extracted from rodent brain. After capture, the affinity purified proteins are digested into peptides and comprehensively analyzed by LCLC-MS/MS with ion trap mass spectrometers. In four working days, this protocol can generate shortlists of candidate ligand-receptor protein-protein interactions. Our “Ecto-Fc MS” approach outperforms antibody-based approaches and provides a reproducible and robust framework to identify extracellular ligand – receptor interactions. PMID:25101821
Gorzalczany, Marian B; Rudzinski, Filip
2017-06-07
This paper presents a generalization of self-organizing maps with 1-D neighborhoods (neuron chains) that can be effectively applied to complex cluster analysis problems. The essence of the generalization consists in introducing mechanisms that allow the neuron chain--during learning--to disconnect into subchains, to reconnect some of the subchains again, and to dynamically regulate the overall number of neurons in the system. These features enable the network--working in a fully unsupervised way (i.e., using unlabeled data without a predefined number of clusters)--to automatically generate collections of multiprototypes that are able to represent a broad range of clusters in data sets. First, the operation of the proposed approach is illustrated on some synthetic data sets. Then, this technique is tested using several real-life, complex, and multidimensional benchmark data sets available from the University of California at Irvine (UCI) Machine Learning repository and the Knowledge Extraction based on Evolutionary Learning data set repository. A sensitivity analysis of our approach to changes in control parameters and a comparative analysis with an alternative approach are also performed.
MacLean, Adam L; Harrington, Heather A; Stumpf, Michael P H; Byrne, Helen M
2016-01-01
The last decade has seen an explosion in models that describe phenomena in systems medicine. Such models are especially useful for studying signaling pathways, such as the Wnt pathway. In this chapter we use the Wnt pathway to showcase current mathematical and statistical techniques that enable modelers to gain insight into (models of) gene regulation and generate testable predictions. We introduce a range of modeling frameworks, but focus on ordinary differential equation (ODE) models since they remain the most widely used approach in systems biology and medicine and continue to offer great potential. We present methods for the analysis of a single model, comprising applications of standard dynamical systems approaches such as nondimensionalization, steady state, asymptotic and sensitivity analysis, and more recent statistical and algebraic approaches to compare models with data. We present parameter estimation and model comparison techniques, focusing on Bayesian analysis and coplanarity via algebraic geometry. Our intention is that this (non-exhaustive) review may serve as a useful starting point for the analysis of models in systems medicine.
The enhanced cyan fluorescent protein: a sensitive pH sensor for fluorescence lifetime imaging.
Poëa-Guyon, Sandrine; Pasquier, Hélène; Mérola, Fabienne; Morel, Nicolas; Erard, Marie
2013-05-01
pH is an important parameter that affects many functions of live cells, from protein structure or function to several crucial steps of their metabolism. Genetically encoded pH sensors based on pH-sensitive fluorescent proteins have been developed and used to monitor the pH of intracellular compartments. The quantitative analysis of pH variations can be performed either by ratiometric or fluorescence lifetime detection. However, most available genetically encoded pH sensors are based on green and yellow fluorescent proteins and are not compatible with multicolor approaches. Taking advantage of the strong pH sensitivity of enhanced cyan fluorescent protein (ECFP), we demonstrate here its suitability as a sensitive pH sensor using fluorescence lifetime imaging. The intracellular ECFP lifetime undergoes large changes (32 %) in the pH 5 to pH 7 range, which allows accurate pH measurements to better than 0.2 pH units. By fusion of ECFP with the granular chromogranin A, we successfully measured the pH in secretory granules of PC12 cells, and we performed a kinetic analysis of intragranular pH variations in living cells exposed to ammonium chloride.
Ezendam, Janine; Braakhuis, Hedwig M; Vandebriel, Rob J
2016-12-01
The hazard assessment of skin sensitizers relies mainly on animal testing, but much progress is made in the development, validation and regulatory acceptance and implementation of non-animal predictive approaches. In this review, we provide an update on the available computational tools and animal-free test methods for the prediction of skin sensitization hazard. These individual test methods address mostly one mechanistic step of the process of skin sensitization induction. The adverse outcome pathway (AOP) for skin sensitization describes the key events (KEs) that lead to skin sensitization. In our review, we have clustered the available test methods according to the KE they inform: the molecular initiating event (MIE/KE1)-protein binding, KE2-keratinocyte activation, KE3-dendritic cell activation and KE4-T cell activation and proliferation. In recent years, most progress has been made in the development and validation of in vitro assays that address KE2 and KE3. No standardized in vitro assays for T cell activation are available; thus, KE4 cannot be measured in vitro. Three non-animal test methods, addressing either the MIE, KE2 or KE3, are accepted as OECD test guidelines, and this has accelerated the development of integrated or defined approaches for testing and assessment (e.g. testing strategies). The majority of these approaches are mechanism-based, since they combine results from multiple test methods and/or computational tools that address different KEs of the AOP to estimate skin sensitization potential and sometimes potency. Other approaches are based on statistical tools. Until now, eleven different testing strategies have been published, the majority using the same individual information sources. Our review shows that some of the defined approaches to testing and assessment are able to accurately predict skin sensitization hazard, sometimes even more accurate than the currently used animal test. A few defined approaches are developed to provide an estimate of the potency sub-category of a skin sensitizer as well, but these approaches need further independent evaluation with a new dataset of chemicals. To conclude, this update shows that the field of non-animal approaches for skin sensitization has evolved greatly in recent years and that it is possible to predict skin sensitization hazard without animal testing.
A comparative analysis of numerical approaches to the mechanics of elastic sheets
NASA Astrophysics Data System (ADS)
Taylor, Michael; Davidovitch, Benny; Qiu, Zhanlong; Bertoldi, Katia
2015-06-01
Numerically simulating deformations in thin elastic sheets is a challenging problem in computational mechanics due to destabilizing compressive stresses that result in wrinkling. Determining the location, structure, and evolution of wrinkles in these problems has important implications in design and is an area of increasing interest in the fields of physics and engineering. In this work, several numerical approaches previously proposed to model equilibrium deformations in thin elastic sheets are compared. These include standard finite element-based static post-buckling approaches as well as a recently proposed method based on dynamic relaxation, which are applied to the problem of an annular sheet with opposed tractions where wrinkling is a key feature. Numerical solutions are compared to analytic predictions of the ground state, enabling a quantitative evaluation of the predictive power of the various methods. Results indicate that static finite element approaches produce local minima that are highly sensitive to initial imperfections, relying on a priori knowledge of the equilibrium wrinkling pattern to generate optimal results. In contrast, dynamic relaxation is much less sensitive to initial imperfections and can generate low-energy solutions for a wide variety of loading conditions without requiring knowledge of the equilibrium solution beforehand.
Increasing the sensitivity of reverse phase protein arrays by antibody-mediated signal amplification
2010-01-01
Background: Reverse phase protein arrays (RPPA) have emerged as a useful experimental platform to analyze biological samples in a high-throughput format. Different signal detection methods have been described to generate a quantitative readout on RPPA including the use of fluorescently labeled antibodies. Increasing the sensitivity of RPPA approaches is important since many signaling proteins or posttranslational modifications are present at a low level. Results: A new antibody-mediated signal amplification (AMSA) strategy relying on sequential incubation steps with fluorescently-labeled secondary antibodies reactive against each other is introduced here. The signal quantification is performed in the near-infrared range. The RPPA-based analysis of 14 endogenous proteins in seven different cell lines demonstrated a strong correlation (r = 0.89) between AMSA and standard NIR detection. Probing serial dilutions of human cancer cell lines with different primary antibodies demonstrated that the new amplification approach improved the limit of detection especially for low abundant target proteins. Conclusions: Antibody-mediated signal amplification is a convenient and cost-effective approach for the robust and specific quantification of low abundant proteins on RPPAs. In contrast to other amplification approaches, it allows target protein detection over a large linear range. PMID:20569466
Goverts, S Theo; Huysmans, Elke; Kramer, Sophia E; de Groot, Annette M B; Houtgast, Tammo
2011-12-01
Researchers have used the distortion-sensitivity approach in the psychoacoustical domain to investigate the role of auditory processing abilities in speech perception in noise (van Schijndel, Houtgast, & Festen, 2001; Goverts & Houtgast, 2010). In this study, the authors examined the potential applicability of the distortion-sensitivity approach for investigating the role of linguistic abilities in speech understanding in noise. The authors applied the distortion-sensitivity approach by measuring the processing of visually presented masked text in a condition with manipulated syntactic, lexical, and semantic cues and while using the Text Reception Threshold (George et al., 2007; Kramer, Zekveld, & Houtgast, 2009; Zekveld, George, Kramer, Goverts, & Houtgast, 2007) method. Two groups that differed in linguistic abilities were studied: 13 native and 10 non-native speakers of Dutch, all typically hearing university students. As expected, the non-native subjects showed substantially reduced performance. The results of the distortion-sensitivity approach yielded differentiated results on the use of specific linguistic cues in the 2 groups. The results show the potential value of the distortion-sensitivity approach in studying the role of linguistic abilities in speech understanding in noise of individuals with hearing impairment.
van de Schoot, Rens; Broere, Joris J.; Perryck, Koen H.; Zondervan-Zwijnenburg, Mariëlle; van Loey, Nancy E.
2015-01-01
Background: The analysis of small data sets in longitudinal studies can lead to power issues and often suffers from biased parameter values. These issues can be solved by using Bayesian estimation in conjunction with informative prior distributions. By means of a simulation study and an empirical example concerning posttraumatic stress symptoms (PTSS) following mechanical ventilation in burn survivors, we demonstrate the advantages and potential pitfalls of using Bayesian estimation. Methods: First, we show how to specify prior distributions and by means of a sensitivity analysis we demonstrate how to check the exact influence of the prior (mis-) specification. Thereafter, we show by means of a simulation the situations in which the Bayesian approach outperforms the default maximum likelihood approach. Finally, we re-analyze empirical data on burn survivors which provided preliminary evidence of an aversive influence of a period of mechanical ventilation on the course of PTSS following burns. Results: Not surprisingly, maximum likelihood estimation showed insufficient coverage as well as power with very small samples. Only when Bayesian analysis was used in conjunction with informative priors did power increase to acceptable levels. As expected, we showed that the smaller the sample size, the more the results rely on the prior specification. Conclusion: We show that two issues often encountered during analysis of small samples, power and biased parameters, can be solved by including prior information into Bayesian analysis. We argue that the use of informative priors should always be reported together with a sensitivity analysis. PMID:25765534
van de Schoot, Rens; Broere, Joris J; Perryck, Koen H; Zondervan-Zwijnenburg, Mariëlle; van Loey, Nancy E
2015-01-01
Background: The analysis of small data sets in longitudinal studies can lead to power issues and often suffers from biased parameter values. These issues can be solved by using Bayesian estimation in conjunction with informative prior distributions. By means of a simulation study and an empirical example concerning posttraumatic stress symptoms (PTSS) following mechanical ventilation in burn survivors, we demonstrate the advantages and potential pitfalls of using Bayesian estimation. Methods: First, we show how to specify prior distributions and by means of a sensitivity analysis we demonstrate how to check the exact influence of the prior (mis-) specification. Thereafter, we show by means of a simulation the situations in which the Bayesian approach outperforms the default maximum likelihood approach. Finally, we re-analyze empirical data on burn survivors which provided preliminary evidence of an aversive influence of a period of mechanical ventilation on the course of PTSS following burns. Results: Not surprisingly, maximum likelihood estimation showed insufficient coverage as well as power with very small samples. Only when Bayesian analysis was used in conjunction with informative priors did power increase to acceptable levels. As expected, we showed that the smaller the sample size, the more the results rely on the prior specification. Conclusion: We show that two issues often encountered during analysis of small samples, power and biased parameters, can be solved by including prior information into Bayesian analysis. We argue that the use of informative priors should always be reported together with a sensitivity analysis.
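The core point, that with small samples the posterior leans on the prior, can be illustrated with a conjugate normal model in a few lines; the data, the assumed known measurement SD, and the three candidate priors are hypothetical and unrelated to the PTSS analysis.

```python
import numpy as np

def normal_posterior(data, sigma, prior_mean, prior_sd):
    """Posterior mean and sd for a normal mean with known sigma and a
    conjugate normal prior (analytic update)."""
    n = len(data)
    prec = 1 / prior_sd**2 + n / sigma**2
    post_var = 1 / prec
    post_mean = post_var * (prior_mean / prior_sd**2 + np.sum(data) / sigma**2)
    return post_mean, np.sqrt(post_var)

# Hypothetical small sample of symptom scores (n = 10), measurement sd assumed known.
rng = np.random.default_rng(3)
data = rng.normal(22.0, 8.0, size=10)

priors = {"diffuse        N(0, 100)": (0.0, 100.0),
          "informative    N(20, 5) ": (20.0, 5.0),
          "mis-specified  N(40, 5) ": (40.0, 5.0)}

# Sensitivity analysis: with only 10 observations the posterior shifts visibly
# with the prior, which is why prior choices should be reported and varied.
for label, (m0, s0) in priors.items():
    m, s = normal_posterior(data, sigma=8.0, prior_mean=m0, prior_sd=s0)
    print(f"{label}: posterior mean = {m:5.1f}, sd = {s:4.2f}")
```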
Salvatore, Stefania; Bramness, Jørgen G; Røislien, Jo
2016-07-12
Wastewater-based epidemiology (WBE) is a novel approach in drug use epidemiology which aims to monitor the extent of use of various drugs in a community. In this study, we investigate functional principal component analysis (FPCA) as a tool for analysing WBE data and compare it to traditional principal component analysis (PCA) and to wavelet principal component analysis (WPCA) which is more flexible temporally. We analysed temporal wastewater data from 42 European cities collected daily over one week in March 2013. The main temporal features of ecstasy (MDMA) were extracted using FPCA using both Fourier and B-spline basis functions with three different smoothing parameters, along with PCA and WPCA with different mother wavelets and shrinkage rules. The stability of FPCA was explored through bootstrapping and analysis of sensitivity to missing data. The first three principal components (PCs), functional principal components (FPCs) and wavelet principal components (WPCs) explained 87.5-99.6 % of the temporal variation between cities, depending on the choice of basis and smoothing. The extracted temporal features from PCA, FPCA and WPCA were consistent. FPCA using Fourier basis and common-optimal smoothing was the most stable and least sensitive to missing data. FPCA is a flexible and analytically tractable method for analysing temporal changes in wastewater data, and is robust to missing data. WPCA did not reveal any rapid temporal changes in the data not captured by FPCA. Overall the results suggest FPCA with Fourier basis functions and common-optimal smoothing parameter as the most accurate approach when analysing WBE data.
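A bare-bones version of Fourier-basis FPCA (smooth each city's weekly curve with a small Fourier basis, then run PCA on the basis coefficients) is sketched below on synthetic loads; it omits the roughness penalty and weighting of a full FPCA and is not the analysis pipeline used in the study.

```python
import numpy as np

# Hypothetical daily MDMA loads for 42 cities over one week (arbitrary units).
rng = np.random.default_rng(4)
t = np.arange(7)
weekend = np.exp(-0.5 * ((t - 5.5) / 1.0) ** 2)          # weekend peak shape
cities = rng.gamma(2.0, 5.0, size=(42, 1)) * (1 + 2 * weekend) \
         + rng.normal(0, 1.5, size=(42, 7))

def fourier_basis(t, n_pairs=2, period=7.0):
    """Fourier basis on the weekly interval: constant plus sine/cosine pairs."""
    cols = [np.ones_like(t, dtype=float)]
    for k in range(1, n_pairs + 1):
        cols += [np.sin(2 * np.pi * k * t / period),
                 np.cos(2 * np.pi * k * t / period)]
    return np.column_stack(cols)

B = fourier_basis(t)                                   # 7 x 5 basis matrix
coef, *_ = np.linalg.lstsq(B, cities.T, rcond=None)    # smooth each city's curve
coef = coef.T                                          # 42 cities x 5 coefficients

# Functional PCA here reduces to ordinary PCA on the basis coefficients
# (identity metric; a roughness penalty would add extra smoothing).
coef_c = coef - coef.mean(axis=0)
u, s, vt = np.linalg.svd(coef_c, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("variance explained by first three FPCs:", np.round(explained[:3], 3))
fpc_curves = B @ vt[:3].T                              # FPCs evaluated on the 7 days
print("first FPC weekly shape:", np.round(fpc_curves[:, 0], 2))
```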
DOE Office of Scientific and Technical Information (OSTI.GOV)
Python, Francois; Goebel, Carsten; Aeby, Pierre
2009-09-15
The number of studies involved in the development of in vitro skin sensitization tests has increased since the adoption of the EU 7th amendment to the cosmetics directive proposing to ban animal testing for cosmetic ingredients by 2013. Several studies have recently demonstrated that sensitizers induce a relevant up-regulation of activation markers such as CD86, CD54, IL-8 or IL-1β in human myeloid cell lines (e.g., U937, MUTZ-3, THP-1) or in human peripheral blood monocyte-derived dendritic cells (PBMDCs). The present study aimed at the identification of new dendritic cell activation markers in order to further improve the in vitro evaluation of the sensitizing potential of chemicals. We have compared the gene expression profiles of PBMDCs and the human cell line MUTZ-3 after a 24-h exposure to the moderate sensitizer cinnamaldehyde. A list of 80 genes modulated in both cell types was obtained and a set of candidate marker genes was selected for further analysis. Cells were exposed to selected sensitizers and non-sensitizers for 24 h and gene expression was analyzed by quantitative real-time reverse transcriptase-polymerase chain reaction. Results indicated that PIR, TRIM16 and two Nrf2-regulated genes, CES1 and NQO1, are modulated by most sensitizers. Up-regulation of these genes could also be observed in our recently published DC-activation test with U937 cells. Due to their role in DC activation, these new genes may help to further refine the in vitro approaches for the screening of the sensitizing properties of a chemical.
Liu, Hao; Liu, Haodong; Lapidus, Saul H.; ...
2017-06-21
Lithium transition metal oxides are an important class of electrode materials for lithium-ion batteries. Binary or ternary (transition) metal doping brings about new opportunities to improve the electrode's performance and often leads to more complex stoichiometries and atomic structures than the archetypal LiCoO2. Rietveld structural analysis of X-ray and neutron diffraction data is a widely used approach for structural characterization of crystalline materials. However, different structural models and refinement approaches can lead to differing results, and some parameters can be difficult to quantify due to the inherent limitations of the data. Here, through the example of LiNi0.8Co0.15Al0.05O2 (NCA), we demonstrated the sensitivity of various structural parameters in Rietveld structural analysis to different refinement approaches and structural models, and proposed an approach to reduce refinement uncertainties due to the inexact X-ray scattering factors of the constituent atoms within the lattice. Furthermore, this refinement approach was implemented for electrochemically-cycled NCA samples and yielded accurate structural parameters using only X-ray diffraction data. The present work provides the best practices for performing structural refinement of lithium transition metal oxides.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Hao; Liu, Haodong; Lapidus, Saul H.
Lithium transition metal oxides are an important class of electrode materials for lithium-ion batteries. Binary or ternary (transition) metal doping brings about new opportunities to improve the electrode's performance and often leads to more complex stoichiometries and atomic structures than the archetypal LiCoO2. Rietveld structural analysis of X-ray and neutron diffraction data is a widely used approach for structural characterization of crystalline materials. However, different structural models and refinement approaches can lead to differing results, and some parameters can be difficult to quantify due to the inherent limitations of the data. Here, through the example of LiNi0.8Co0.15Al0.05O2 (NCA), we demonstrated the sensitivity of various structural parameters in Rietveld structural analysis to different refinement approaches and structural models, and proposed an approach to reduce refinement uncertainties due to the inexact X-ray scattering factors of the constituent atoms within the lattice. Furthermore, this refinement approach was implemented for electrochemically-cycled NCA samples and yielded accurate structural parameters using only X-ray diffraction data. The present work provides the best practices for performing structural refinement of lithium transition metal oxides.
Burnum-Johnson, Kristin E; Nie, Song; Casey, Cameron P; Monroe, Matthew E; Orton, Daniel J; Ibrahim, Yehia M; Gritsenko, Marina A; Clauss, Therese R W; Shukla, Anil K; Moore, Ronald J; Purvine, Samuel O; Shi, Tujin; Qian, Weijun; Liu, Tao; Baker, Erin S; Smith, Richard D
2016-12-01
Current proteomic approaches include both broad discovery measurements and quantitative targeted analyses. In many cases, discovery measurements are initially used to identify potentially important proteins (e.g. candidate biomarkers) and then targeted studies are employed to quantify a limited number of selected proteins. Both approaches, however, suffer from limitations. Discovery measurements aim to sample the whole proteome but have lower sensitivity, accuracy, and quantitation precision than targeted approaches, whereas targeted measurements are significantly more sensitive but only sample a limited portion of the proteome. Herein, we describe a new approach that performs both discovery and targeted monitoring (DTM) in a single analysis by combining liquid chromatography, ion mobility spectrometry and mass spectrometry (LC-IMS-MS). In DTM, heavy labeled target peptides are spiked into tryptic digests and both the labeled and unlabeled peptides are detected using LC-IMS-MS instrumentation. Compared with the broad LC-MS discovery measurements, DTM yields greater peptide/protein coverage and detects lower abundance species. DTM also achieved detection limits similar to selected reaction monitoring (SRM) indicating its potential for combined high quality discovery and targeted analyses, which is a significant step toward the convergence of discovery and targeted approaches. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.
Resilience through adaptation.
Ten Broeke, Guus A; van Voorn, George A K; Ligtenberg, Arend; Molenaar, Jaap
2017-01-01
Adaptation of agents through learning or evolution is an important component of the resilience of Complex Adaptive Systems (CAS). Without adaptation, the flexibility of such systems to cope with outside pressures would be much lower. To study the capabilities of CAS to adapt, social simulations with agent-based models (ABMs) provide a helpful tool. However, the value of ABMs for studying adaptation depends on the availability of methodologies for sensitivity analysis that can quantify resilience and adaptation in ABMs. In this paper we propose a sensitivity analysis methodology that is based on comparing time-dependent probability density functions of output of ABMs with and without agent adaptation. The differences between the probability density functions are quantified by the so-called earth-mover's distance. We use this sensitivity analysis methodology to quantify the probability of occurrence of critical transitions and other long-term effects of agent adaptation. To test the potential of this new approach, it is used to analyse the resilience of an ABM of adaptive agents competing for a common-pool resource. Adaptation is shown to contribute positively to the resilience of this ABM. If adaptation proceeds sufficiently fast, it may delay or avert the collapse of this system.
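The comparison at the heart of this methodology, the earth-mover's (Wasserstein) distance between the output distributions of an ABM with and without adaptation at each output time, can be sketched directly with scipy.stats.wasserstein_distance. The toy common-pool-resource model below is a stand-in written for illustration only; the adaptation rule, noise levels, and parameter values are assumptions, not those of the published ABM.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(42)

def run_abm(adaptive, n_steps=100, n_agents=50):
    """Toy stand-in for an ABM of agents harvesting a common-pool resource;
    returns the resource level over time. Adaptive agents lower their harvest
    rate when the resource becomes scarce."""
    resource = 1000.0
    harvest = rng.normal(1.0, 0.2, n_agents).clip(min=0.1)
    trajectory = []
    for _ in range(n_steps):
        if adaptive and resource < 500:
            harvest = harvest * 0.95          # adaptation: back off under scarcity
        else:
            harvest = harvest * 1.02          # otherwise harvesting escalates
        harvest = harvest * rng.normal(1.0, 0.05, n_agents)   # behavioural noise
        resource = max(resource * 1.05 - harvest.sum(), 0.0)  # regrowth minus take
        trajectory.append(resource)
    return np.array(trajectory)

def output_distribution(adaptive, n_reps=200):
    """Monte Carlo replicates give a time-dependent output distribution."""
    return np.array([run_abm(adaptive) for _ in range(n_reps)])

with_adapt = output_distribution(True)
without_adapt = output_distribution(False)

# The earth-mover's distance between the two output distributions at each time
# step quantifies the long-term effect of agent adaptation on the model output.
emd_over_time = [wasserstein_distance(with_adapt[:, t], without_adapt[:, t])
                 for t in range(with_adapt.shape[1])]
print(f"largest earth-mover's distance over time: {max(emd_over_time):.1f}")
```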
Sensitivity Analysis for Probabilistic Neural Network Structure Reduction.
Kowalski, Piotr A; Kusy, Maciej
2018-05-01
In this paper, we propose the use of local sensitivity analysis (LSA) for structure simplification of the probabilistic neural network (PNN). Three algorithms are introduced. The first applies LSA to reduce the PNN input layer by selecting significant features of the input patterns. The second uses LSA to remove redundant pattern neurons from the network. The third combines the first two and shows how they can work together. A PNN with a product kernel estimator is used, in which each multiplicand computes a one-dimensional Cauchy function; the smoothing parameter is therefore calculated separately for each dimension by means of the plug-in method. The classification quality of the reduced and full-structure PNNs is compared. Furthermore, we evaluate the performance of PNNs to which global sensitivity analysis (GSA) and common reduction methods are applied, both in the input layer and in the pattern layer. The models are tested on classification problems from eight repository data sets, and a 10-fold cross-validation procedure is used to determine the prediction ability of the networks. Based on the obtained results, we show that LSA can be used as an alternative PNN reduction approach.
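A hedged sketch of the two ingredients named in the abstract follows: a PNN whose kernel is a product of one-dimensional Cauchy functions with a separate smoothing parameter per dimension, and a local sensitivity score for ranking input features. The per-dimension bandwidths here come from a simple rule of thumb rather than the plug-in estimator used in the paper, and the sensitivity is approximated by finite differences rather than an analytic LSA formulation; both are stand-ins for illustration.

```python
import numpy as np

def cauchy_product_kernel(x, centers, h):
    """Product of 1-D Cauchy kernels: prod_d 1 / (1 + ((x_d - c_d)/h_d)^2)."""
    u = (x - centers) / h                      # shape (n_patterns, n_features)
    return np.prod(1.0 / (1.0 + u ** 2), axis=1)

class PNN:
    """Minimal probabilistic neural network with a product Cauchy kernel."""
    def __init__(self, X, y, h=None):
        self.X, self.y = X, y
        self.classes = np.unique(y)
        # Simple per-dimension bandwidth (the paper uses a plug-in estimator).
        self.h = h if h is not None else X.std(axis=0) * X.shape[0] ** -0.2

    def class_scores(self, x):
        k = cauchy_product_kernel(x, self.X, self.h)
        return np.array([k[self.y == c].mean() for c in self.classes])

    def predict(self, x):
        return self.classes[np.argmax(self.class_scores(x))]

def local_sensitivity(pnn, X_probe, eps=1e-4):
    """Average |d(score of winning class)/d x_j| over probe points: a simple
    finite-difference stand-in for an LSA ranking of the input features."""
    sens = np.zeros(X_probe.shape[1])
    for x in X_probe:
        c = np.argmax(pnn.class_scores(x))
        for j in range(X_probe.shape[1]):
            xp, xm = x.copy(), x.copy()
            xp[j] += eps
            xm[j] -= eps
            sens[j] += abs(pnn.class_scores(xp)[c]
                           - pnn.class_scores(xm)[c]) / (2 * eps)
    return sens / len(X_probe)
```

Features with consistently small sensitivity would be candidates for removal from the input layer; an analogous score computed over pattern neurons would support the pattern-layer reduction described in the second algorithm.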
Respiratory sensitization and allergy: Current research approaches and needs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boverhof, Darrell R.; Billington, Richard; Gollapudi, B. Bhaskar
2008-01-01
There are currently no accepted regulatory models for assessing the potential of a substance to cause respiratory sensitization and allergy. In contrast, a number of models exist for the assessment of contact sensitization and allergic contact dermatitis (ACD). Research indicates that respiratory sensitizers may be identified through contact sensitization assays such as the local lymph node assay, although only a small subset of the compounds that yield positive results in these assays are actually respiratory sensitizers. Due to the increasing health concerns associated with occupational asthma and the impending directives on the regulation of respiratory sensitizers and allergens, an approach which can identify these compounds and distinguish them from contact sensitizers is required. This report discusses some of the important contrasts between respiratory allergy and ACD, and highlights several prominent in vivo, in vitro and in silico approaches that are being applied or could be further developed to identify compounds capable of causing respiratory allergy. Although a number of animal models have been used for researching respiratory sensitization and allergy, protocols and endpoints for these approaches are often inconsistent, costly and difficult to reproduce, thereby limiting meaningful comparisons of data between laboratories and development of a consensus approach. A number of emerging in vitro and in silico models show promise for use in the characterization of contact sensitization potential and should be further explored for their ability to identify and differentiate contact and respiratory sensitizers. Ultimately, the development of a consistent, accurate and cost-effective model will likely incorporate a number of these approaches and will require effective communication, collaboration and consensus among all stakeholders.
NASA Technical Reports Server (NTRS)
Lyle, Karen H.
2014-01-01
Acceptance of new spacecraft structural architectures and concepts requires validated design methods to minimize the expense involved with technology validation via flight testing. This paper explores the implementation of probabilistic methods in the sensitivity analysis of the structural response of a Hypersonic Inflatable Aerodynamic Decelerator (HIAD). HIAD architectures are attractive for spacecraft deceleration because they are lightweight, store compactly, and utilize the atmosphere to decelerate a spacecraft during re-entry. However, designers are hesitant to adopt these inflatable approaches for large payloads or spacecraft because of the lack of flight validation. In the example presented here, the structural parameters of an existing HIAD model have been varied to illustrate the design approach using uncertainty-based methods. Surrogate models have been used to reduce the computational expense by several orders of magnitude. The suitability of the design is assessed from the variation in the resulting cone angle; the acceptable cone-angle variation would be determined by the aerodynamic requirements.
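A minimal sketch of the surrogate-plus-sampling workflow the abstract describes is given below, with the expensive structural analysis replaced by a stand-in algebraic function and the surrogate chosen, as an assumption, to be a quadratic polynomial: fit the surrogate to a small design of experiments, then propagate input uncertainty through it to obtain the cone-angle scatter and a crude per-parameter variance attribution. Parameter names and the form of the stand-in model are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_structural_model(x):
    """Stand-in for the HIAD structural analysis: maps normalized structural
    parameters (e.g. strap stiffness, inflation pressure, cord preload) to a
    cone angle in degrees. The real model would be a finite-element run."""
    k_strap, p_infl, preload = x
    return (70.0 - 3.0 * k_strap + 2.0 * p_infl - 1.5 * preload
            + 0.8 * k_strap * p_infl)

# 1. Design of experiments: a small number of "expensive" runs.
n_train = 40
X_train = rng.uniform(-1, 1, size=(n_train, 3))
y_train = np.array([expensive_structural_model(x) for x in X_train])

# 2. Fit a quadratic polynomial surrogate (linear, interaction, square terms).
def basis(X):
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1 ** 2, x2 ** 2, x3 ** 2])

coef, *_ = np.linalg.lstsq(basis(X_train), y_train, rcond=None)

def surrogate(X):
    return basis(X) @ coef

# 3. Monte Carlo on the cheap surrogate: cone-angle scatter plus a crude
#    one-at-a-time variance attribution for each structural parameter.
X_mc = rng.uniform(-1, 1, size=(100_000, 3))
angles = surrogate(X_mc)
print(f"cone angle: mean {angles.mean():.2f} deg, std {angles.std():.3f} deg")

for j, name in enumerate(["strap stiffness", "inflation pressure", "preload"]):
    X_fix = X_mc.copy()
    X_fix[:, j] = 0.0                      # freeze one parameter at nominal
    reduction = 1.0 - surrogate(X_fix).var() / angles.var()
    print(f"variance explained by {name}: {reduction:.2%}")
```

The variance reduction obtained by freezing each parameter gives a rough ranking of which structural inputs dominate the cone-angle variation, at a cost of one surrogate evaluation per Monte Carlo sample rather than one full structural analysis.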