Sample records for generalized sensitivity analysis

  1. Sensitivity analysis for parametric generalized implicit quasi-variational-like inclusions involving P-[eta]-accretive mappings

    NASA Astrophysics Data System (ADS)

    Kazmi, K. R.; Khan, F. A.

    2008-01-01

    In this paper, using the proximal-point mapping technique for P-[eta]-accretive mappings and the property of the fixed-point set of set-valued contractive mappings, we study the behavior and sensitivity analysis of the solution set of a parametric generalized implicit quasi-variational-like inclusion involving a P-[eta]-accretive mapping in a real uniformly smooth Banach space. Further, under suitable conditions, we discuss the Lipschitz continuity of the solution set with respect to the parameter. The technique and results presented in this paper can be viewed as an extension of the techniques and corresponding results given in [R.P. Agarwal, Y.-J. Cho, N.-J. Huang, Sensitivity analysis for strongly nonlinear quasi-variational inclusions, Appl. Math. Lett. 13 (2002) 19-24; S. Dafermos, Sensitivity analysis in variational inequalities, Math. Oper. Res. 13 (1988) 421-434; X.-P. Ding, Sensitivity analysis for generalized nonlinear implicit quasi-variational inclusions, Appl. Math. Lett. 17 (2) (2004) 225-235; X.-P. Ding, Parametric completely generalized mixed implicit quasi-variational inclusions involving h-maximal monotone mappings, J. Comput. Appl. Math. 182 (2) (2005) 252-269; X.-P. Ding, C.L. Luo, On parametric generalized quasi-variational inequalities, J. Optim. Theory Appl. 100 (1999) 195-205; Z. Liu, L. Debnath, S.M. Kang, J.S. Ume, Sensitivity analysis for parametric completely generalized nonlinear implicit quasi-variational inclusions, J. Math. Anal. Appl. 277 (1) (2003) 142-154; R.N. Mukherjee, H.L. Verma, Sensitivity analysis of generalized variational inequalities, J. Math. Anal. Appl. 167 (1992) 299-304; M.A. Noor, Sensitivity analysis framework for general quasi-variational inclusions, Comput. Math. Appl. 44 (2002) 1175-1181; M.A. Noor, Sensitivity analysis for quasivariational inclusions, J. Math. Anal. Appl. 236 (1999) 290-299; J.Y. Park, J.U. Jeong, Parametric generalized mixed variational inequalities, Appl. Math. Lett. 17 (2004) 43-48].

  2. Sensitivity Analysis in Engineering

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M. (Compiler); Haftka, Raphael T. (Compiler)

    1987-01-01

    The symposium proceedings presented here focus primarily on sensitivity analysis of structural response. However, the first session, entitled General and Multidisciplinary Sensitivity, covered areas such as physics, chemistry, controls, and aerodynamics. The other four sessions were concerned with the sensitivity of structural systems modeled by finite elements. Session 2 dealt with Static Sensitivity Analysis and Applications; Session 3 with Eigenproblem Sensitivity Methods; Session 4 with Transient Sensitivity Analysis; and Session 5 with Shape Sensitivity Analysis.

  3. Anxiety Sensitivity Dimensions and Generalized Anxiety Severity: The Mediating Role of Experiential Avoidance and Repetitive Negative Thinking

    PubMed Central

    Mohammadkhani, Parvaneh; Pourshahbaz, Abbas; Kami, Maryam; Mazidi, Mahdi; Abasi, Imaneh

    2016-01-01

    Objective: Generalized anxiety disorder is one of the most common anxiety disorders in the general population. Several studies suggest that anxiety sensitivity is a vulnerability factor in generalized anxiety severity. However, other studies suggest that repetitive negative thinking and experiential avoidance, as response factors, can explain this relationship. Therefore, this study aimed to investigate the mediating role of experiential avoidance and repetitive negative thinking in the relationship between anxiety sensitivity and generalized anxiety severity. Method: This was a cross-sectional, correlational study. A sample of 475 university students was selected through a stratified sampling method. The participants completed the Anxiety Sensitivity Inventory-3, the Acceptance and Action Questionnaire-II, the Perseverative Thinking Questionnaire, and the Generalized Anxiety Disorder 7-item Scale. Data were analyzed by Pearson correlation, multiple regression analysis, and path analysis. Results: The results revealed a positive relationship between anxiety sensitivity (particularly cognitive anxiety sensitivity), experiential avoidance, repetitive thinking, and generalized anxiety severity. In addition, the findings showed that repetitive thinking, but not experiential avoidance, fully mediated the relationship between cognitive anxiety sensitivity and generalized anxiety severity. The α level was p < 0.005. Conclusion: Consistent with the trans-diagnostic hypothesis, anxiety sensitivity predicts generalized anxiety severity, but its effect operates through the generation of repetitive negative thoughts. PMID:27928245

  4. Predictive Uncertainty And Parameter Sensitivity Of A Sediment-Flux Model: Nitrogen Flux and Sediment Oxygen Demand

    EPA Science Inventory

    Estimating model predictive uncertainty is imperative to informed environmental decision making and management of water resources. This paper applies the Generalized Sensitivity Analysis (GSA) to examine parameter sensitivity and the Generalized Likelihood Uncertainty Estimation...

  5. Generalized sensitivity analysis of the minimal model of the intravenous glucose tolerance test.

    PubMed

    Munir, Mohammad

    2018-06-01

    Generalized sensitivity functions characterize the sensitivity of the parameter estimates with respect to the nominal parameters. We observe from the generalized sensitivity analysis of the minimal model of the intravenous glucose tolerance test that the measurements of insulin taken more than 62 min after the administration of the glucose bolus into the experimental subject's body possess no information about the parameter estimates. The glucose measurements possess information about the parameter estimates for up to three hours. These observations have been verified by the parameter estimation of the minimal model. The standard errors of the estimates and a crude Monte Carlo process also confirm this observation.
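The generalized sensitivity functions referred to here (in the sense of Thomaseth and Cobelli) reduce, for a single parameter, to a cumulative normalized sum of squared output sensitivities that rises from 0 to 1; once it saturates, later measurements carry no further information about the parameter, which is the kind of behavior the abstract reports for insulin after 62 min. A minimal sketch with an invented one-parameter exponential-decay model (not the minimal model itself):

```python
import math

def output(p, t):
    # invented one-parameter model: y(t) = exp(-p * t); stands in for the
    # minimal model only to show the shape of the calculation
    return math.exp(-p * t)

def sensitivity(p, t, h=1e-6):
    # central-difference output sensitivity dy/dp at the nominal parameter
    return (output(p + h, t) - output(p - h, t)) / (2 * h)

def generalized_sensitivity(p, times):
    # scalar generalized sensitivity function:
    # gs(t_l) = (sum_{k<=l} s_k^2) / (sum over all k of s_k^2);
    # it rises from ~0 to 1, and once gs ~= 1, later samples add no
    # information about p (with several parameters this uses the Fisher
    # matrix and need not be monotone)
    s2 = [sensitivity(p, t) ** 2 for t in times]
    total = sum(s2)
    acc, gs = 0.0, []
    for v in s2:
        acc += v
        gs.append(acc / total)
    return gs

times = [0.25 * k for k in range(1, 41)]   # sampling grid on (0, 10]
gs = generalized_sensitivity(0.8, times)
```

For this decay model, nearly all of the information accrues around the time the sensitivity |t e^(-pt)| peaks; the tail of the sampling grid contributes essentially nothing, mirroring the paper's observation.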

  6. Design sensitivity analysis of nonlinear structural response

    NASA Technical Reports Server (NTRS)

    Cardoso, J. B.; Arora, J. S.

    1987-01-01

    A unified theory of design sensitivity analysis of linear and nonlinear structures for shape, nonshape, and material selection problems is described. The concepts of reference volume and adjoint structure are used to develop this unified viewpoint. A general formula for design sensitivity analysis is derived. Simple analytical linear and nonlinear examples are used to interpret the various terms of the formula and demonstrate its use.

  7. Sensitivity analysis of a wing aeroelastic response

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.; Eldred, Lloyd B.; Barthelemy, Jean-Francois M.

    1991-01-01

    A variation of Sobieski's Global Sensitivity Equations (GSE) approach is implemented to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model. The formulation is quite general and accepts any aerodynamic and structural analysis capability. An interface code is written to convert one analysis's output to the other's input, and vice versa. Local sensitivity derivatives are calculated by either analytic methods or finite difference techniques. A program to combine the local sensitivities, such as the sensitivity of the stiffness matrix or the aerodynamic kernel matrix, into global sensitivity derivatives is developed. The aerodynamic analysis package FAST, which uses a lifting-surface theory, and the structural package ELAPS, which implements Giles' equivalent plate model, are used.
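The GSE idea of assembling local, discipline-level derivatives into system-level ones can be sketched with a scalar caricature: two coupled "disciplines" exchange outputs, and the total derivative follows from the local partials once the coupling is resolved. The functions and coefficients below are invented, not taken from FAST or ELAPS:

```python
# invented coupled system (stand-in for aero/structures coupling):
#   u = f(w, x) = 0.5*w + x        ("structures": deflection u from load w)
#   w = g(u, x) = 0.3*u + 2*x      ("aerodynamics": load w from deflection u)
df_dw, df_dx = 0.5, 1.0    # local sensitivities of discipline 1
dg_du, dg_dx = 0.3, 2.0    # local sensitivities of discipline 2

# scalar Global Sensitivity Equation:
# du/dx = (df/dx + df/dw * dg/dx) / (1 - df/dw * dg/du)
du_dx = (df_dx + df_dw * dg_dx) / (1 - df_dw * dg_du)

# cross-check against the closed-form coupled solution:
# u = 0.5*(0.3*u + 2*x) + x  =>  u(x) = 2*x / 0.85
def coupled_u(x):
    return 2 * x / 0.85

fd = (coupled_u(1.0 + 1e-6) - coupled_u(1.0 - 1e-6)) / 2e-6
```

In the matrix-valued case the denominator becomes the system Jacobian of the coupling, which is exactly what combining stiffness-matrix and aerodynamic-kernel sensitivities amounts to.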

  8. Sensitivity analysis as an aid in modelling and control of (poorly-defined) ecological systems

    NASA Technical Reports Server (NTRS)

    Hornberger, G. M.; Rastetter, E. B.

    1982-01-01

    A literature review of the use of sensitivity analysis in modelling nonlinear, ill-defined systems, such as ecological interactions, is presented. Discussions of previous work and a proposed scheme for generalized sensitivity analysis applicable to ill-defined systems are included. This scheme considers classes of mathematical models, problem-defining behavior, analysis procedures (especially the use of Monte Carlo methods), sensitivity ranking of parameters, and extension to control-system design.
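The scheme described descends from Monte Carlo "regional" sensitivity analysis in the style of Hornberger and Spear: sample parameters, classify each run as exhibiting the problem-defining behaviour or not, and rank parameters by how strongly the behavioural and non-behavioural parameter distributions separate (e.g., by a Kolmogorov-Smirnov distance). A toy sketch with an invented model in which only parameter `a` matters:

```python
import random
from bisect import bisect_right

random.seed(1)

def model(a, b):
    # invented system response; parameter b is deliberately irrelevant
    return a

def behaviour(y):
    # the "problem-defining behaviour": response inside an acceptable band
    return 0.4 <= y <= 0.6

def ks_distance(xs, ys):
    # maximum vertical distance between two empirical CDFs
    xs, ys = sorted(xs), sorted(ys)
    def cdf(data, v):
        return bisect_right(data, v) / len(data)
    return max(abs(cdf(xs, v) - cdf(ys, v)) for v in xs + ys)

samples = [(random.random(), random.random()) for _ in range(500)]
behav = [s for s in samples if behaviour(model(*s))]
non = [s for s in samples if not behaviour(model(*s))]

# separation of behavioural vs non-behavioural marginals, per parameter:
ks_a = ks_distance([a for a, _ in behav], [a for a, _ in non])
ks_b = ks_distance([b for _, b in behav], [b for _, b in non])
```

The influential parameter produces a large separation while the irrelevant one does not, which is the "sensitivity ranking" the scheme delivers without any derivatives.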

  9. Sensitivity Analysis for some Water Pollution Problem

    NASA Astrophysics Data System (ADS)

    Le Dimet, François-Xavier; Tran Thu, Ha; Hussaini, Yousuff

    2014-05-01

    Sensitivity Analysis for Some Water Pollution Problems Francois-Xavier Le Dimet1 & Tran Thu Ha2 & M. Yousuff Hussaini3 1Université de Grenoble, France, 2Vietnamese Academy of Sciences, 3 Florida State University Sensitivity analysis employs some response function and the variable with respect to which its sensitivity is evaluated. If the state of the system is retrieved through a variational data assimilation process, then the observation appears only in the Optimality System (OS). In many cases, observations have errors and it is important to estimate their impact. Therefore, sensitivity analysis has to be carried out on the OS, and in that sense sensitivity analysis is a second order property. The OS can be considered as a generalized model because it contains all the available information. This presentation proposes a method to carry out sensitivity analysis in general. The method is demonstrated with an application to water pollution problem. The model involves shallow waters equations and an equation for the pollutant concentration. These equations are discretized using a finite volume method. The response function depends on the pollutant source, and its sensitivity with respect to the source term of the pollutant is studied. Specifically, we consider: • Identification of unknown parameters, and • Identification of sources of pollution and sensitivity with respect to the sources. We also use a Singular Evolutive Interpolated Kalman Filter to study this problem. The presentation includes a comparison of the results from these two methods. .

  10. A discourse on sensitivity analysis for discretely-modeled structures

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Haftka, Raphael T.

    1991-01-01

    A descriptive review of the most recent methods for performing sensitivity analysis of the structural behavior of discretely modeled systems is presented. The methods are generally, but not exclusively, aimed at finite element modeled structures. Topics include: selection of finite difference step sizes; special considerations for finite difference sensitivity of iteratively solved response problems; first and second derivatives of static structural response; sensitivity of stresses; nonlinear static response sensitivity; eigenvalue and eigenvector sensitivities for both distinct and repeated eigenvalues; and sensitivity of transient response for both linear and nonlinear structural response.
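On the first topic, finite difference step-size selection, the familiar trade-off is that too large a step incurs truncation error while too small a step amplifies roundoff. A central difference of exp(x) makes the point (the step values are illustrative, not recommendations from the review):

```python
import math

def central_diff(f, x, h):
    # second-order central difference: truncation error ~ h^2,
    # roundoff error ~ machine_eps / h, so accuracy is non-monotone in h
    return (f(x + h) - f(x - h)) / (2 * h)

exact = math.exp(1.0)            # d/dx exp(x) at x = 1 is exp(1)
errors = {h: abs(central_diff(math.exp, 1.0, h) - exact)
          for h in (1e-1, 1e-5, 1e-13)}
```

The moderate step wins: the large step is truncation-limited, the tiny step is roundoff-limited, which is why step-size selection merits its own session in a sensitivity-analysis review.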

  11. Variational Methods in Sensitivity Analysis and Optimization for Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Ibrahim, A. H.; Hou, G. J.-W.; Tiwari, S. N. (Principal Investigator)

    1996-01-01

    Variational methods (VM) sensitivity analysis, the continuous alternative to discrete sensitivity analysis, is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The determination of the sensitivity derivatives of the performance index or functional entails the coupled solutions of the state and costate equations. As the stable and converged numerical solution of the costate equations with their boundary conditions is a priori unknown, numerical stability analysis is performed on both the state and costate equations. Thereafter, based on the amplification factors obtained by solving the generalized eigenvalue equations, the stability behavior of the costate equations is discussed and compared with that of the state (Euler) equations. The stability analysis of the costate equations suggests that a converged and stable solution of the costate equations is possible only if their computational domain is transformed to take into account the reverse-flow nature of the costate equations. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods offer a substantial gain in computational efficiency, i.e., computer time and memory, when compared with finite difference sensitivity analysis.
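The efficiency gain reported for adjoint (costate) methods can be sketched on a discrete toy problem: for a state equation K(p) u = f with objective J = c·u, a single adjoint solve Kᵀλ = c yields dJ/dp = -λᵀ(∂K/∂p)u for every design variable at once. The 2x2 system below is invented for illustration:

```python
def solve2(A, b):
    # Cramer's rule for a 2x2 linear system
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def K(p):
    # invented "state equation" matrix depending on a design variable p
    return [[4.0 + p, 1.0], [2.0, 3.0]]

f = [1.0, 2.0]          # right-hand side (independent of p)
c = [1.0, 0.0]          # objective J(u) = c . u = u[0]

def J(p):
    return solve2(K(p), f)[0]

p0 = 2.0
u = solve2(K(p0), f)

# one adjoint (costate) solve K^T lam = c serves every design variable
Kp = K(p0)
Kt = [[Kp[0][0], Kp[1][0]], [Kp[0][1], Kp[1][1]]]
lam = solve2(Kt, c)

dK_dp = [[1.0, 0.0], [0.0, 0.0]]   # dK/dp for this example
dJ_dp = -sum(lam[i] * dK_dp[i][j] * u[j]
             for i in range(2) for j in range(2))

fd = (J(p0 + 1e-6) - J(p0 - 1e-6)) / 2e-6   # finite-difference check
```

A finite-difference approach would need two extra state solves per design variable; the adjoint route needs one extra solve total, which is the source of the time and memory advantage the study reports.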

  12. Sensitivity analysis and approximation methods for general eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Murthy, D. V.; Haftka, R. T.

    1986-01-01

    Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and the reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared, and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed, based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the cost of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on the trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of an appropriate approximation technique as a function of the matrix size, the number of design variables, the number of eigenvalues of interest, and the number of design points at which approximation is sought.
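For a simple eigenvalue of a non-hermitian matrix, the standard sensitivity formula uses both left and right eigenvectors, dλ/dp = yᵀ(∂A/∂p)x / (yᵀx); the normalization question the abstract raises enters through the yᵀx factor. A hand-worked 2x2 example (matrix invented, eigenvectors computed by hand):

```python
import math

# invented non-symmetric matrix A(p) = [[2, p], [1, 3]]; at p0 = 2 its
# larger eigenvalue is lam = 4 with distinct left and right eigenvectors
p0 = 2.0
x = [1.0, 1.0]          # right eigenvector: (A - 4I) x = 0
y = [1.0, 2.0]          # left eigenvector:  (A^T - 4I) y = 0
dA_dp = [[0.0, 1.0], [0.0, 0.0]]

# sensitivity of a simple eigenvalue of a non-hermitian matrix:
# dlam/dp = y^T (dA/dp) x / (y^T x)
num = sum(y[i] * dA_dp[i][j] * x[j] for i in range(2) for j in range(2))
den = sum(yi * xi for yi, xi in zip(y, x))
dlam_dp = num / den

def lam_plus(p):
    # the larger eigenvalue in closed form, for a finite-difference check:
    # char. poly. lam^2 - 5 lam + (6 - p) = 0
    return (5 + math.sqrt(1 + 4 * p)) / 2

fd = (lam_plus(p0 + 1e-7) - lam_plus(p0 - 1e-7)) / 2e-7
```

For a symmetric matrix y and x coincide and the formula collapses to the Rayleigh-quotient derivative; the generalized Rayleigh quotient mentioned in the abstract plays the analogous role in the approximate-reanalysis setting.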

  13. Efficient Analysis of Complex Structures

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.

    2000-01-01

    The various accomplishments achieved during this project are: (1) A survey of Neural Network (NN) applications using the MATLAB NN Toolbox in structural engineering, especially on equivalent continuum models (Appendix A). (2) Application of NNs and GAs to simulate and synthesize substructures: 1-D and 2-D beam problems (Appendix B). (3) Development of an equivalent plate-model analysis method (EPA) for static and vibration analysis of general trapezoidal built-up wing structures composed of skins, spars, and ribs; calculation of a variety of test cases and comparison with measurements or FEA results (Appendix C). (4) Basic work on using second-order sensitivities to simulate wing modal response, discussion of sensitivity evaluation approaches, and some results (Appendix D). (5) Establishment of a general methodology for simulating modal responses by direct application of NNs and by sensitivity techniques, in a design space composed of a number of design points; comparison is made through examples using these two methods (Appendix E). (6) Establishment of a general methodology for efficient analysis of complex wing structures by indirect application of NNs: the NN-aided Equivalent Plate Analysis. The Neural Networks were trained for this purpose over several design spaces, making the approach applicable to actual design of complex wings (Appendix F).

  14. General methods for sensitivity analysis of equilibrium dynamics in patch occupancy models

    USGS Publications Warehouse

    Miller, David A.W.

    2012-01-01

    Sensitivity analysis is a useful tool for the study of ecological models that has many potential applications for patch occupancy modeling. Drawing from the rich foundation of existing methods for Markov chain models, I demonstrate new methods for sensitivity analysis of the equilibrium state dynamics of occupancy models. Estimates from three previous studies are used to illustrate the utility of the sensitivity calculations: a joint occupancy model for a prey species, its predators, and habitat used by both; occurrence dynamics from a well-known metapopulation study of three butterfly species; and Golden Eagle occupancy and reproductive dynamics. I show how to deal efficiently with multistate models and how to calculate sensitivities involving derived state variables and lower-level parameters. In addition, I extend methods to incorporate environmental variation by allowing for spatial and temporal variability in transition probabilities. The approach used here is concise and general and can fully account for environmental variability in transition parameters. The methods can be used to improve inferences in occupancy studies by quantifying the effects of underlying parameters, aiding prediction of future system states, and identifying priorities for sampling effort.
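The flavor of such equilibrium sensitivity calculations can be shown on the simplest case, a Levins-type single-species occupancy model with colonization probability γ and extinction probability ε, whose equilibrium occupancy is ψ* = γ/(γ+ε) with ∂ψ*/∂γ = ε/(γ+ε)²; the paper's multistate, spatially varying machinery generalizes this. Parameter values below are invented:

```python
def equilibrium_occupancy(gamma, eps, n_iter=2000):
    # iterate the occupancy dynamics psi <- psi + gamma*(1 - psi) - eps*psi
    # to the fixed point (a contraction for gamma + eps < 1)
    psi = 0.5
    for _ in range(n_iter):
        psi = psi + gamma * (1 - psi) - eps * psi
    return psi

gamma0, eps0 = 0.2, 0.1       # invented colonization / extinction rates

psi_star = gamma0 / (gamma0 + eps0)            # analytic equilibrium
sens_gamma = eps0 / (gamma0 + eps0) ** 2       # analytic d(psi*)/d(gamma)

h = 1e-6                                       # finite-difference check
fd = (equilibrium_occupancy(gamma0 + h, eps0)
      - equilibrium_occupancy(gamma0 - h, eps0)) / (2 * h)
```

The methods in the paper deliver such derivatives analytically for multistate chains and for derived quantities, avoiding the repeated equilibration the finite-difference check needs.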

  15. Meta-analysis for the comparison of two diagnostic tests to a common gold standard: A generalized linear mixed model approach.

    PubMed

    Hoyer, Annika; Kuss, Oliver

    2018-05-01

    Meta-analysis of diagnostic studies is still a rapidly developing area of biostatistical research. In particular, there is increasing interest in methods to compare different diagnostic tests against a common gold standard. In the case of two diagnostic tests, the parameters of interest in these meta-analyses are the differences in sensitivities and specificities (with their corresponding confidence intervals) between the two tests, while accounting for the various associations across single studies and between the two tests. We propose statistical models with a quadrivariate response (where sensitivity of test 1, specificity of test 1, sensitivity of test 2, and specificity of test 2 are the four responses) as a sensible approach to this task. Using a quadrivariate generalized linear mixed model naturally generalizes the common standard bivariate model of meta-analysis for a single diagnostic test. If information on several thresholds of the tests is available, the quadrivariate model can be further generalized to yield a comparison of full receiver operating characteristic (ROC) curves. We illustrate our model by an example in which two screening methods for the diagnosis of type 2 diabetes are compared.
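At the single-study level, the quantities entering such a meta-analysis are just paired sensitivities and specificities. The deliberately simplified sketch below computes a Wald interval for a sensitivity difference while ignoring the within-study pairing and the between-study heterogeneity that the quadrivariate GLMM exists to handle; the counts are hypothetical:

```python
import math

def prop_diff_ci(k1, n1, k2, n2, z=1.96):
    # Wald confidence interval for p1 - p2 under an independent-samples
    # approximation; a real meta-analysis would model the correlation
    # between tests and random study effects instead
    p1, p2 = k1 / n1, k2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    d = p1 - p2
    return d, (d - z * se, d + z * se)

# hypothetical counts: of 200 diseased subjects, test 1 flags 170 (sens 0.85)
# and test 2 flags 150 (sens 0.75)
diff, (lo, hi) = prop_diff_ci(170, 200, 150, 200)
```

Fitting the four responses jointly on the logit scale, with random effects, is what turns this naive per-study contrast into the quadrivariate model the paper proposes.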

  16. Quantitative mass spectrometry methods for pharmaceutical analysis

    PubMed Central

    Loos, Glenn; Van Schepdael, Ann

    2016-01-01

    Quantitative pharmaceutical analysis is nowadays frequently executed using mass spectrometry. Electrospray ionization coupled to a (hybrid) triple quadrupole mass spectrometer is generally used in combination with solid-phase extraction and liquid chromatography. Furthermore, isotopically labelled standards are often used to correct for ion suppression. The challenges in producing sensitive but reliable quantitative data depend on the instrumentation, sample preparation, and hyphenated techniques. In this contribution, different approaches to enhance ionization efficiency using modified source geometries and improved ion guidance are presented. Furthermore, possibilities to minimize, assess, and correct for matrix interferences caused by co-eluting substances are described. With a focus on pharmaceuticals in the environment and bioanalysis, different separation techniques, trends in liquid chromatography, and sample preparation methods to minimize matrix effects and increase sensitivity are discussed. Although highly sensitive methods are generally pursued to provide automated multi-residue analysis, (less sensitive) miniaturized set-ups have great potential owing to their suitability for in-field use. This article is part of the themed issue ‘Quantitative mass spectrometry’. PMID:27644982

  17. Sensitivity Analysis of the Static Aeroelastic Response of a Wing

    NASA Technical Reports Server (NTRS)

    Eldred, Lloyd B.

    1993-01-01

    A technique to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model is designed and implemented. The formulation is quite general and accepts any aerodynamic and structural analysis capability. A program to combine the discipline-level, or local, sensitivities into global sensitivity derivatives is developed. A variety of representations of the wing pressure field are developed and tested to determine the most accurate and efficient scheme for representing the field outside of the aerodynamic code. Chebyshev polynomials are used to globally fit the pressure field. This approach had some difficulties in representing local variations in the field, so a variety of local interpolation polynomial pressure representations are also implemented. These panel-based representations use a constant pressure value, a bilinearly interpolated value, or a biquadratically interpolated value. The interpolation polynomial approaches do an excellent job of reducing the numerical problems of the global approach for comparable computational effort. Regardless of the pressure representation used, sensitivity and response results with excellent accuracy have been produced for large integrated quantities such as wing tip deflection and trim angle of attack. The sensitivities of quantities such as individual generalized displacements have been found with fair accuracy. In general, accuracy is found to be proportional to the size of the derivatives relative to the quantity itself.

  18. GLSENS: A Generalized Extension of LSENS Including Global Reactions and Added Sensitivity Analysis for the Perfectly Stirred Reactor

    NASA Technical Reports Server (NTRS)

    Bittker, David A.

    1996-01-01

    A generalized version of the NASA Lewis general kinetics code, LSENS, is described. The new code allows the use of global reactions as well as molecular processes in a chemical mechanism. The code also incorporates the capability of performing sensitivity analysis calculations for a perfectly stirred reactor rapidly and conveniently at the same time that the main kinetics calculations are being done. The GLSENS code has been extensively tested and has been found to be accurate and efficient. Nine example problems are presented and complete user instructions are given for the new capabilities. This report is to be used in conjunction with the documentation for the original LSENS code.

  19. Shape design sensitivity analysis and optimization of three dimensional elastic solids using geometric modeling and automatic regridding. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Yao, Tse-Min; Choi, Kyung K.

    1987-01-01

    An automatic regridding method and a three-dimensional shape design parameterization technique were constructed and integrated into a unified theory of shape design sensitivity analysis. An algorithm was developed for general shape design sensitivity analysis of three-dimensional elastic solids. Numerical implementation of this shape design sensitivity analysis method was carried out using the finite element code ANSYS. The unified theory of shape design sensitivity analysis uses the material derivative of continuum mechanics with a design velocity field that represents shape change effects over the structural design. Automatic regridding methods were developed by generating a domain velocity field with the boundary displacement method. Shape design parameterization for three-dimensional surface design problems was illustrated using a Bezier surface with boundary perturbations that depend linearly on the perturbation of design parameters. A linearization method of optimization, LINRM, was used to obtain optimum shapes. Three examples from different engineering disciplines were investigated to demonstrate the accuracy and versatility of this shape design sensitivity analysis method.

  20. Eigenvalue Contribution Estimator for Sensitivity Calculations with TSUNAMI-3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T; Williams, Mark L

    2007-01-01

    Since the release of the Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) codes in SCALE [1], the use of sensitivity and uncertainty analysis techniques for criticality safety applications has greatly increased within the user community. In general, sensitivity and uncertainty analysis is transitioning from a technique used only by specialists to a practical tool in routine use. With the desire to use the tool more routinely comes the need to improve the solution methodology to reduce the input and computational burden on the user. This paper reviews the current solution methodology of the Monte Carlo eigenvalue sensitivity analysis sequence TSUNAMI-3D, describes an alternative approach, and presents results from both methodologies.

  1. A hydrogeologic framework for characterizing summer streamflow sensitivity to climate warming in the Pacific Northwest, USA

    NASA Astrophysics Data System (ADS)

    Safeeq, M.; Grant, G. E.; Lewis, S. L.; Kramer, M. G.; Staab, B.

    2014-09-01

    Summer streamflows in the Pacific Northwest are largely derived from melting snow and groundwater discharge. As the climate warms, diminishing snowpack and earlier snowmelt will cause reductions in summer streamflow. Most regional-scale assessments of climate change impacts on streamflow use downscaled temperature and precipitation projections from general circulation models (GCMs) coupled with large-scale hydrologic models. Here we develop and apply an analytical hydrogeologic framework for characterizing summer streamflow sensitivity to a change in the timing and magnitude of recharge in a spatially explicit fashion. In particular, we incorporate the role of deep groundwater, which large-scale hydrologic models generally fail to capture, into streamflow sensitivity assessments. We validate our analytical streamflow sensitivities against two empirical measures of sensitivity derived using historical observations of temperature, precipitation, and streamflow from 217 watersheds. In general, empirically and analytically derived streamflow sensitivity values correspond. Although the selected watersheds cover a range of hydrologic regimes (e.g., rain-dominated, mixture of rain and snow, and snow-dominated), sensitivity validation was primarily driven by the snow-dominated watersheds, which are subjected to a wider range of change in recharge timing and magnitude as a result of increased temperature. Overall, two patterns emerge from this analysis: first, areas with high streamflow sensitivity also have higher summer streamflows as compared to low-sensitivity areas. Second, the level of sensitivity and spatial extent of highly sensitive areas diminishes over time as the summer progresses. 
Results of this analysis point to a robust, practical, and scalable approach that can help assess risk at the landscape scale, complement the downscaling approach, be applied to any climate scenario of interest, and provide a framework to assist land and water managers in adapting to an uncertain and potentially challenging future.
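The role of drainage efficiency in the framework above can be caricatured with a linear-reservoir recession, Q(t) = R·k·e^(−kt): the sensitivity of streamflow to recharge, ∂Q/∂R = k·e^(−kt), decays through the dry season, and decays more slowly in deep-groundwater (small-k) systems. The recession constants and time below are invented, not values from the paper:

```python
import math

def summer_flow(recharge, k, t):
    # linear-reservoir recession (toy stand-in for the hydrogeologic
    # framework): Q(t) = R * k * exp(-k * t)
    return recharge * k * math.exp(-k * t)

def sensitivity_to_recharge(k, t):
    # dQ/dR = k * exp(-k * t): extra summer streamflow per unit of
    # additional recharge, t days into the dry season
    return k * math.exp(-k * t)

t = 60.0                                   # invented: days into summer
slow = sensitivity_to_recharge(0.01, t)    # slow-draining, groundwater-fed
fast = sensitivity_to_recharge(0.10, t)    # flashy, shallow system
```

Two of the paper's qualitative findings fall out of even this caricature: sensitivity diminishes as the summer progresses, and by mid-season the slow-draining system is the more responsive one.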

  2. Rates of allergic sensitization and irritation to oxybenzone-containing sunscreen products: a quantitative meta-analysis of 64 exaggerated use studies.

    PubMed

    Agin, Patricia P; Ruble, Karen; Hermansky, Steven J; McCarthy, Timothy J

    2008-08-01

    Oxybenzone is an active ingredient found in sunscreen products that absorbs a broad spectrum of ultraviolet (UV) light, with absorbance peaking in the UVB region and extending into the UVA region. Although the overall incidence of sensitization and irritation associated with oxybenzone in the general population remains unclear, a few studies have reported on the incidence in specific circumstances. However, the relevance of these studies to the general population is limited, because the sample populations reported in these papers generally have consisted of individuals who sought medical attention for pre-existing skin conditions. Therefore, the reported incidence of allergic reactions to oxybenzone in these studies may be overestimated as related to the general population. The objective of this meta-analysis was to determine the safety of oxybenzone in participants recruited from the general population. The data from 64 unpublished exaggerated use human repeat insult patch tests (HRIPT) and photoallergy (PA) studies sponsored by Schering-Plough HealthCare Products Inc. between 1992 and 2006 were aggregated and analyzed to evaluate the irritancy and sensitization potential of sunscreen products containing oxybenzone at concentrations between 1% and 6%. Forty-eight of 19 570 possible dermal responses were considered to be suggestive of irritation or sensitization; the mean rate of responses across all formulations was 0.26%. Sensitization rates did not correlate significantly with oxybenzone concentration. The available re-challenge data indicated that only eight of these responses were contact allergies from oxybenzone, and the mean rate of contact allergy to oxybenzone was 0.07%. The source of the skin responses was not confirmed for 15 subjects who were lost to follow-up. However, all subjects were given the opportunity to participate in follow-up testing. 
Our data indicate that sunscreen products formulated with 1-6% oxybenzone do not possess a significant sensitization or irritation potential for the general public. Furthermore, these data suggest that the incidence rate implied in the published literature overestimates the actual incidence of sensitization/irritation due to oxybenzone-containing sunscreen products in the general population.

  3. Effects of habitat map generalization in biodiversity assessment

    NASA Technical Reports Server (NTRS)

    Stoms, David M.

    1992-01-01

    Species richness is being mapped as part of an inventory of biological diversity in California (i.e., gap analysis). Species distributions are modeled with a GIS on the basis of maps of each species' preferred habitats. Species richness is then tallied in equal-area sampling units. A GIS sensitivity analysis examined the effects of the level of generalization of the habitat map on the predicted distribution of species richness in the southern Sierra Nevada. As the habitat map was generalized, the number of habitat types mapped within grid cells tended to decrease with a corresponding decline in numbers of species predicted. Further, the ranking of grid cells in order of predicted numbers of species changed dramatically between levels of generalization. Areas predicted to be of greatest conservation value on the basis of species richness may therefore be sensitive to GIS data resolution.
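The mechanism the study describes, generalization removing habitat types from grid cells and thereby lowering predicted richness, can be mimicked with a few invented cells, habitat types, and species-habitat preferences:

```python
# invented cells, habitat types, and species-habitat preferences
cell_habitats = {
    "cell_1": {"oak_woodland", "chaparral", "riparian"},
    "cell_2": {"chaparral", "grassland"},
}
species_prefs = {
    "sp_A": {"oak_woodland"},
    "sp_B": {"riparian"},
    "sp_C": {"chaparral", "grassland"},
}

def richness(cells):
    # a species is predicted present wherever one of its preferred
    # habitats is mapped in the cell
    return {c: sum(1 for prefs in species_prefs.values() if prefs & habs)
            for c, habs in cells.items()}

def generalize(cells, dominant):
    # aggressive map generalization: keep only the dominant (most
    # extensive) habitat type in each cell
    return {c: {dominant[c]} for c in cells}

fine = richness(cell_habitats)
dominant = {"cell_1": "chaparral", "cell_2": "grassland"}
coarse = richness(generalize(cell_habitats, dominant))
```

Minority habitats (and the species tied to them) vanish from the generalized map, so predicted richness drops, and cells can change rank, which is the conservation-planning hazard the study identifies.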

  4. New Uses for Sensitivity Analysis: How Different Movement Tasks Effect Limb Model Parameter Sensitivity

    NASA Technical Reports Server (NTRS)

    Winters, J. M.; Stark, L.

    1984-01-01

    Original results for a newly developed eighth-order nonlinear limb antagonistic muscle model of elbow flexion and extension are presented. A wide variety of sensitivity analysis techniques is used, and a systematic protocol is established that shows how the different methods can be used efficiently to complement one another for maximum insight into model sensitivity. It is explicitly shown how the sensitivity of output behaviors to model parameters is a function of the controller input sequence, i.e., of the movement task. When the task is changed (for instance, from an input sequence that results in the usual fast movement task to a slower movement that may also involve external loading, etc.), the set of parameters with high sensitivity will in general also change. Such task-specific use of sensitivity analysis techniques identifies the set of parameters most important for a given task and even suggests task-specific model reduction possibilities.

  5. Variational Methods in Design Optimization and Sensitivity Analysis for Two-Dimensional Euler Equations

    NASA Technical Reports Server (NTRS)

    Ibrahim, A. H.; Tiwari, S. N.; Smith, R. E.

    1997-01-01

    Variational methods (VM) of sensitivity analysis are employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods offer a substantial gain in computational efficiency, i.e., computer time and memory, when compared with finite-difference sensitivity analysis.

  6. Diagnostic Accuracy of Memory Measures in Alzheimer’s Dementia and Mild Cognitive Impairment: a Systematic Review and Meta-Analysis

    PubMed Central

    Weissberger, Gali H.; Strong, Jessica V.; Stefanidis, Kayla B.; Summers, Mathew J.; Bondi, Mark W.; Stricker, Nikki H.

    2018-01-01

    With an increasing focus on biomarkers in dementia research, illustrating the role of neuropsychological assessment in detecting mild cognitive impairment (MCI) and Alzheimer’s dementia (AD) is important. This systematic review and meta-analysis, conducted in accordance with PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) standards, summarizes the sensitivity and specificity of memory measures in individuals with MCI and AD. Both meta-analytic and qualitative examination of AD versus healthy control (HC) studies (n = 47) revealed generally high sensitivity and specificity (≥ 80% for AD comparisons) for measures of immediate (sensitivity = 87%, specificity = 88%) and delayed memory (sensitivity = 89%, specificity = 89%), especially those involving word-list recall. Examination of MCI versus HC studies (n = 38) revealed generally lower diagnostic accuracy for both immediate (sensitivity = 72%, specificity = 81%) and delayed memory (sensitivity = 75%, specificity = 81%). Measures that differentiated AD from other conditions (n = 10 studies) yielded mixed results, with generally high sensitivity in the context of low or variable specificity. Results confirm that memory measures have high diagnostic accuracy for identification of AD, are promising but require further refinement for identification of MCI, and provide support for ongoing investigation of neuropsychological assessment as a cognitive biomarker of preclinical AD. Emphasizing diagnostic test accuracy statistics over null hypothesis testing in future studies will promote the ongoing use of neuropsychological tests as Alzheimer’s disease research and clinical criteria increasingly rely upon cerebrospinal fluid (CSF) and neuroimaging biomarkers. PMID:28940127
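For readers unfamiliar with the accuracy statistics pooled here, sensitivity and specificity follow directly from a 2x2 classification table; a minimal sketch with hypothetical counts chosen to echo the pooled delayed-memory figures:

```python
# Sensitivity and specificity from a hypothetical 2x2 diagnostic table:
# rows = test classification, columns = true diagnosis (AD vs. healthy control).
tp, fn = 89, 11    # AD cases flagged / missed by a delayed-memory cutoff
tn, fp = 89, 11    # healthy controls correctly passed / wrongly flagged

sensitivity = tp / (tp + fn)   # P(test positive | disease present)
specificity = tn / (tn + fp)   # P(test negative | disease absent)
print(sensitivity, specificity)   # 0.89 and 0.89 with these illustrative counts
```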

  7. Installation Restoration General Environmental Technology Development. Task 6. Materials Handling of Explosive Contaminated Soil and Sediment.

    DTIC Science & Technology

    1985-06-01

    ...of chemical analysis and sensitivity testing on material samples. At this time, these samples must be packaged and ...preparation at a rate of three samples per hour. One analyst doing both sample preparation and the HPLC analysis can run 16 samples in an 8-hour day. ...study, sensitivity testing was reviewed to enable recommendations for complete analysis of contaminated soils. Materials handling techniques, ...

  8. Discrete Adjoint Sensitivity Analysis of Hybrid Dynamical Systems With Switching [Discrete Adjoint Sensitivity Analysis of Hybrid Dynamical Systems

    DOE PAGES

    Zhang, Hong; Abhyankar, Shrirang; Constantinescu, Emil; ...

    2017-01-24

    Sensitivity analysis is an important tool for describing power system dynamic behavior in response to parameter variations. It is a central component in preventive and corrective control applications. The existing approaches for sensitivity calculations, namely, finite-difference and forward sensitivity analysis, require a computational effort that increases linearly with the number of sensitivity parameters. In this paper, we investigate, implement, and test a discrete adjoint sensitivity approach whose computational effort is effectively independent of the number of sensitivity parameters. The proposed approach is highly efficient for calculating sensitivities of larger systems and is consistent, within machine precision, with the function whose sensitivity we are seeking. This is an essential feature for use in optimization applications. Moreover, our approach includes a consistent treatment of systems with switching, such as dc exciters, by deriving and implementing the adjoint jump conditions that arise from state-dependent and time-dependent switchings. The accuracy and the computational efficiency of the proposed approach are demonstrated in comparison with the forward sensitivity analysis approach. In conclusion, this paper focuses primarily on the power system dynamics, but the approach is general and can be applied to hybrid dynamical systems in a broader range of fields.
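The paper's consistency claim (an adjoint gradient that matches the discretized forward model to machine precision, at a cost independent of the number of parameters) can be illustrated on a toy smooth system; this sketch applies a discrete adjoint to explicit Euler on dx/dt = -p*x rather than to the power-system model of the paper:

```python
# Discrete adjoint vs. finite differences for dx/dt = -p*x, J = x(T),
# discretized with explicit Euler: x_{k+1} = x_k * (1 - p*h).
def forward(p, x0=1.0, h=0.01, steps=100):
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] * (1 - p * h))
    return xs

def adjoint_gradient(p, x0=1.0, h=0.01, steps=100):
    xs = forward(p, x0, h, steps)
    lam, grad = 1.0, 0.0            # lam = dJ/dx_N = 1 at the final time
    for k in range(steps - 1, -1, -1):
        grad += lam * (-h * xs[k])  # d x_{k+1} / dp      = -h * x_k
        lam *= (1 - p * h)          # d x_{k+1} / d x_k   = 1 - p*h
    return grad

p, eps = 0.5, 1e-6
fd = (forward(p + eps)[-1] - forward(p - eps)[-1]) / (2 * eps)
print(adjoint_gradient(p), fd)      # the two gradients agree to near machine precision
```

One backward sweep yields the gradient regardless of how many parameters the model had, which is the efficiency argument made in the abstract.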

  9. Discrete Adjoint Sensitivity Analysis of Hybrid Dynamical Systems With Switching [Discrete Adjoint Sensitivity Analysis of Hybrid Dynamical Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hong; Abhyankar, Shrirang; Constantinescu, Emil

    Sensitivity analysis is an important tool for describing power system dynamic behavior in response to parameter variations. It is a central component in preventive and corrective control applications. The existing approaches for sensitivity calculations, namely, finite-difference and forward sensitivity analysis, require a computational effort that increases linearly with the number of sensitivity parameters. In this paper, we investigate, implement, and test a discrete adjoint sensitivity approach whose computational effort is effectively independent of the number of sensitivity parameters. The proposed approach is highly efficient for calculating sensitivities of larger systems and is consistent, within machine precision, with the function whose sensitivity we are seeking. This is an essential feature for use in optimization applications. Moreover, our approach includes a consistent treatment of systems with switching, such as dc exciters, by deriving and implementing the adjoint jump conditions that arise from state-dependent and time-dependent switchings. The accuracy and the computational efficiency of the proposed approach are demonstrated in comparison with the forward sensitivity analysis approach. In conclusion, this paper focuses primarily on the power system dynamics, but the approach is general and can be applied to hybrid dynamical systems in a broader range of fields.

  10. Development of a generalized perturbation theory method for sensitivity analysis using continuous-energy Monte Carlo methods

    DOE PAGES

    Perfetti, Christopher M.; Rearden, Bradley T.

    2016-03-01

    The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system that have been developed over the last decade have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization and reactor safety, and could help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.

  11. Fast and sensitive optical toxicity bioassay based on dual wavelength analysis of bacterial ferricyanide reduction kinetics.

    PubMed

    Pujol-Vila, F; Vigués, N; Díaz-González, M; Muñoz-Berbel, X; Mas, J

    2015-05-15

    Global urban and industrial growth, with the associated environmental contamination, is promoting the development of rapid and inexpensive general toxicity methods. Current microbial methodologies for general toxicity determination rely on either bioluminescent bacteria and specific medium solution (i.e. Microtox(®)) or low sensitivity and diffusion limited protocols (i.e. amperometric microbial respirometry). In this work, a fast and sensitive optical toxicity bioassay based on dual wavelength analysis of bacterial ferricyanide reduction kinetics is presented, using Escherichia coli as a bacterial model. Ferricyanide reduction kinetic analysis (variation of ferricyanide absorption with time), much more sensitive than single absorbance measurements, allowed for direct and fast toxicity determination without pre-incubation steps (assay time = 10 min) while minimizing biomass interference. Dual wavelength analysis at 405 nm (ferricyanide and biomass) and 550 nm (biomass) allowed for ferricyanide monitoring without interference from biomass scattering. On the other hand, refractive index (RI) matching with saccharose reduced bacterial light scattering by around 50%, expanding the analytical linear range in the determination of absorbent molecules. With this method, different toxicants such as metals and organic compounds were analyzed with good sensitivities. Half maximal effective concentrations (EC50) obtained after the 10 min bioassay, 2.9, 1.0, 0.7 and 18.3 mg L(-1) for copper, zinc, acetic acid and 2-phenylethanol respectively, were in agreement with previously reported values for longer bioassays (around 60 min). This method represents a promising alternative for fast and sensitive water toxicity monitoring, opening the possibility of quick in situ analysis. Copyright © 2014 Elsevier B.V. All rights reserved.
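A minimal sketch of the kinetic-inhibition readout (illustrative numbers, not the paper's data): percent inhibition of the ferricyanide reduction rate is computed against a control, and EC50 is interpolated where inhibition crosses 50%:

```python
# Hypothetical reduction-rate data: ferricyanide reduction rate (slope of
# absorbance vs. time) at several toxicant concentrations (mg/L); the
# control rate (no toxicant) normalizes percent inhibition.
control_rate = 1.0
rates = {1.0: 0.80, 3.0: 0.55, 10.0: 0.20}   # concentration -> rate

inhibition = {c: 1 - r / control_rate for c, r in rates.items()}

def ec50(inhib):
    """Linearly interpolate the concentration giving 50% inhibition."""
    pts = sorted(inhib.items())
    for (c0, i0), (c1, i1) in zip(pts, pts[1:]):
        if i0 <= 0.5 <= i1:
            return c0 + (0.5 - i0) / (i1 - i0) * (c1 - c0)
    raise ValueError("50% inhibition not bracketed by the data")

print(ec50(inhibition))   # 4.0 mg/L with these illustrative numbers
```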

  12. Sensitivity analysis, approximate analysis, and design optimization for internal and external viscous flows

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.; Korivi, Vamshi M.

    1991-01-01

    A gradient-based design optimization strategy for practical aerodynamic design applications is presented, which uses the 2D thin-layer Navier-Stokes equations. The strategy is based on the classic idea of constructing different modules for performing the major tasks such as function evaluation, function approximation and sensitivity analysis, mesh regeneration, and grid sensitivity analysis, all driven and controlled by a general-purpose design optimization program. The accuracy of aerodynamic shape sensitivity derivatives is validated on two viscous test problems: internal flow through a double-throat nozzle and external flow over a NACA 4-digit airfoil. A significant improvement in aerodynamic performance has been achieved in both cases. Particular attention is given to a consistent treatment of the boundary conditions in the calculation of the aerodynamic sensitivity derivatives for the classic problems of external flow over an isolated lifting airfoil on 'C' or 'O' meshes.

  13. SCALE 6.2 Continuous-Energy TSUNAMI-3D Capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2015-01-01

    The TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation) capabilities within the SCALE code system make use of sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different systems, quantifying computational biases, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved ease of use and fidelity and the desire to extend TSUNAMI analysis to advanced applications have motivated the development of a SCALE 6.2 module for calculating sensitivity coefficients using three-dimensional (3D) continuous-energy (CE) Monte Carlo methods: CE TSUNAMI-3D. This paper provides an overview of the theory, implementation, and capabilities of the CE TSUNAMI-3D sensitivity analysis methods. CE TSUNAMI contains two methods for calculating sensitivity coefficients in eigenvalue sensitivity applications: (1) the Iterated Fission Probability (IFP) method and (2) the Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Track length importance CHaracterization (CLUTCH) method. This work also presents the GEneralized Adjoint Response in Monte Carlo method (GEAR-MC), a first-of-its-kind approach for calculating adjoint-weighted, generalized response sensitivity coefficients, such as flux responses or reaction rate ratios, in CE Monte Carlo applications. The accuracy and efficiency of the CE TSUNAMI-3D eigenvalue sensitivity methods are assessed from a user perspective in a companion publication, and the accuracy and features of the CE TSUNAMI-3D GEAR-MC methods are detailed in this paper.

  14. Implementation of structural response sensitivity calculations in a large-scale finite-element analysis system

    NASA Technical Reports Server (NTRS)

    Giles, G. L.; Rogers, J. L., Jr.

    1982-01-01

    The methodology used to implement structural sensitivity calculations into a major, general-purpose finite-element analysis system (SPAR) is described. This implementation includes a generalized method for specifying element cross-sectional dimensions as design variables that can be used in analytically calculating derivatives of output quantities from static stress, vibration, and buckling analyses for both membrane and bending elements. Limited sample results for static displacements and stresses are presented to indicate the advantages of analytically calculating response derivatives compared to finite difference methods. Continuing developments to implement these procedures into an enhanced version of SPAR are also discussed.

  15. LSENS: A General Chemical Kinetics and Sensitivity Analysis Code for homogeneous gas-phase reactions. Part 3: Illustrative test problems

    NASA Technical Reports Server (NTRS)

    Bittker, David A.; Radhakrishnan, Krishnan

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 3 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 3 explains the kinetics and kinetics-plus-sensitivity analysis problems supplied with LSENS and presents sample results. These problems illustrate the various capabilities of, and reaction models that can be solved by, the code and may provide a convenient starting point for the user to construct the problem data file required to execute LSENS. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.
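The sensitivity coefficients LSENS computes for static problems can be mimicked on a one-reaction toy system by integrating the sensitivity equation alongside the rate equation (a sketch of the idea, not LSENS itself):

```python
import math

# Toy kinetics-plus-sensitivity integration: y' = -k*y, with the sensitivity
# s = dy/dk satisfying s' = -y - k*s (obtained by differentiating the rate
# equation with respect to the rate coefficient k). Analytically,
# s(t) = -t * y0 * exp(-k*t).
def integrate(k, y0=2.0, t_end=1.0, n=20_000):
    h, y, s = t_end / n, y0, 0.0
    for _ in range(n):
        # simultaneous forward-Euler update of state and sensitivity
        y, s = y + h * (-k * y), s + h * (-y - k * s)
    return y, s

k, y0, t = 0.7, 2.0, 1.0
y_num, s_num = integrate(k)
s_exact = -t * y0 * math.exp(-k * t)
print(s_num, s_exact)   # numerical vs. analytic sensitivity coefficient
```

LSENS uses an implicit stiff integrator rather than forward Euler; the augmented-system idea is the same.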

  16. Economic evaluation of an implementation strategy for the management of low back pain in general practice.

    PubMed

    Jensen, Cathrine Elgaard; Riis, Allan; Petersen, Karin Dam; Jensen, Martin Bach; Pedersen, Kjeld Møller

    2017-05-01

    In connection with the publication of a clinical practice guideline on the management of low back pain (LBP) in general practice in Denmark, a cluster randomised controlled trial was conducted. In this trial, a multifaceted guideline implementation strategy to improve general practitioners' treatment of patients with LBP was compared with a usual implementation strategy. The aim was to determine whether the multifaceted strategy was cost effective, as compared with the usual implementation strategy. The economic evaluation was conducted as a cost-utility analysis where costs collected from a societal perspective and quality-adjusted life years were used as outcome measures. The analysis was conducted as a within-trial analysis with a 12-month time horizon consistent with the follow-up period of the clinical trial. To adjust for a priori selected covariates, generalised linear models with a gamma family were used to estimate incremental costs and quality-adjusted life years. Furthermore, both deterministic and probabilistic sensitivity analyses were conducted. Results showed that costs associated with primary health care were higher, whereas secondary health care costs were lower for the intervention group when compared with the control group. When adjusting for covariates, the intervention was less costly, and there was no significant difference in effect between the 2 groups. Sensitivity analyses showed that results were sensitive to uncertainty. In conclusion, the multifaceted implementation strategy was cost saving when compared with the usual strategy for implementing LBP clinical practice guidelines in general practice. Furthermore, there was no significant difference in effect, and the estimate was sensitive to uncertainty.
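The decision logic of such a within-trial cost-utility analysis can be sketched with hypothetical adjusted means (the numbers below are illustrative, not the trial's estimates):

```python
# Hypothetical adjusted means per patient: the decision logic of a
# within-trial cost-utility analysis.
cost = {"intervention": 2_450.0, "control": 2_700.0}   # societal cost per patient
qaly = {"intervention": 0.71, "control": 0.70}         # quality-adjusted life years

d_cost = cost["intervention"] - cost["control"]        # negative => cheaper
d_qaly = qaly["intervention"] - qaly["control"]        # positive => more effective

if d_cost < 0 and d_qaly >= 0:
    verdict = "dominant (cost saving with no loss of effect)"
elif d_qaly != 0:
    verdict = f"ICER = {d_cost / d_qaly:.0f} per QALY"
else:
    verdict = "same effect, higher cost"
print(d_cost, d_qaly, verdict)
```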

  17. Simulations of the HDO and H2O-18 atmospheric cycles using the NASA GISS general circulation model - Sensitivity experiments for present-day conditions

    NASA Technical Reports Server (NTRS)

    Jouzel, Jean; Koster, R. D.; Suozzo, R. J.; Russell, G. L.; White, J. W. C.

    1991-01-01

    Incorporating the full geochemical cycles of stable water isotopes (HDO and H2O-18) into an atmospheric general circulation model (GCM) allows an improved understanding of global delta-D and delta-O-18 distributions and might even allow an analysis of the GCM's hydrological cycle. A detailed sensitivity analysis using the NASA/Goddard Institute for Space Studies (GISS) model II GCM is presented that examines the nature of isotope modeling. The tests indicate that delta-D and delta-O-18 values in nonpolar regions are not strongly sensitive to details in the model precipitation parameterizations. This result, while implying that isotope modeling has limited potential use in the calibration of GCM convection schemes, also suggests that certain necessarily arbitrary aspects of these schemes are adequate for many isotope studies. Deuterium excess, a second-order variable, does show some sensitivity to precipitation parameterization and thus may be more useful for GCM calibration.

  18. Diagnostic Accuracy of Memory Measures in Alzheimer's Dementia and Mild Cognitive Impairment: a Systematic Review and Meta-Analysis.

    PubMed

    Weissberger, Gali H; Strong, Jessica V; Stefanidis, Kayla B; Summers, Mathew J; Bondi, Mark W; Stricker, Nikki H

    2017-12-01

    With an increasing focus on biomarkers in dementia research, illustrating the role of neuropsychological assessment in detecting mild cognitive impairment (MCI) and Alzheimer's dementia (AD) is important. This systematic review and meta-analysis, conducted in accordance with PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) standards, summarizes the sensitivity and specificity of memory measures in individuals with MCI and AD. Both meta-analytic and qualitative examination of AD versus healthy control (HC) studies (n = 47) revealed generally high sensitivity and specificity (≥ 80% for AD comparisons) for measures of immediate (sensitivity = 87%, specificity = 88%) and delayed memory (sensitivity = 89%, specificity = 89%), especially those involving word-list recall. Examination of MCI versus HC studies (n = 38) revealed generally lower diagnostic accuracy for both immediate (sensitivity = 72%, specificity = 81%) and delayed memory (sensitivity = 75%, specificity = 81%). Measures that differentiated AD from other conditions (n = 10 studies) yielded mixed results, with generally high sensitivity in the context of low or variable specificity. Results confirm that memory measures have high diagnostic accuracy for identification of AD, are promising but require further refinement for identification of MCI, and provide support for ongoing investigation of neuropsychological assessment as a cognitive biomarker of preclinical AD. Emphasizing diagnostic test accuracy statistics over null hypothesis testing in future studies will promote the ongoing use of neuropsychological tests as Alzheimer's disease research and clinical criteria increasingly rely upon cerebrospinal fluid (CSF) and neuroimaging biomarkers.

  19. Sensitivity of wildlife habitat models to uncertainties in GIS data

    NASA Technical Reports Server (NTRS)

    Stoms, David M.; Davis, Frank W.; Cogan, Christopher B.

    1992-01-01

    Decision makers need to know the reliability of output products from GIS analysis. For many GIS applications, it is not possible to compare these products to an independent measure of 'truth'. Sensitivity analysis offers an alternative means of estimating reliability. In this paper, we present a GIS-based statistical procedure for estimating the sensitivity of wildlife habitat models to uncertainties in input data and model assumptions. The approach is demonstrated in an analysis of habitat associations derived from a GIS database for the endangered California condor. Alternative data sets were generated to compare results over a reasonable range of assumptions about several sources of uncertainty. Sensitivity analysis indicated that condor habitat associations are relatively robust, and the results have increased our confidence in our initial findings. Uncertainties and methods described in the paper have general relevance for many GIS applications.

  20. Sensitivity Analysis of Hydraulic Head to Locations of Model Boundaries

    DOE PAGES

    Lu, Zhiming

    2018-01-30

    Sensitivity analysis is an important component of many model activities in hydrology. Numerous studies have been conducted in calculating various sensitivities. Most of these sensitivity analyses focus on the sensitivity of state variables (e.g. hydraulic head) to parameters representing medium properties such as hydraulic conductivity or prescribed values such as constant head or flux at boundaries, while few studies address the sensitivity of the state variables to shape parameters or design parameters that control the model domain. Instead, these shape parameters are typically assumed to be known in the model. In this study, based on the flow equation, we derive the equation (and its associated initial and boundary conditions) for the sensitivity of hydraulic head to shape parameters using the continuous sensitivity equation (CSE) approach. These sensitivity equations can be solved numerically in general or analytically in some simplified cases. Finally, the approach has been demonstrated through two examples, and the results compare favorably to those from analytical solutions or numerical finite difference methods with perturbed model domains, while numerical shortcomings of the finite difference method are avoided.
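For intuition, the simplest case has a closed form: 1-D steady flow between prescribed heads, where the sensitivity of head to the boundary location L can be derived analytically and checked against a perturbed-domain finite difference (a sketch, not the paper's CSE solver):

```python
# 1-D steady-state head between prescribed heads h0 at x=0 and hL at x=L:
#   h(x) = h0 + (hL - h0) * x / L
# Sensitivity of head to the boundary location L (a "shape parameter"):
#   dh/dL = -(hL - h0) * x / L**2
h0, hL, L, x = 10.0, 4.0, 100.0, 25.0

def head(L_):
    """Head at fixed x for a domain of length L_."""
    return h0 + (hL - h0) * x / L_

analytic = -(hL - h0) * x / L**2
eps = 1e-4
finite_diff = (head(L + eps) - head(L - eps)) / (2 * eps)   # perturbed domains
print(analytic, finite_diff)   # the two estimates agree closely
```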

  1. Sensitivity Analysis of Hydraulic Head to Locations of Model Boundaries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Zhiming

    Sensitivity analysis is an important component of many model activities in hydrology. Numerous studies have been conducted in calculating various sensitivities. Most of these sensitivity analyses focus on the sensitivity of state variables (e.g. hydraulic head) to parameters representing medium properties such as hydraulic conductivity or prescribed values such as constant head or flux at boundaries, while few studies address the sensitivity of the state variables to shape parameters or design parameters that control the model domain. Instead, these shape parameters are typically assumed to be known in the model. In this study, based on the flow equation, we derive the equation (and its associated initial and boundary conditions) for the sensitivity of hydraulic head to shape parameters using the continuous sensitivity equation (CSE) approach. These sensitivity equations can be solved numerically in general or analytically in some simplified cases. Finally, the approach has been demonstrated through two examples, and the results compare favorably to those from analytical solutions or numerical finite difference methods with perturbed model domains, while numerical shortcomings of the finite difference method are avoided.

  2. Probabilistic sensitivity analysis for NICE technology assessment: not an optional extra.

    PubMed

    Claxton, Karl; Sculpher, Mark; McCabe, Chris; Briggs, Andrew; Akehurst, Ron; Buxton, Martin; Brazier, John; O'Hagan, Tony

    2005-04-01

    Recently the National Institute for Clinical Excellence (NICE) updated its methods guidance for technology assessment. One aspect of the new guidance is to require the use of probabilistic sensitivity analysis with all cost-effectiveness models submitted to the Institute. The purpose of this paper is to place the NICE guidance on dealing with uncertainty into a broader context of the requirements for decision making; to explain the general approach that was taken in its development; and to address each of the issues which have been raised in the debate about the role of probabilistic sensitivity analysis in general. The most appropriate starting point for developing guidance is to establish what is required for decision making. On the basis of these requirements, the methods and framework of analysis which can best meet these needs can then be identified. It will be argued that the guidance on dealing with uncertainty and, in particular, the requirement for probabilistic sensitivity analysis, is justified by the requirements of the type of decisions that NICE is asked to make. Given this foundation, the main issues and criticisms raised during and after the consultation process are reviewed. Finally, some of the methodological challenges posed by the need fully to characterise decision uncertainty and to inform the research agenda will be identified and discussed. Copyright (c) 2005 John Wiley & Sons, Ltd.
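A probabilistic sensitivity analysis of the kind NICE requires propagates parameter uncertainty by Monte Carlo; this minimal sketch (hypothetical distributions, not a NICE submission) estimates one point on a cost-effectiveness acceptability curve:

```python
import random

# Minimal probabilistic sensitivity analysis sketch: sample incremental
# costs and effects from assumed distributions and report the probability
# that the intervention is cost-effective at a given threshold.
random.seed(0)
N, threshold = 20_000, 30_000.0          # samples; willingness to pay per QALY

def one_draw():
    d_effect = random.gauss(0.10, 0.05)  # incremental QALYs (assumed normal)
    d_cost = random.gauss(2_000.0, 500.0)  # incremental cost (assumed normal)
    return threshold * d_effect - d_cost   # net monetary benefit

p_cost_effective = sum(one_draw() > 0 for _ in range(N)) / N
print(p_cost_effective)   # one point on the cost-effectiveness acceptability curve
```

Repeating the calculation over a range of thresholds traces out the full acceptability curve that characterizes decision uncertainty.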

  3. Testing alternative ground water models using cross-validation and other methods

    USGS Publications Warehouse

    Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.

    2007-01-01

    Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
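The information criteria named above are cheap to compute from a least-squares fit; a sketch using one common form (hypothetical residuals and parameter counts; smaller values indicate the preferred model):

```python
import math

# Information-criterion model discrimination for least-squares models.
def aicc(rss, n, k):
    """Corrected Akaike information criterion (least-squares form)."""
    aic = n * math.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction

def bic(rss, n, k):
    """Bayesian information criterion (least-squares form)."""
    return n * math.log(rss / n) + k * math.log(n)

n = 30                                   # number of observations (hypothetical)
models = {"homogeneous K": (12.0, 3),    # (residual sum of squares, #parameters)
          "zoned K": (10.0, 6)}

for name, (rss, k) in models.items():
    print(name, round(aicc(rss, n, k), 2), round(bic(rss, n, k), 2))
```

Here the extra parameters of the zoned model do not buy enough fit, so both criteria prefer the parsimonious model, which is the trade-off these statistics formalize.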

  4. A general method for handling missing binary outcome data in randomized controlled trials

    PubMed Central

    Jackson, Dan; White, Ian R; Mason, Dan; Sutton, Stephen

    2014-01-01

    Aims: The analysis of randomized controlled trials with incomplete binary outcome data is challenging. We develop a general method for exploring the impact of missing data in such trials, with a focus on abstinence outcomes. Design: We propose a sensitivity analysis where standard analyses, which could include ‘missing = smoking’ and ‘last observation carried forward’, are embedded in a wider class of models. Setting: We apply our general method to data from two smoking cessation trials. Participants: A total of 489 and 1758 participants from two smoking cessation trials. Measurements: The abstinence outcomes were obtained using telephone interviews. Findings: The estimated intervention effects from both trials depend on the sensitivity parameters used. The findings differ considerably in magnitude and statistical significance under quite extreme assumptions about the missing data, but are reasonably consistent under more moderate assumptions. Conclusions: A new method for undertaking sensitivity analyses when handling missing data in trials with binary outcomes allows a wide range of assumptions about the missing data to be assessed. In two smoking cessation trials the results were insensitive to all but extreme assumptions. PMID:25171441
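The idea of embedding 'missing = smoking' in a wider class of assumptions can be sketched as follows (hypothetical counts; the trial's actual sensitivity models are richer than these three point assumptions):

```python
# Sensitivity of a two-arm abstinence comparison to missing-data assumptions:
# vary what is imputed for unobserved participants and recompute the effect.
arms = {  # arm -> (randomized, observed, observed quitters) -- hypothetical
    "intervention": (100, 80, 40),
    "control":      (100, 90, 30),
}

def quit_rate(arm, rule):
    n, obs, quit = arms[arm]
    missing = n - obs
    if rule == "missing=smoking":      # the conventional conservative rule
        return quit / n
    if rule == "observed-only":        # complete-case analysis
        return quit / obs
    if rule == "missing=abstinent":    # an extreme optimistic bound
        return (quit + missing) / n
    raise ValueError(rule)

for rule in ("missing=smoking", "observed-only", "missing=abstinent"):
    rd = quit_rate("intervention", rule) - quit_rate("control", rule)
    print(rule, round(rd, 3))   # risk difference shifts with the assumption
```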

  5. LSENS, a general chemical kinetics and sensitivity analysis code for gas-phase reactions: User's guide

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1993-01-01

    A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS, are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include static system; steady, one-dimensional, inviscid flow; shock-initiated reaction; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method, which works efficiently for the extremes of very fast and very slow reaction, is used for solving the 'stiff' differential equation systems that arise in chemical kinetics. For static reactions, sensitivity coefficients of all dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters can be computed. This paper presents descriptions of the code and its usage, and includes several illustrative example problems.
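The stiff-kinetics-plus-sensitivities combination can be illustrated on the scalar model problem dy/dt = -k*y together with its forward sensitivity s = dy/dk. A minimal backward-Euler sketch (not LSENS's actual algorithm, which uses a variable-order implicit method on full reaction mechanisms):

```python
def integrate_with_sensitivity(k, y0, h, steps):
    """Backward (implicit) Euler for dy/dt = -k*y together with the forward
    sensitivity s = dy/dk, which obeys ds/dt = -y - k*s. Implicit updates
    remain stable however large k is, which is the point of using a stiff
    solver for chemical kinetics."""
    y, s = y0, 0.0
    for _ in range(steps):
        y = y / (1.0 + k * h)                # from y_{n+1} (1 + k h) = y_n
        s = (s - h * y) / (1.0 + k * h)      # from s_{n+1} (1 + k h) = s_n - h y_{n+1}
    return y, s

# moderate k: compare with the exact solution y = exp(-k t), dy/dk = -t exp(-k t)
y, s = integrate_with_sensitivity(2.0, 1.0, 0.01, 100)   # integrate to t = 1
```

The same update with k = 1000 stays bounded, whereas explicit Euler at this step size would blow up.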

  6. Intelligence and Interpersonal Sensitivity: A Meta-Analysis

    ERIC Educational Resources Information Center

    Murphy, Nora A.; Hall, Judith A.

    2011-01-01

    A meta-analytic review investigated the association between general intelligence and interpersonal sensitivity. The review involved 38 independent samples with 2988 total participants. There was a highly significant small-to-medium effect for intelligence measures to be correlated with decoding accuracy (r = 0.19, p < 0.001). Significant…

  7. Global sensitivity analysis in stochastic simulators of uncertain reaction networks.

    PubMed

    Navarro Jimenez, M; Le Maître, O P; Knio, O M

    2016-12-28

    Stochastic models of chemical systems are often subject to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability of the first statistical moments of model predictions with the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of system.
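Sobol-type variance decomposition is typically estimated by "pick-freeze" sampling. A minimal sketch on a toy linear model (the paper's method additionally decomposes over stochastic reaction channels, which this sketch omits):

```python
import random

def sobol_first_order(model, dim, frozen, n, seed=0):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol index of
    input `frozen`, with all inputs i.i.d. uniform on [0, 1):
    S_i = (E[Y Y_i'] - E[Y]^2) / Var(Y), where Y_i' reuses only input i."""
    rng = random.Random(seed)
    prod_sum = y_sum = y2_sum = 0.0
    for _ in range(n):
        x = [rng.random() for _ in range(dim)]
        xp = [rng.random() for _ in range(dim)]
        xp[frozen] = x[frozen]              # "freeze" the studied input
        y, yp = model(x), model(xp)
        prod_sum += y * yp
        y_sum += y
        y2_sum += y * y
    mean = y_sum / n
    var = y2_sum / n - mean * mean
    return (prod_sum / n - mean * mean) / var

# toy model with a known answer: S_0 = 16/17 ~ 0.94
s0 = sobol_first_order(lambda x: 4.0 * x[0] + x[1], 2, 0, 20000)
```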

  8. Global sensitivity analysis in stochastic simulators of uncertain reaction networks

    DOE PAGES

    Navarro Jimenez, M.; Le Maître, O. P.; Knio, O. M.

    2016-12-23

    Stochastic models of chemical systems are often subject to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability of the first statistical moments of model predictions with the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. Here, a sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of system.

  9. Global sensitivity analysis in stochastic simulators of uncertain reaction networks

    NASA Astrophysics Data System (ADS)

    Navarro Jimenez, M.; Le Maître, O. P.; Knio, O. M.

    2016-12-01

    Stochastic models of chemical systems are often subject to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability of the first statistical moments of model predictions with the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of system.

  10. FURTHER ANALYSIS OF SUBTYPES OF AUTOMATICALLY REINFORCED SIB: A REPLICATION AND QUANTITATIVE ANALYSIS OF PUBLISHED DATASETS

    PubMed Central

    Hagopian, Louis P.; Rooker, Griffin W.; Zarcone, Jennifer R.; Bonner, Andrew C.; Arevalo, Alexander R.

    2017-01-01

    Hagopian, Rooker, and Zarcone (2015) evaluated a model for subtyping automatically reinforced self-injurious behavior (SIB) based on its sensitivity to changes in functional analysis conditions and the presence of self-restraint. The current study tested the generality of the model by applying it to all datasets of automatically reinforced SIB published from 1982 to 2015. We identified 49 datasets that included sufficient data to permit subtyping. Similar to the original study, Subtype-1 SIB was generally amenable to treatment using reinforcement alone, whereas Subtype-2 SIB was not. Conclusions could not be drawn about Subtype-3 SIB due to the small number of datasets. Nevertheless, the findings support the generality of the model and suggest that sensitivity of SIB to disruption by alternative reinforcement is an important dimension of automatically reinforced SIB. Findings also suggest that automatically reinforced SIB should no longer be considered a single category and that additional research is needed to better understand and treat Subtype-2 SIB. PMID:28032344

  11. Evaluation and Sensitivity Analysis of an Ocean Model Response to Hurricane Ivan (PREPRINT)

    DTIC Science & Technology

    2009-05-18

    analysis of upper-limb meridional overturning circulation interior ocean pathways in the tropical/subtropical Atlantic. In: Interhemispheric Water... diminishing returns are encountered when either resolution is increased. ... Introduction: Coupled ocean-atmosphere general circulation models have become... northwest Caribbean Sea and GOM. Evaluation is difficult because ocean general circulation models incorporate a large suite of numerical algorithms

  12. Applying an intelligent model and sensitivity analysis to inspect mass transfer kinetics, shrinkage and crust color changes of deep-fat fried ostrich meat cubes.

    PubMed

    Amiryousefi, Mohammad Reza; Mohebbi, Mohebbat; Khodaiyan, Faramarz

    2014-01-01

    The objectives of this study were to use image analysis and artificial neural network (ANN) modelling to predict mass transfer kinetics as well as color changes and shrinkage of deep-fat fried ostrich meat cubes. Two generalized feedforward networks were separately developed using the operating conditions as inputs. Results based on the high correlation coefficients between experimental and predicted values showed proper fitting. Sensitivity analysis of the selected ANNs showed that, among the input variables, moisture content (MC) and fat content (FC) were most sensitive to frying temperature. Similarly, for the second ANN architecture, microwave power density was the most influential variable, having the maximum influence on both shrinkage percentage and color changes. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Global sensitivity analysis for urban water quality modelling: Terminology, convergence and comparison of different methods

    NASA Astrophysics Data System (ADS)

    Vanrolleghem, Peter A.; Mannina, Giorgio; Cosenza, Alida; Neumann, Marc B.

    2015-03-01

    Sensitivity analysis represents an important step in improving the understanding and use of environmental models. Indeed, by means of global sensitivity analysis (GSA), modellers may identify both important (factor prioritisation) and non-influential (factor fixing) model factors. No general rule has yet been defined for verifying the convergence of the GSA methods. In order to fill this gap, this paper presents a convergence analysis of three widely used GSA methods (SRC, Extended FAST and Morris screening) for an urban drainage stormwater quality-quantity model. After convergence was achieved, the results of each method were compared. In particular, a discussion on the peculiarities, applicability and reliability of the three methods is presented. Moreover, a graphical Venn-diagram-based classification scheme and a precise terminology for better identifying important, interacting and non-influential factors for each method are proposed. In terms of convergence, it was shown that sensitivity indices related to factors of the quantity model achieve convergence faster. Results for the Morris screening method deviated considerably from the other methods. Factors related to the quality model require a much higher number of simulations than the number suggested in the literature for achieving convergence with this method. In fact, the results have shown that the term "screening" is improperly used, as the method may exclude important factors from further analysis. Moreover, for the presented application the convergence analysis shows more stable sensitivity coefficients for the Extended FAST method than for SRC and Morris screening. Substantial agreement in terms of factor fixing was found between the Morris screening and Extended FAST methods. In general, the water-quality-related factors exhibited more important interactions than factors related to water quantity. Furthermore, in contrast to water quantity model outputs, water quality model outputs were found to be characterised by high non-linearity.
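The Morris screening method discussed above scores each factor by its elementary effects. A simplified one-at-a-time sketch (a full Morris design uses trajectories through the input space; the toy model below is hypothetical):

```python
import random

def morris_screening(model, dim, r, delta=0.1, seed=1):
    """Elementary-effects screening (Morris-style). Returns per-factor
    (mu_star, sigma): the mean absolute effect and the effect spread.
    sigma > 0 flags non-linearity or interactions for that factor."""
    rng = random.Random(seed)
    effects = [[] for _ in range(dim)]
    for _ in range(r):
        base = [rng.uniform(0.0, 1.0 - delta) for _ in range(dim)]
        y0 = model(base)
        for i in range(dim):
            x = list(base)
            x[i] += delta                               # perturb one factor
            effects[i].append((model(x) - y0) / delta)  # elementary effect
    mu_star = [sum(abs(e) for e in es) / r for es in effects]
    sigma = []
    for es in effects:
        m = sum(es) / r
        sigma.append((sum((e - m) ** 2 for e in es) / r) ** 0.5)
    return mu_star, sigma

# toy model: linear in x0, non-linear in x1, inert x2
mu_star, sigma = morris_screening(lambda x: 3.0 * x[0] + x[1] ** 2, 3, 200)
```

The convergence question raised in the abstract amounts to asking how large r must be before mu_star and sigma stop changing.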

  14. Sensitivity Analysis of ProSEDS (Propulsive Small Expendable Deployer System) Data Communication System

    NASA Technical Reports Server (NTRS)

    Park, Nohpill; Reagan, Shawn; Franks, Greg; Jones, William G.

    1999-01-01

    This paper discusses analytical approaches to evaluating the performance of spacecraft on-board computing systems, with the ultimate aim of achieving reliable spacecraft data communication systems. The sensitivity analysis approach for the memory system of ProSEDS (Propulsive Small Expendable Deployer System), as part of its data communication system, will be investigated. Also, general issues and possible approaches to a reliable spacecraft on-board interconnection network and processor array will be shown. Performance issues of spacecraft on-board computing systems, such as sensitivity, throughput, delay and reliability, will be introduced and discussed.

  15. Spacecraft design sensitivity for a disaster warning satellite system

    NASA Technical Reports Server (NTRS)

    Maloy, J. E.; Provencher, C. E.; Leroy, B. E.; Braley, R. C.; Shumaker, H. A.

    1977-01-01

    A disaster warning satellite (DWS) is described for warning the general public of impending natural catastrophes. The concept is responsive to NOAA requirements and maximizes the use of ATS-6 technology. Upon completion of concept development, the study was extended to establishing the sensitivity of the DWSS spacecraft power, weight, and cost to variations in both warning and conventional communications functions. The results of this sensitivity analysis are presented.

  16. Local influence for generalized linear models with missing covariates.

    PubMed

    Shi, Xiaoyan; Zhu, Hongtu; Ibrahim, Joseph G

    2009-12-01

    In the analysis of missing data, sensitivity analyses are commonly used to check the sensitivity of the parameters of interest with respect to the missing data mechanism and other distributional and modeling assumptions. In this article, we formally develop a general local influence method to carry out sensitivity analyses of minor perturbations to generalized linear models in the presence of missing covariate data. We examine two types of perturbation schemes (the single-case and global perturbation schemes) for perturbing various assumptions in this setting. We show that the metric tensor of a perturbation manifold provides useful information for selecting an appropriate perturbation. We also develop several local influence measures to identify influential points and test model misspecification. Simulation studies are conducted to evaluate our methods, and real datasets are analyzed to illustrate the use of our local influence measures.

  17. Analysis of quantum information processors using quantum metrology

    NASA Astrophysics Data System (ADS)

    Kandula, Mark J.; Kok, Pieter

    2018-06-01

    Physical implementations of quantum information processing devices are generally not unique, and we are faced with the problem of choosing the best implementation. Here, we consider the sensitivity of quantum devices to variations in their different components. To measure this, we adopt a quantum metrological approach and find that the sensitivity of a device to variations in a component has a particularly simple general form. We use the concept of cost functions to establish a general practical criterion to decide between two different physical implementations of the same quantum device consisting of a variety of components. We give two practical examples of sensitivities of quantum devices to variations in beam splitter transmittivities: the Knill-Laflamme-Milburn (KLM) and reverse nonlinear sign gates for linear optical quantum computing with photonic qubits, and the enhanced optical Bell detectors by Grice and Ewert and van Loock. We briefly compare the sensitivity to the diamond distance and find that the latter is less suited for studying the behavior of components embedded within the larger quantum device.

  18. Computer-Aided Communication Satellite System Analysis and Optimization.

    ERIC Educational Resources Information Center

    Stagl, Thomas W.; And Others

    Various published computer programs for fixed/broadcast communication satellite system synthesis and optimization are discussed. The rationale for selecting General Dynamics/Convair's Satellite Telecommunication Analysis and Modeling Program (STAMP) in modified form to aid in the system costing and sensitivity analysis work in the Program on…

  19. A geostatistics-informed hierarchical sensitivity analysis method for complex groundwater flow and transport modeling

    NASA Astrophysics Data System (ADS)

    Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.

    2017-05-01

    Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.
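The grouped sensitivity indices defined by the hierarchical method can be illustrated with a pick-freeze estimator that freezes an entire group of inputs at once. A minimal sketch, with a toy linear model standing in for the flow-and-transport model (group labels are illustrative):

```python
import random

def group_index(model, dim, group, n, seed=0):
    """Pick-freeze estimate of the closed sensitivity index of a *group* of
    inputs: Var(E[Y | X_group]) / Var(Y). Inputs are i.i.d. uniform on [0, 1)."""
    rng = random.Random(seed)
    prod_sum = y_sum = y2_sum = 0.0
    for _ in range(n):
        x = [rng.random() for _ in range(dim)]
        xp = [rng.random() for _ in range(dim)]
        for i in group:
            xp[i] = x[i]                 # freeze the whole group at once
        y, yp = model(x), model(xp)
        prod_sum += y * yp
        y_sum += y
        y2_sum += y * y
    mean = y_sum / n
    var = y2_sum / n - mean * mean
    return (prod_sum / n - mean * mean) / var

model = lambda x: x[0] + x[1] + 3.0 * x[2]
s_bc = group_index(model, 3, {0, 1}, 20000)   # e.g. "boundary conditions" group
s_perm = group_index(model, 3, {2}, 20000)    # e.g. "permeability" group
```

Grouping shrinks the number of indices to estimate, which is one source of the computational savings the abstract describes (the geostatistical realization-reduction is the other).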

  20. A Geostatistics-Informed Hierarchical Sensitivity Analysis Method for Complex Groundwater Flow and Transport Modeling

    NASA Astrophysics Data System (ADS)

    Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.

    2017-12-01

    Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multi-layer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed input variables.

  1. NPV Sensitivity Analysis: A Dynamic Excel Approach

    ERIC Educational Resources Information Center

    Mangiero, George A.; Kraten, Michael

    2017-01-01

    Financial analysts generally create static formulas for the computation of NPV. When they do so, however, it is not readily apparent how sensitive the value of NPV is to changes in multiple interdependent and interrelated variables. It is the aim of this paper to analyze this variability by employing a dynamic, visually graphic presentation using…
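The one-way sensitivity table that a dynamic spreadsheet produces can be sketched directly; the project cash flows and discount rates below are hypothetical:

```python
def npv(rate, cashflows):
    """Net present value: cashflows[0] is the initial outlay at t = 0 and
    cashflows[t] arrives at the end of year t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

# one-way sensitivity of NPV to the discount rate for a hypothetical project
cashflows = [-1000.0, 400.0, 400.0, 400.0]
table = {r: round(npv(r, cashflows), 2) for r in (0.05, 0.10, 0.15)}
```

Sweeping a second variable (e.g. a cash-flow growth rate) in a nested comprehension yields the two-way data table the article builds in Excel.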

  2. Performance and sensitivity analysis of the generalized likelihood ratio method for failure detection. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Bueno, R. A.

    1977-01-01

    Results of the generalized likelihood ratio (GLR) technique for the detection of failures in aircraft application are presented, and its relationship to the properties of the Kalman-Bucy filter is examined. Under the assumption that the system is perfectly modeled, the detectability and distinguishability of four failure types are investigated by means of analysis and simulations. Detection of failures is found satisfactory, but problems in identifying correctly the mode of a failure may arise. These issues are closely examined as well as the sensitivity of GLR to modeling errors. The advantages and disadvantages of this technique are discussed, and various modifications are suggested to reduce its limitations in performance and computational complexity.
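The GLR idea, maximizing a likelihood ratio over the unknown failure onset and magnitude, can be sketched for a step bias in a white residual sequence (the simplest failure signature; the full method monitors Kalman filter innovations):

```python
def glr_statistic(residuals, sigma):
    """2*log generalized likelihood ratio for a step bias of unknown size and
    unknown onset in a zero-mean white sequence with known sigma. For each
    hypothesised onset k the bias MLE is the tail mean, giving the closed
    form S_k^2 / (sigma^2 * m_k); the GLR maximizes over k."""
    best = 0.0
    for k in range(len(residuals)):          # hypothesised failure onset
        tail = residuals[k:]
        s = sum(tail)
        best = max(best, s * s / (sigma * sigma * len(tail)))
    return best

# hypothetical innovation sequences: nominal, and with a bias from step 4 on
clean = [0.3, -0.2, 0.1, -0.4, 0.2, -0.1, 0.3, -0.3]
faulty = clean[:4] + [r + 2.0 for r in clean[4:]]
```

Comparing the statistic against a threshold trades false alarms against missed detections; distinguishing failure *modes* requires separate signature vectors, which is where the identification problems noted in the abstract arise.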

  3. LSENS: A General Chemical Kinetics and Sensitivity Analysis Code for homogeneous gas-phase reactions. Part 1: Theory and numerical solution procedures

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 1 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 1 derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved. The accuracy and efficiency of LSENS are examined by means of various test problems, and comparisons with other methods and codes are presented. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.

  4. [Sensitivity analysis of AnnAGNPS model's hydrology and water quality parameters based on the perturbation analysis method].

    PubMed

    Xi, Qing; Li, Zhao-Fu; Luo, Chuan

    2014-05-01

    Sensitivity analysis of hydrology and water quality parameters is of great significance for an integrated model's construction and application. Based on the AnnAGNPS model's mechanism, 31 parameters in four major categories (terrain, hydrology and meteorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian River watershed, a typical small watershed of the hilly region around Taihu Lake, and the perturbation method was then used to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that, among the 11 terrain parameters, LS was sensitive to all model outputs, while RMN, RS and RVC were generally or less sensitive to the sediment output but insensitive to the remaining outputs. Among the hydrometeorological parameters, CN was more sensitive to runoff and sediment and relatively sensitive to the remaining outputs. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were less sensitive to sediment and particulate pollutants, while the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive to nitrogen and phosphorus nutrients. Among the soil parameters, K was quite sensitive to all outputs except runoff, and the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were less sensitive to the corresponding outputs. Runoff simulated and verified for the Zhongtian watershed during 2005-2010 showed good accuracy, with deviations of less than 10%. These results provide a direct reference for AnnAGNPS parameter selection and calibration, demonstrate that the perturbation method is practicable for parameter adjustment, show the model's adaptability to hydrologic simulation in the hilly region of the Taihu Lake basin, and provide a reference for the model's wider application in China.
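The perturbation method used in such studies amounts to computing a relative sensitivity coefficient per parameter: the relative change in output per relative change in input. A minimal sketch (the response function and parameter values are hypothetical, not AnnAGNPS):

```python
def relative_sensitivity(model, params, name, rel_step=0.1):
    """Perturbation-method sensitivity index: (dY/Y) / (dx/x), obtained by
    nudging one parameter by a relative step while holding the others at
    their base values."""
    base = model(params)
    perturbed = dict(params)
    perturbed[name] *= (1.0 + rel_step)
    return ((model(perturbed) - base) / base) / rel_step

# hypothetical runoff-style response, strongly curve-number (CN) driven
model = lambda p: p["CN"] ** 2 * p["LS"] * (1.0 + 0.01 * p["K"])
params = {"CN": 70.0, "LS": 1.2, "K": 0.3}
s = {name: round(relative_sensitivity(model, params, name), 3) for name in params}
```

Ranking parameters by |s| is what separates the "sensitive" from the "insensitive" groups reported in the abstract.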

  5. A general method for handling missing binary outcome data in randomized controlled trials.

    PubMed

    Jackson, Dan; White, Ian R; Mason, Dan; Sutton, Stephen

    2014-12-01

    The analysis of randomized controlled trials with incomplete binary outcome data is challenging. We develop a general method for exploring the impact of missing data in such trials, with a focus on abstinence outcomes. We propose a sensitivity analysis where standard analyses, which could include 'missing = smoking' and 'last observation carried forward', are embedded in a wider class of models. We apply our general method to data from two smoking cessation trials. A total of 489 and 1758 participants from two smoking cessation trials. The abstinence outcomes were obtained using telephone interviews. The estimated intervention effects from both trials depend on the sensitivity parameters used. The findings differ considerably in magnitude and statistical significance under quite extreme assumptions about the missing data, but are reasonably consistent under more moderate assumptions. A new method for undertaking sensitivity analyses when handling missing data in trials with binary outcomes allows a wide range of assumptions about the missing data to be assessed. In two smoking cessation trials the results were insensitive to all but extreme assumptions. © 2014 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.

  6. Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.

    2007-01-01

    To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
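The core trick, matching "value +/- tolerance" fields anywhere in an input file and replacing them with samples, can be sketched with a regular expression (the +/- token, field names, and uniform sampling below are illustrative assumptions, not the paper's exact implementation):

```python
import random
import re

# matches e.g. "5.25 +/- 0.01" anywhere in a line, regardless of file format
TOL = re.compile(r"(-?\d+(?:\.\d+)?)\s*\+/-\s*(\d+(?:\.\d+)?)")

def blur(text, rng):
    """Replace every 'value +/- tol' field with one uniform draw from
    [value - tol, value + tol], leaving the rest of the file untouched.
    Because only the tolerance token is parsed, the approach needs no
    knowledge of the input file's structure."""
    def draw(match):
        v, t = float(match.group(1)), float(match.group(2))
        return repr(rng.uniform(v - t, v + t))
    return TOL.sub(draw, text)

# hypothetical input deck with two toleranced fields
deck = "wall_temp = 5.25 +/- 0.01\nemissivity = 0.80 +/- 0.05\n"
sample = blur(deck, random.Random(42))
```

Calling `blur` once per Monte Carlo realization and running the simulation on each blurred deck reproduces the paper's workflow in miniature.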

  7. A geostatistics-informed hierarchical sensitivity analysis method for complex groundwater flow and transport modeling: GEOSTATISTICAL SENSITIVITY ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Chen, Xingyuan; Ye, Ming

    Sensitivity analysis is an important tool for quantifying uncertainty in the outputs of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a hierarchical sensitivity analysis method that (1) constructs an uncertainty hierarchy by analyzing the input uncertainty sources, and (2) accounts for the spatial correlation among parameters at each level of the hierarchy using geostatistical tools. The contribution of the uncertainty source at each hierarchy level is measured by sensitivity indices calculated using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally as driven by the dynamic interaction between groundwater and river water at the site. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed parameters.

  8. Analytic uncertainty and sensitivity analysis of models with input correlations

    NASA Astrophysics Data System (ADS)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

    Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. A practical application of the method is also proposed for the uncertainty and sensitivity analysis of a deterministic HIV model.
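For a linear model the analytic variance splits into an "independent" diagonal part and a correlation cross-term, which is exactly the quantity that decides whether input correlations can be ignored. A minimal sketch (the coefficients and correlation value are hypothetical):

```python
def model_variance(coeffs, variances, corr):
    """Exact variance of the linear model Y = sum_i a_i X_i given input
    variances and a correlation matrix: Var(Y) = a' Sigma a. The result is
    returned split into the diagonal (independence-only) part and the
    off-diagonal cross-term contributed by the correlations."""
    d = len(coeffs)
    independent = sum(coeffs[i] ** 2 * variances[i] for i in range(d))
    correlated = sum(
        coeffs[i] * coeffs[j] * corr[i][j] * (variances[i] * variances[j]) ** 0.5
        for i in range(d) for j in range(d) if i != j
    )
    return independent, correlated

# two unit-variance inputs with correlation 0.8: the cross-term adds 80%
# to the variance an independence assumption would report
ind, cross = model_variance([1.0, 1.0], [1.0, 1.0], [[1.0, 0.8], [0.8, 1.0]])
```

A large cross-term relative to the diagonal part signals that an independence-based sensitivity analysis would materially misstate the output uncertainty.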

  9. ASKI: A modular toolbox for scattering-integral-based seismic full waveform inversion and sensitivity analysis utilizing external forward codes

    NASA Astrophysics Data System (ADS)

    Schumacher, Florian; Friederich, Wolfgang

    Due to increasing computational resources, the development of new numerically demanding methods and software for imaging Earth's interior remains of high interest in the Earth sciences. Here, we give a description, from both a user's and a programmer's perspective, of the highly modular, flexible and extendable software package ASKI (Analysis of Sensitivity and Kernel Inversion), recently developed for iterative scattering-integral-based seismic full waveform inversion. In ASKI, the three fundamental steps of solving the seismic forward problem, computing waveform sensitivity kernels and deriving a model update are handled by independent software programs that interact via file output/input only. Furthermore, the spatial discretizations of the model space used for solving the seismic forward problem and for deriving model updates are kept completely independent. For this reason, ASKI does not contain a specific forward solver but instead provides a general interface to established community wave propagation codes. Moreover, the third fundamental step of deriving a model update can be repeated at relatively low cost, applying different kinds of model regularization or re-selecting/weighting the inverted dataset, without the need to re-solve the forward problem or re-compute the kernels. Additionally, ASKI offers the user sensitivity and resolution analysis tools based on the full sensitivity matrix and allows customized workflows to be composed in a consistent computational environment. ASKI is written in modern Fortran and Python, is well documented and is freely available under the terms of the GNU General Public License (http://www.rub.de/aski).

  10. A generalized matching law analysis of cocaine vs. food choice in rhesus monkeys: effects of candidate 'agonist-based' medications on sensitivity to reinforcement.

    PubMed

    Hutsell, Blake A; Negus, S Stevens; Banks, Matthew L

    2015-01-01

    We have previously demonstrated reductions in cocaine choice produced by either continuous 14-day phendimetrazine and d-amphetamine treatment or removing cocaine availability under a cocaine vs. food choice procedure in rhesus monkeys. The aim of the present investigation was to apply the concatenated generalized matching law (GML) to cocaine vs. food choice dose-effect functions incorporating sensitivity to both the relative magnitude and price of each reinforcer. Our goal was to determine potential behavioral mechanisms underlying pharmacological treatment efficacy to decrease cocaine choice. A multi-model comparison approach was used to characterize dose- and time-course effects of both pharmacological and environmental manipulations on sensitivity to reinforcement. GML models provided an excellent fit of the cocaine choice dose-effect functions in individual monkeys. Reductions in cocaine choice by both pharmacological and environmental manipulations were principally produced by systematic decreases in sensitivity to reinforcer price and non-systematic changes in sensitivity to reinforcer magnitude. The modeling approach used provides a theoretical link between the experimental analysis of choice and pharmacological treatments being evaluated as candidate 'agonist-based' medications for cocaine addiction. The analysis suggests that monoamine releaser treatment efficacy to decrease cocaine choice was mediated by selectively increasing the relative price of cocaine. Overall, the net behavioral effect of these pharmacological treatments was to increase substitutability of food pellets, a nondrug reinforcer, for cocaine. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
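    The concatenated generalized matching law described above amounts to a linear regression in log ratios: log of the choice ratio is modeled as a weighted sum of the log magnitude ratio and the log price ratio plus a bias term. A minimal sketch of fitting the sensitivity parameters, using entirely synthetic (hypothetical) choice data rather than the study's data:

```python
import numpy as np

# Concatenated GML (sketch): log(B1/B2) = sM*log(M1/M2) + sP*log(P1/P2) + log(b),
# where sM and sP are sensitivities to reinforcer magnitude and price, and b is bias.
# All numbers below are made up for illustration.
rng = np.random.default_rng(1)
n = 50
log_mag = rng.uniform(-1, 1, n)      # log reinforcer-magnitude ratios
log_price = rng.uniform(-1, 1, n)    # log reinforcer-price ratios
sM_true, sP_true, log_b_true = 0.8, 0.5, 0.1
log_choice = (sM_true * log_mag + sP_true * log_price + log_b_true
              + 0.01 * rng.standard_normal(n))   # small measurement noise

# Ordinary least squares recovers the sensitivities and bias.
X = np.column_stack([log_mag, log_price, np.ones(n)])
(sM, sP, log_b), *_ = np.linalg.lstsq(X, log_choice, rcond=None)
```

    In the study's framing, a treatment that "decreases sensitivity to reinforcer price" would show up as a systematic shift in the fitted sP across treatment conditions.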

  11. Design sensitivity analysis and optimization tool (DSO) for sizing design applications

    NASA Technical Reports Server (NTRS)

    Chang, Kuang-Hua; Choi, Kyung K.; Perng, Jyh-Hwa

    1992-01-01

    The DSO tool, a structural design software system that provides the designer with a graphics-based menu-driven design environment to perform easy design optimization for general applications, is presented. Three design stages, preprocessing, design sensitivity analysis, and postprocessing, are implemented in the DSO to allow the designer to carry out the design process systematically. A framework, including data base, user interface, foundation class, and remote module, has been designed and implemented to facilitate software development for the DSO. A number of dedicated commercial software/packages have been integrated in the DSO to support the design procedures. Instead of parameterizing an FEM, design parameters are defined on a geometric model associated with physical quantities, and the continuum design sensitivity analysis theory is implemented to compute design sensitivity coefficients using postprocessing data from the analysis codes. A tracked vehicle road wheel is given as a sizing design application to demonstrate the DSO's easy and convenient design optimization process.

  12. What Constitutes a "Good" Sensitivity Analysis? Elements and Tools for a Robust Sensitivity Analysis with Reduced Computational Cost

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin

    2016-04-01

    Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the "global sensitivity" of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to "variogram analysis", that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.

  13. General Outcome Measures for Verbal Operants

    ERIC Educational Resources Information Center

    Kubina, Richard M., Jr.; Wolfe, Pamela; Kostewicz, Douglas E.

    2009-01-01

    A general outcome measure (GOM) can be used to show progress towards a long-term goal. GOMs should sample domains of behavior across ages, be sensitive to change over time, be inexpensive and easy to use, and facilitate decision making. Skinner's (1957) analysis of verbal behavior may benefit from the development of GOM. To develop GOM, we…

  14. A comparison of computer-assisted detection (CAD) programs for the identification of colorectal polyps: performance and sensitivity analysis, current limitations and practical tips for radiologists.

    PubMed

    Bell, L T O; Gandhi, S

    2018-06-01

    To directly compare the accuracy and speed of analysis of two commercially available computer-assisted detection (CAD) programs in detecting colorectal polyps. In this retrospective single-centre study, patients who had colorectal polyps identified on computed tomography colonography (CTC) and subsequent lower gastrointestinal endoscopy, were analysed using two commercially available CAD programs (CAD1 and CAD2). Results were compared against endoscopy to ascertain sensitivity and positive predictive value (PPV) for colorectal polyps. Time taken for CAD analysis was also calculated. CAD1 demonstrated a sensitivity of 89.8%, PPV of 17.6% and mean analysis time of 125.8 seconds. CAD2 demonstrated a sensitivity of 75.5%, PPV of 44.0% and mean analysis time of 84.6 seconds. The sensitivity and PPV for colorectal polyps and CAD analysis times can vary widely between current commercially available CAD programs. There is still room for improvement. Generally, there is a trade-off between sensitivity and PPV, and so further developments should aim to optimise both. Information on these factors should be made routinely available, so that an informed choice on their use can be made. This information could also potentially influence the radiologist's use of CAD results. Copyright © 2018 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  15. Survey of methods for calculating sensitivity of general eigenproblems

    NASA Technical Reports Server (NTRS)

    Murthy, Durbha V.; Haftka, Raphael T.

    1987-01-01

    A survey of methods for sensitivity analysis of the algebraic eigenvalue problem for non-Hermitian matrices is presented. In addition, a modification of one method based on a better normalizing condition is proposed. Methods are classified as Direct or Adjoint and are evaluated for efficiency. Operation counts are presented in terms of matrix size, number of design variables and number of eigenvalues and eigenvectors of interest. The effect of the sparsity of the matrix and its derivatives is also considered, and typical solution times are given. General guidelines are established for the selection of the most efficient method.
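    For a simple eigenvalue of a non-Hermitian matrix, the adjoint-style sensitivity surveyed here has a well-known closed form: with right eigenvector x and left eigenvector y (an eigenvector of the transpose for the same eigenvalue), the derivative of the eigenvalue with respect to a parameter p is (y^T A'(p) x) / (y^T x). A sketch with a made-up random matrix family A(p) = A0 + p*A1, checked against a finite difference:

```python
import numpy as np

def eig_sensitivity(A, dA, k=0):
    """d(lambda_k)/dp for a simple eigenvalue of A(p), given dA = A'(p).

    Uses the standard left/right-eigenvector formula; valid when the
    eigenvalue is simple (non-degenerate).
    """
    w, V = np.linalg.eig(A)
    wl, U = np.linalg.eig(A.T)               # columns are left eigenvectors of A
    j = np.argmin(np.abs(wl - w[k]))         # left eigenvector matching w[k]
    x, y = V[:, k], U[:, j]
    return w[k], (y @ dA @ x) / (y @ x)

# Illustrative example only: random 4x4 matrices, not from the survey.
rng = np.random.default_rng(2)
A0, A1 = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
lam0, dlam = eig_sensitivity(A0, A1)

# Finite-difference check of the analytic derivative.
h = 1e-6
wh = np.linalg.eigvals(A0 + h * A1)
fd = (wh[np.argmin(np.abs(wh - lam0))] - lam0) / h
```

    This adjoint formula needs only one eigenvalue/eigenvector pair per design variable, which is the efficiency argument such surveys weigh against direct differentiation of the full eigenproblem.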

  16. Discrete sensitivity derivatives of the Navier-Stokes equations with a parallel Krylov solver

    NASA Technical Reports Server (NTRS)

    Ajmani, Kumud; Taylor, Arthur C., III

    1994-01-01

    This paper solves an 'incremental' form of the sensitivity equations derived by differentiating the discretized thin-layer Navier Stokes equations with respect to certain design variables of interest. The equations are solved with a parallel, preconditioned Generalized Minimal RESidual (GMRES) solver on a distributed-memory architecture. The 'serial' sensitivity analysis code is parallelized by using the Single Program Multiple Data (SPMD) programming model, domain decomposition techniques, and message-passing tools. Sensitivity derivatives are computed for low and high Reynolds number flows over a NACA 1406 airfoil on a 32-processor Intel Hypercube, and found to be identical to those computed on a single-processor Cray Y-MP. It is estimated that the parallel sensitivity analysis code has to be run on 40-50 processors of the Intel Hypercube in order to match the single-processor processing time of a Cray Y-MP.

  17. LSENS, A General Chemical Kinetics and Sensitivity Analysis Code for Homogeneous Gas-Phase Reactions. Part 2; Code Description and Usage

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part II of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part II describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part I (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part III (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.

  18. Measuring utilities of severe facial disfigurement and composite tissue allotransplantation of the face in patients with severe face and neck burns from the perspectives of the general public, medical experts and patients.

    PubMed

    Chuback, Jennifer; Yarascavitch, Blake; Yarascavitch, Alec; Kaur, Manraj Nirmal; Martin, Stuart; Thoma, Achilleas

    2015-11-01

    In an otherwise healthy patient with severe facial disfigurement secondary to burns, composite tissue allotransplantation (CTA) results in life-long immunosuppressive therapy and its associated risk. In this study, we assess the net gain of CTA of the face (in terms of utilities) from the perspectives of the patient, the general public and medical experts, in comparison to the risks. Using the standard gamble (SG) and time trade-off (TTO) techniques, utilities were obtained from members of the general public, patients with facial burns, and medical experts (n=25 for each group). The gain (or loss) in utility and quality-adjusted life years (QALYs) were estimated using face-to-face interviews. A sensitivity analysis using variable life expectancy was conducted. From the patient perspective, severe facial burn was associated with a health utility value of 0.53, and 27.1 QALYs as calculated by SG, and a health utility value of 0.57, and 28.9 QALYs as calculated by TTO. In comparison, CTA of the face was associated with a health utility value of 0.64, and 32.3 QALYs (or 18.2 QALYs per the sensitivity analysis) as calculated by SG, and a health utility value of 0.67, and 34.1 QALYs (or 19.2 QALYs per the sensitivity analysis) as calculated by TTO. However, a loss of 8.9 QALYs (by the SG method) to 9.5 QALYs (by the TTO method) was observed when the life expectancy was decreased in the sensitivity analysis. Similar results were obtained from the perspectives of the general public and medical experts. We found that severe facial disfigurement is associated with a significant reduction in health-related quality of life, and that CTA has the potential to improve this. Further, we found that a trade-off exists between life expectancy and the gain in QALYs, i.e. if life expectancy following CTA of the face is reduced, the gain in QALYs is also diminished. This trade-off needs to be validated in future studies. Copyright © 2015 Elsevier Ltd and ISBI. All rights reserved.
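    The QALY arithmetic underlying the trade-off in this record is simple: QALYs are the utility weight multiplied by remaining life years. A sketch using the patient-perspective SG utilities from the record (0.53 for severe burn, 0.64 for CTA); the 50-year remaining life expectancy is an assumption chosen only to make the arithmetic concrete, not a value reported in the study:

```python
def qaly(utility, life_years):
    """Quality-adjusted life years: utility weight times remaining life years."""
    return utility * life_years

# Utilities from the record; life expectancy is a hypothetical round number.
life_years = 50.0
gain = qaly(0.64, life_years) - qaly(0.53, life_years)           # QALYs gained by CTA

# The sensitivity analysis in the record varies life expectancy: if CTA
# substantially shortens remaining life, the gain can flip sign.
gain_reduced = qaly(0.64, life_years / 2) - qaly(0.53, life_years)
```

    This is exactly the trade-off the abstract describes: the utility gain from CTA is offset, and eventually reversed, as the assumed post-transplant life expectancy falls.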

  19. Analysis techniques for multivariate root loci. [a tool in linear control systems

    NASA Technical Reports Server (NTRS)

    Thompson, P. M.; Stein, G.; Laub, A. J.

    1980-01-01

    Analysis techniques are developed for the multivariable root locus and the multivariable optimal root locus. The generalized eigenvalue problem is used to compute angles and sensitivities for both types of loci, and an algorithm is presented that determines the asymptotic properties of the optimal root locus.

  20. Computational mechanics analysis tools for parallel-vector supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Baddourah, Majdi; Qin, Jiangning

    1993-01-01

    Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigensolution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization search analysis and domain decomposition. The source code for many of these algorithms is available.

  1. Design sensitivity analysis of rotorcraft airframe structures for vibration reduction

    NASA Technical Reports Server (NTRS)

    Murthy, T. Sreekanta

    1987-01-01

    Optimization of rotorcraft structures for vibration reduction was studied. The objective of this study is to develop practical computational procedures for structural optimization of airframes subject to steady-state vibration response constraints. One of the key elements of any such computational procedure is design sensitivity analysis. A method for design sensitivity analysis of airframes under vibration response constraints is presented. The mathematical formulation of the method and its implementation as a new solution sequence in MSC/NASTRAN are described. The results of the application of the method to a simple finite element stick model of the AH-1G helicopter airframe are presented and discussed. Selection of design variables that are most likely to bring about changes in the response at specified locations in the airframe is based on consideration of forced response strain energy. Sensitivity coefficients are determined for the selected design variable set. Constraints on the natural frequencies are also included in addition to the constraints on the steady-state response. Sensitivity coefficients for these constraints are determined. Results of the analysis and insights gained in applying the method to the airframe model are discussed. The general nature of future work to be conducted is described.

  2. A new framework for comprehensive, robust, and efficient global sensitivity analysis: 1. Theory

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin V.

    2016-01-01

    Computer simulation models are continually growing in complexity with increasingly more factors to be identified. Sensitivity Analysis (SA) provides an essential means for understanding the role and importance of these factors in producing model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to "variogram analysis," that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. Synthetic functions that resemble actual model response surfaces are used to illustrate the concepts, and show VARS to be as much as two orders of magnitude more computationally efficient than the state-of-the-art Sobol approach. In a companion paper, we propose a practical implementation strategy, and demonstrate the effectiveness, efficiency, and reliability (robustness) of the VARS framework on real-data case studies.
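    The "variogram analysis" analogy at the heart of VARS can be illustrated with a directional variogram of a response surface: for factor x_i and perturbation scale h, gamma_i(h) = 0.5 * E[(f(x + h e_i) - f(x))^2]. The sketch below is illustrative only (a toy additive model on the unit cube), not the VARS or STAR-VARS algorithm itself:

```python
import numpy as np

def directional_variogram(f, d, i, h, n=20_000, rng=np.random.default_rng(3)):
    """0.5 * E[(f(x + h*e_i) - f(x))^2], estimated by Monte Carlo.

    Larger values mean the response changes more when factor i is
    perturbed by h, i.e. higher sensitivity at that scale.
    """
    X = rng.uniform(0.0, 1.0, (n, d))
    Xh = X.copy()
    Xh[:, i] = X[:, i] + h           # perturb only factor i
    return 0.5 * np.mean((f(Xh) - f(X)) ** 2)

# Toy model f = x1 + 3*x2: the increments are exactly h and 3h, so the
# variogram along x2 is nine times that along x1 at any scale h.
f = lambda X: X[:, 0] + 3.0 * X[:, 1]
g1 = directional_variogram(f, d=2, i=0, h=0.1)   # 0.5 * (0.1)^2 = 0.005
g2 = directional_variogram(f, d=2, i=1, h=0.1)   # 0.5 * (0.3)^2 = 0.045
```

    Evaluating gamma_i(h) across a range of h is what gives VARS its multi-scale view of sensitivity; derivative-based (Morris) and variance-based (Sobol) measures correspond to the small-h and integrated behavior, respectively.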

  3. Empirically Derived and Simulated Sensitivity of Vegetation to Climate Across Global Gradients of Temperature and Precipitation

    NASA Astrophysics Data System (ADS)

    Quetin, G. R.; Swann, A. L. S.

    2017-12-01

    Successfully predicting the state of vegetation in a novel environment depends on our process-level understanding of the ecosystem and its interactions with the environment. We derive a global empirical map of the sensitivity of vegetation to climate using the response of satellite-observed greenness and leaf area to interannual variations in temperature and precipitation. Our analysis provides observations of ecosystem functioning (the vegetation's interactions with the physical environment) across a wide range of climates and provides a functional constraint for hypotheses engendered in process-based models. We infer mechanisms constraining ecosystem functioning by contrasting how the observed and simulated sensitivity of vegetation to climate varies across climate space. Our analysis yields empirical evidence for multiple physical and biological mediators of the sensitivity of vegetation to climate as a systematic change across climate space. Our comparison of remote sensing-based vegetation sensitivity with modeled estimates provides evidence for which physiological mechanisms (photosynthetic efficiency, respiration, water supply, atmospheric water demand, and sunlight availability) dominate ecosystem functioning in places with different climates. Earth system models are generally successful in reproducing the broad sign and shape of ecosystem functioning across climate space. However, this general agreement breaks down in hot, wet climates, where models simulate less leaf area during a warmer year, while observations show a mixed response but overall more leaf area during warmer years. In addition, the simulated ecosystem interaction with temperature is generally larger and changes more rapidly across a gradient of temperature than is observed. We hypothesize that the amplified interaction and change are both due to a lack of adaptation and acclimation in the simulations. This discrepancy with observations suggests that simulated responses of vegetation to global warming, and feedbacks between vegetation and climate, are too strong in the models.

  4. Global sensitivity analysis of multiscale properties of porous materials

    NASA Astrophysics Data System (ADS)

    Um, Kimoon; Zhang, Xuan; Katsoulakis, Markos; Plechac, Petr; Tartakovsky, Daniel M.

    2018-02-01

    Ubiquitous uncertainty about pore geometry inevitably undermines the veracity of pore- and multi-scale simulations of transport phenomena in porous media. It raises two fundamental issues: sensitivity of effective material properties to pore-scale parameters and statistical parameterization of Darcy-scale models that accounts for pore-scale uncertainty. Homogenization-based maps of pore-scale parameters onto their Darcy-scale counterparts facilitate both sensitivity analysis (SA) and uncertainty quantification. We treat uncertain geometric characteristics of a hierarchical porous medium as random variables to conduct global SA and to derive probabilistic descriptors of effective diffusion coefficients and effective sorption rate. Our analysis is formulated in terms of solute transport diffusing through a fluid-filled pore space, while sorbing to the solid matrix. Yet it is sufficiently general to be applied to other multiscale porous media phenomena that are amenable to homogenization.

  5. Failure Bounding And Sensitivity Analysis Applied To Monte Carlo Entry, Descent, And Landing Simulations

    NASA Technical Reports Server (NTRS)

    Gaebler, John A.; Tolson, Robert H.

    2010-01-01

    In the study of entry, descent, and landing, Monte Carlo sampling methods are often employed to study the uncertainty in the designed trajectory. The large number of uncertain inputs and outputs, coupled with complicated non-linear models, can make interpretation of the results difficult. Three methods that provide statistical insights are applied to an entry, descent, and landing simulation. The advantages and disadvantages of each method are discussed in terms of the insights gained versus the computational cost. The first method investigated was failure domain bounding which aims to reduce the computational cost of assessing the failure probability. Next a variance-based sensitivity analysis was studied for the ability to identify which input variable uncertainty has the greatest impact on the uncertainty of an output. Finally, probabilistic sensitivity analysis is used to calculate certain sensitivities at a reduced computational cost. These methods produce valuable information that identifies critical mission parameters and needs for new technology, but generally at a significant computational cost.

  6. LSENS, a general chemical kinetics and sensitivity analysis code for homogeneous gas-phase reactions. 2: Code description and usage

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 2 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 2 describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part 1 (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part 3 (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.

  7. Computing sensitivity and selectivity in parallel factor analysis and related multiway techniques: the need for further developments in net analyte signal theory.

    PubMed

    Olivieri, Alejandro C

    2005-08-01

    Sensitivity and selectivity are important figures of merit in multiway analysis, regularly employed for comparison of the analytical performance of methods and for experimental design and planning. They are especially interesting in the second-order advantage scenario, where the latter property allows for the analysis of samples with a complex background, permitting analyte determination even in the presence of unsuspected interferences. Since no general theory exists for estimating the multiway sensitivity, Monte Carlo numerical calculations have been developed for estimating variance inflation factors, as a convenient way of assessing both sensitivity and selectivity parameters for the popular parallel factor (PARAFAC) analysis and also for related multiway techniques. When the second-order advantage is achieved, the existing expressions derived from net analyte signal theory are only able to adequately cover cases where a single analyte is calibrated using second-order instrumental data. However, they fail for certain multianalyte cases, or when third-order data are employed, calling for an extension of net analyte theory. The results have strong implications in the planning of multiway analytical experiments.

  8. Working covariance model selection for generalized estimating equations.

    PubMed

    Carey, Vincent J; Wang, You-Gan

    2011-11-20

    We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice. Copyright © 2011 John Wiley & Sons, Ltd.

  9. Birth weight, current anthropometric markers, and high sensitivity C-reactive protein in Brazilian school children.

    PubMed

    Boscaini, Camile; Pellanda, Lucia Campos

    2015-01-01

    Studies have shown associations of birth weight with increased concentrations of high sensitivity C-reactive protein. This study assessed the relationship between birth weight, anthropometric and metabolic parameters during childhood, and high sensitivity C-reactive protein. A total of 612 Brazilian school children aged 5-13 years were included in the study. High sensitivity C-reactive protein was measured by particle-enhanced immunonephelometry. Nutritional status was assessed by body mass index, waist circumference, and skinfolds. Total cholesterol and fractions, triglycerides, and glucose were measured by enzymatic methods. Insulin sensitivity was determined by the homeostasis model assessment method. Statistical analysis included the chi-square test, the General Linear Model, and the General Linear Model for Gamma Distribution. Body mass index, waist circumference, and skinfolds were directly associated with birth weight (P < 0.001, P = 0.001, and P = 0.015, respectively). Large for gestational age children showed higher high sensitivity C-reactive protein levels (P < 0.001) than small for gestational age children. High birth weight is associated with higher levels of high sensitivity C-reactive protein, body mass index, waist circumference, and skinfolds. Being large for gestational age was associated with altered high sensitivity C-reactive protein and posed an additional risk factor for atherosclerosis in these school children, independent of current nutritional status.
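    The homeostasis model assessment used in this record to quantify insulin sensitivity is a closed-form score: HOMA-IR = fasting glucose (mmol/L) times fasting insulin (microU/mL) divided by 22.5. A sketch with hypothetical example values (not data from the study):

```python
def homa_ir(fasting_glucose_mmol_l, fasting_insulin_uU_ml):
    """Homeostasis model assessment of insulin resistance (HOMA-IR).

    Standard formula: glucose (mmol/L) * insulin (microU/mL) / 22.5.
    Higher scores indicate greater insulin resistance.
    """
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

# Hypothetical example: glucose 4.5 mmol/L, insulin 10 microU/mL.
score = homa_ir(4.5, 10.0)   # = 2.0
```

    Note that if glucose is reported in mg/dL, it must first be converted to mmol/L (divide by 18) before applying the formula.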

  10. Application of global sensitivity analysis methods to Takagi-Sugeno-Kang rainfall-runoff fuzzy models

    NASA Astrophysics Data System (ADS)

    Jacquin, A. P.; Shamseldin, A. Y.

    2009-04-01

    This study analyses the sensitivity of the parameters of Takagi-Sugeno-Kang rainfall-runoff fuzzy models previously developed by the authors. These models can be classified into two types: the first type is intended to account for the effect of changes in catchment wetness, and the second type incorporates seasonality as a source of non-linearity in the rainfall-runoff relationship. The sensitivity analysis is performed using two global sensitivity analysis methods, namely Regional Sensitivity Analysis (RSA) and Sobol's Variance Decomposition (SVD). In general, the RSA method has the disadvantage of not being able to detect sensitivities arising from parameter interactions. By contrast, the SVD method is suitable for analysing models where the model response surface is expected to be affected by interactions at a local scale and/or local optima, such as the rainfall-runoff fuzzy models analysed in this study. The data of six catchments of different geographical locations and sizes are used in the sensitivity analysis. The sensitivity of the model parameters is analysed in terms of two measures of goodness of fit, assessing the model performance from different points of view. These measures are the Nash-Sutcliffe criterion and the index of volumetric fit. The results of the study show that the sensitivity of the model parameters depends on both the type of non-linear effect (i.e. changes in catchment wetness or seasonality) that dominates the catchment's rainfall-runoff relationship and the measure used to assess the model performance. Acknowledgements: This research was supported by FONDECYT, Research Grant 11070130. We would also like to express our gratitude to Prof. Kieran M. O'Connor from the National University of Ireland, Galway, for providing the data used in this study.
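    Of the two goodness-of-fit measures named in this record, the Nash-Sutcliffe criterion has a standard closed form: NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2), equal to 1 for a perfect simulation and 0 for a simulation no better than the observed mean. A minimal sketch (the series below are made-up numbers):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency of simulated vs. observed series.

    NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    NSE = 1 means a perfect fit; NSE = 0 means the simulation is only
    as good as predicting the observed mean; NSE < 0 means worse.
    """
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Made-up illustration of the two limiting cases.
obs = [1.0, 2.0, 3.0, 4.0]
nse_perfect = nash_sutcliffe(obs, obs)         # 1.0
nse_mean = nash_sutcliffe(obs, [2.5] * 4)      # 0.0 (mean of obs is 2.5)
```

    Because NSE squares the residuals, it emphasizes fit to peak flows; the index of volumetric fit used alongside it in the study weighs overall water balance instead, which is why the record's sensitivity rankings depend on the measure chosen.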

  11. The Sensitivity of a Global Ocean Model to Wind Forcing: A Test Using Sea Level and Wind Observations from Satellites and Operational Analysis

    NASA Technical Reports Server (NTRS)

    Fu, L. L.; Chao, Y.

    1997-01-01

    Investigated in this study is the response of a global ocean general circulation model to forcing provided by two wind products: operational analysis from the National Centers for Environmental Prediction (NCEP), and observations made by the ERS-1 radar scatterometer.

  12. Generalized Linear Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Carpenter, James R.; Markley, F. Landis

    2014-01-01

    This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  13. An Attempt of Formalizing the Selection Parameters for Settlements Generalization in Small-Scales

    NASA Astrophysics Data System (ADS)

    Karsznia, Izabela

    2014-12-01

    The paper addresses one of the most important problems concerning context-sensitive settlement selection for small-scale maps. So far, no formal parameters for small-scale settlement generalization have been specified, hence the problem is an important and innovative challenge. It is also crucial from a practical point of view, as appropriate generalization algorithms need to be developed for the General Geographic Objects Database, an essential Spatial Data Infrastructure component in Poland. The author proposes and verifies quantitative generalization parameters for the settlement selection process in small-scale maps. The selection of settlements was carried out in two research areas - Lower Silesia and Łódź Province. Based on the conducted analysis, appropriate context-sensitive settlement selection parameters have been defined. Particular effort has been made to develop a methodology of quantitative settlement selection that would be useful in automation processes and that would make it possible to keep the specifics of generalized objects unchanged.

  14. Lexical prosody beyond first-language boundary: Chinese lexical tone sensitivity predicts English reading comprehension.

    PubMed

    Choi, William; Tong, Xiuli; Cain, Kate

    2016-08-01

    This 1-year longitudinal study examined the role of Cantonese lexical tone sensitivity in predicting English reading comprehension and the pathways underlying their relation. Multiple measures of Cantonese lexical tone sensitivity, English lexical stress sensitivity, Cantonese segmental phonological awareness, general auditory sensitivity, English word reading, and English reading comprehension were administered to 133 Cantonese-English unbalanced bilingual second graders. Structural equation modeling analysis identified transfer of Cantonese lexical tone sensitivity to English reading comprehension. This transfer was realized through a direct pathway via English stress sensitivity and also an indirect pathway via English word reading. These results suggest that prosodic sensitivity is an important factor influencing English reading comprehension and that it needs to be incorporated into theoretical accounts of reading comprehension across languages. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Sensitivity Analysis of Stability Problems of Steel Structures using Shell Finite Elements and Nonlinear Computation Methods

    NASA Astrophysics Data System (ADS)

    Kala, Zdeněk; Kala, Jiří

    2011-09-01

    The main focus of the paper is the analysis of the influence of residual stress on the ultimate limit state of a hot-rolled member in compression. The member was modelled using thin-walled elements of type SHELL 181 and meshed in the programme ANSYS. Geometrical and material non-linear analysis was used. The influence of residual stress was studied using variance-based sensitivity analysis. In order to obtain more general results, the non-dimensional slenderness was selected as a study parameter. Comparison of the influence of the residual stress with the influence of other dominant imperfections is illustrated in the conclusion of the paper. All input random variables were considered according to results of experimental research.

  16. Jet-A reaction mechanism study for combustion application

    NASA Technical Reports Server (NTRS)

    Lee, Chi-Ming; Kundu, Krishna; Acosta, Waldo

    1991-01-01

    Simplified chemical kinetic reaction mechanisms for the combustion of Jet A fuel were studied. Initially, 40 reacting species and 118 elementary chemical reactions were chosen based on a literature review. Through a sensitivity analysis with the LSENS General Kinetics and Sensitivity Analysis Code, the mechanism was reduced to 16 species and 21 elementary chemical reactions. This mechanism is first justified by comparison of calculated ignition delay times with available shock tube data, and then validated by comparison of calculated emissions from a plug flow reactor code with in-house flame tube data.

  17. Program Helps To Determine Chemical-Reaction Mechanisms

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.; Radhakrishnan, K.

    1995-01-01

    General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code developed for use in solving complex, homogeneous, gas-phase chemical-kinetics problems. Provides efficient and accurate chemical-kinetics computations and sensitivity analysis for a variety of problems, including those involving nonisothermal conditions. Incorporates mathematical models for static system, steady one-dimensional inviscid flow, reaction behind incident shock wave (with boundary-layer correction), and perfectly stirred reactor. Computations of equilibrium properties performed for following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. Written in FORTRAN 77 with exception of NAMELIST extensions used for input.

  18. [Criterion Validity of the German Version of the CES-D in the General Population].

    PubMed

    Jahn, Rebecca; Baumgartner, Josef S; van den Nest, Miriam; Friedrich, Fabian; Alexandrowicz, Rainer W; Wancata, Johannes

    2018-04-17

    The "Center for Epidemiologic Studies Depression Scale" (CES-D) is a well-known screening tool for depression. Until now, the criterion validity of the German version of the CES-D had not been investigated in a sample of the adult general population. 508 study participants from the Austrian general population completed the CES-D. ICD-10 diagnoses were established using the Schedules for Clinical Assessment in Neuropsychiatry (SCAN). Receiver Operating Characteristic (ROC) analysis was conducted, and possible gender differences were explored. Overall discriminating performance of the CES-D was sufficient (ROC-AUC 0.836). Using the traditional cut-off values of 15/16 and 21/22, the sensitivity was 43.2 % and 32.4 %, respectively. The cut-off value developed on the basis of our sample was 9/10, with a sensitivity of 81.1 % and a specificity of 74.3 %. There were no significant gender differences. This is the first study investigating the criterion validity of the German version of the CES-D in the general population. The optimal cut-off value yielded sufficient sensitivity and specificity, comparable to the values of other screening tools. © Georg Thieme Verlag KG Stuttgart · New York.
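
The cut-off analysis in this record amounts to computing a sensitivity/specificity pair at each candidate cut-off and picking the best trade-off. A minimal sketch with synthetic screening scores, assuming Youden's J as the optimality criterion (the abstract does not state which criterion was used, and the data below are invented, not the CES-D sample):

```python
def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity of the rule `score > cutoff => positive`."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s > cutoff)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s <= cutoff)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s <= cutoff)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s > cutoff)
    return tp / (tp + fn), tn / (tn + fp)

def best_cutoff(scores, labels):
    """Cut-off maximizing Youden's J = sensitivity + specificity - 1."""
    return max(sorted(set(scores)),
               key=lambda c: sum(sens_spec(scores, labels, c)))

# Synthetic screening data: cases (label 1) tend to score higher.
scores = [4, 6, 8, 9, 10, 12, 15, 18, 21, 25]
labels = [0, 0, 0, 0, 1, 0, 1, 1, 1, 1]
c = best_cutoff(scores, labels)
sens, spec = sens_spec(scores, labels, c)
```

Sweeping the cut-off over all observed scores and tracing the resulting (1 - specificity, sensitivity) pairs is exactly what the ROC curve plots.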

  19. What Do We Mean By Sensitivity Analysis? The Need For A Comprehensive Characterization Of Sensitivity In Earth System Models

    NASA Astrophysics Data System (ADS)

    Razavi, S.; Gupta, H. V.

    2014-12-01

    Sensitivity analysis (SA) is an important paradigm in the context of Earth System model development and application, and provides a powerful tool that serves several essential functions in modelling practice, including 1) Uncertainty Apportionment - attribution of total uncertainty to different uncertainty sources, 2) Assessment of Similarity - diagnostic testing and evaluation of similarities between the functioning of the model and the real system, 3) Factor and Model Reduction - identification of non-influential factors and/or insensitive components of model structure, and 4) Factor Interdependence - investigation of the nature and strength of interactions between the factors, and the degree to which factors intensify, cancel, or compensate for the effects of each other. A variety of sensitivity analysis approaches have been proposed, each of which formally characterizes a different "intuitive" understanding of what is meant by the "sensitivity" of one or more model responses to its dependent factors (such as model parameters or forcings). These approaches are based on different philosophies and theoretical definitions of sensitivity, and range from simple local derivatives and one-factor-at-a-time procedures to rigorous variance-based (Sobol-type) approaches. In general, each approach focuses on, and identifies, different features and properties of the model response and may therefore lead to different (even conflicting) conclusions about the underlying sensitivity. This presentation revisits the theoretical basis for sensitivity analysis, and critically evaluates existing approaches so as to demonstrate their flaws and shortcomings. With this background, we discuss several important properties of response surfaces that are associated with the understanding and interpretation of sensitivity. Finally, a new approach towards global sensitivity assessment is developed that is consistent with important properties of Earth System model response surfaces.
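
The record's point that different SA approaches can reach conflicting conclusions is easy to demonstrate: for a nonmonotonic response, a local derivative at a base point can report a factor as irrelevant while a variance-based view ranks it dominant. A toy illustration (the model and base point are invented for this sketch, not taken from the presentation):

```python
import math
import random

def local_derivative(f, x0, i, h=1e-6):
    """One-at-a-time local sensitivity: finite-difference df/dx_i at x0."""
    xp = list(x0)
    xp[i] += h
    return (f(xp) - f(x0)) / h

def main_effect_variance(f, x0, i, n=20000, seed=0):
    """Crude global view: variance of f when only factor i varies on U(0,1)."""
    rng = random.Random(seed)
    ys = []
    for _ in range(n):
        x = list(x0)
        x[i] = rng.random()
        ys.append(f(x))
    m = sum(ys) / n
    return sum((y - m) ** 2 for y in ys) / n

f = lambda x: 2 * x[0] + math.sin(2 * math.pi * x[1])
x0 = [0.5, 0.25]  # base point where the sine term is locally flat

d1 = local_derivative(f, x0, 1)      # ~0: factor 2 looks irrelevant locally
v0 = main_effect_variance(f, x0, 0)  # ~1/3
v1 = main_effect_variance(f, x0, 1)  # ~1/2: factor 2 dominates globally
```

The two rankings disagree because each method formalizes a different intuition about "sensitivity", which is precisely the ambiguity the presentation targets.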

  20. Classification of Phase Transitions by Microcanonical Inflection-Point Analysis

    NASA Astrophysics Data System (ADS)

    Qi, Kai; Bachmann, Michael

    2018-05-01

    By means of the principle of minimal sensitivity we generalize the microcanonical inflection-point analysis method by probing derivatives of the microcanonical entropy for signals of transitions in complex systems. A strategy of systematically identifying and locating independent and dependent phase transitions of any order is proposed. The power of the generalized method is demonstrated in applications to the ferromagnetic Ising model and a coarse-grained model for polymer adsorption onto a substrate. The results shed new light on the intrinsic phase structure of systems with cooperative behavior.

  1. Variogram Analysis of Response surfaces (VARS): A New Framework for Global Sensitivity Analysis of Earth and Environmental Systems Models

    NASA Astrophysics Data System (ADS)

    Razavi, S.; Gupta, H. V.

    2015-12-01

    Earth and environmental systems models (EESMs) are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. This complexity and dimensionality are manifested in the many different factors in EESMs (e.g., model parameters, forcings, and boundary conditions) that must be identified. Sensitivity Analysis (SA) provides an essential means for characterizing the role and importance of such factors in producing the model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to 'variogram analysis', that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that the Morris (derivative-based) and Sobol (variance-based) methods and their extensions are limiting cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, which includes a new sampling strategy called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
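
The variogram analogy at the heart of VARS can be sketched directly: the directional variogram gamma_i(h) = 0.5 * E[(y(x + h*e_i) - y(x))^2] summarizes sensitivity to factor i across perturbation scales h, with small-h behaviour carrying derivative-like (Morris) information and large-h behaviour variance-like (Sobol) information. A minimal brute-force estimator (toy model and plain random base points, not the STAR-VARS sampling scheme):

```python
import random

def directional_variogram(model, i, h, n_factors=2, n_base=5000, seed=0):
    """gamma_i(h) = 0.5 * E[(y(x + h*e_i) - y(x))^2] over random base points."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_base):
        # Sample base points so that x + h*e_i stays inside the unit hypercube.
        x = [rng.random() * (1 - h) for _ in range(n_factors)]
        xh = list(x)
        xh[i] += h
        acc += (model(xh) - model(x)) ** 2
    return 0.5 * acc / n_base

model = lambda x: x[0] + 2 * x[1] ** 2
g0 = directional_variogram(model, 0, h=0.1)  # linear factor: gamma = h^2 / 2
g1 = directional_variogram(model, 1, h=0.1)  # quadratic factor: larger gamma
```

Evaluating gamma_i over a range of h values, rather than at a single scale, is what gives VARS its multi-scale characterization of sensitivity.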

  2. Biased and less sensitive: A gamified approach to delay discounting in heroin addiction.

    PubMed

    Scherbaum, Stefan; Haber, Paul; Morley, Kirsten; Underhill, Dylan; Moustafa, Ahmed A

    2018-03-01

    People with addiction continue to use drugs despite adverse long-term consequences, a pattern reflected in steeper delay discounting. We hypothesized (a) that this deficit persists during substitution treatment, and (b) that it might be related not only to a desire for immediate gratification, but also to a lower sensitivity for optimal decision making. We investigated how individuals with a history of heroin addiction perform (compared to healthy controls) in a virtual reality delay discounting task. This novel task adds to established measures of delay discounting an assessment of the optimality of decisions, in particular how far decisions are influenced by a general choice bias and/or a reduced sensitivity to the relative value of the two alternative rewards. We used this measure of optimality to apply diffusion model analysis to the behavioral data to analyze the interaction between decision optimality and reaction time. The addiction group consisted of 25 patients with a history of heroin dependency currently participating in a methadone maintenance program; the control group consisted of 25 healthy participants with no history of substance abuse, who were recruited from the Western Sydney community. The patient group demonstrated greater levels of delay discounting compared to the control group, which is broadly in line with previous observations. Diffusion model analysis yielded a reduced sensitivity for the optimality of a decision in the patient group compared to the control group. This reduced sensitivity was reflected in lower rates of information accumulation and higher decision criteria. Increased discounting in individuals with heroin addiction is related not only to a generally increased bias to immediate gratification, but also to reduced sensitivity for the optimality of a decision. This finding is in line with other findings about the sensitivity of addicts in distinguishing optimal from nonoptimal choice options.

  3. Population and High-Risk Group Screening for Glaucoma: The Los Angeles Latino Eye Study

    PubMed Central

    Francis, Brian A.; Vigen, Cheryl; Lai, Mei-Ying; Winarko, Jonathan; Nguyen, Betsy; Azen, Stanley

    2011-01-01

    Purpose. To evaluate the ability of various screening tests, both individually and in combination, to detect glaucoma in the general Latino population and high-risk subgroups. Methods. The Los Angeles Latino Eye Study is a population-based study of eye disease in Latinos 40 years of age and older. Participants (n = 6082) underwent Humphrey visual field testing (HVF), frequency doubling technology (FDT) perimetry, measurement of intraocular pressure (IOP) and central corneal thickness (CCT), and independent assessment of optic nerve vertical cup disc (C/D) ratio. Screening parameters were evaluated for three definitions of glaucoma based on optic disc, visual field, and a combination of both. Analyses were also conducted for high-risk subgroups (family history of glaucoma, diabetes mellitus, and age ≥65 years). Sensitivity, specificity, and receiver operating characteristic curves were calculated for those continuous parameters independently associated with glaucoma. Classification and regression tree (CART) analysis was used to develop a multivariate algorithm for glaucoma screening. Results. Preset cutoffs for screening parameters yielded a generally poor balance of sensitivity and specificity (sensitivity/specificity for IOP ≥21 mm Hg and C/D ≥0.8 was 0.24/0.97 and 0.60/0.98, respectively). Assessment of high-risk subgroups did not improve the sensitivity/specificity of individual screening parameters. A CART analysis using multiple screening parameters—C/D, HVF, and IOP—substantially improved the balance of sensitivity and specificity (sensitivity/specificity 0.92/0.92). Conclusions. No single screening parameter is useful for glaucoma screening. However, a combination of vertical C/D ratio, HVF, and IOP provides the best balance of sensitivity/specificity and is likely to provide the highest yield in glaucoma screening programs. PMID:21245400

  4. Sensitivity analysis in economic evaluation: an audit of NICE current practice and a review of its use and value in decision-making.

    PubMed

    Andronis, L; Barton, P; Bryan, S

    2009-06-01

    To determine how we define good practice in sensitivity analysis in general and probabilistic sensitivity analysis (PSA) in particular, and to what extent it has been adhered to in the independent economic evaluations undertaken for the National Institute for Health and Clinical Excellence (NICE) over recent years; to establish what policy impact sensitivity analysis has in the context of NICE, and policy-makers' views on sensitivity analysis and uncertainty, and what use is made of sensitivity analysis in policy decision-making. Three major electronic databases, MEDLINE, EMBASE and the NHS Economic Evaluation Database, were searched from inception to February 2008. The meaning of 'good practice' in the broad area of sensitivity analysis was explored through a review of the literature. An audit was undertaken of the 15 most recent NICE multiple technology appraisal judgements and their related reports to assess how sensitivity analysis has been undertaken by independent academic teams for NICE. A review of the policy and guidance documents issued by NICE aimed to assess the policy impact of the sensitivity analysis and the PSA in particular. Qualitative interview data from NICE Technology Appraisal Committee members, collected as part of an earlier study, were also analysed to assess the value attached to the sensitivity analysis components of the economic analyses conducted for NICE. All forms of sensitivity analysis, notably both deterministic and probabilistic approaches, have their supporters and their detractors. Practice in relation to univariate sensitivity analysis is highly variable, with considerable lack of clarity in relation to the methods used and the basis of the ranges employed. In relation to PSA, there is a high level of variability in the form of distribution used for similar parameters, and the justification for such choices is rarely given. 
Virtually all analyses failed to consider correlations within the PSA, and this is an area of concern. Uncertainty is considered explicitly in the process of arriving at a decision by the NICE Technology Appraisal Committee, and a correlation between high levels of uncertainty and negative decisions was indicated. The findings suggest considerable value in deterministic sensitivity analysis. Such analyses serve to highlight which model parameters are critical to driving a decision. Strong support was expressed for PSA, principally because it provides an indication of the parameter uncertainty around the incremental cost-effectiveness ratio. The review and the policy impact assessment focused exclusively on documentary evidence, excluding other sources that might have revealed further insights on this issue. In seeking to address parameter uncertainty, both deterministic and probabilistic sensitivity analyses should be used. It is evident that some cost-effectiveness work, especially around the sensitivity analysis components, represents a challenge in making it accessible to those making decisions. This speaks to the training agenda for those sitting on such decision-making bodies, and to the importance of clear presentation of analyses by the academic community.

  5. Shape sensitivity analysis of flutter response of a laminated wing

    NASA Technical Reports Server (NTRS)

    Bergen, Fred D.; Kapania, Rakesh K.

    1988-01-01

    A method is presented for calculating the shape sensitivity of a wing aeroelastic response with respect to changes in geometric shape. Yates' modified strip method is used in conjunction with Giles' equivalent plate analysis to predict the flutter speed, frequency, and reduced frequency of the wing. Three methods are used to calculate the sensitivity of the eigenvalue. The first method is purely a finite difference calculation of the eigenvalue derivative directly from the solution of the flutter problem corresponding to the two different values of the shape parameters. The second method uses an analytic expression for the eigenvalue sensitivities of a general complex matrix, where the derivatives of the aerodynamic, mass, and stiffness matrices are computed using a finite difference approximation. The third method also uses an analytic expression for the eigenvalue sensitivities, but the aerodynamic matrix is computed analytically. All three methods are found to be in good agreement with each other. The sensitivities of the eigenvalues were used to predict the flutter speed, frequency, and reduced frequency. These approximations were found to be in good agreement with those obtained using a complete reanalysis.

  6. Analysis of DNA methylation in Arabidopsis thaliana based on methylation-sensitive AFLP markers.

    PubMed

    Cervera, M T; Ruiz-García, L; Martínez-Zapater, J M

    2002-12-01

    AFLP analysis using restriction enzyme isoschizomers that differ in their sensitivity to methylation of their recognition sites has been used to analyse the methylation state of anonymous CCGG sequences in Arabidopsis thaliana. The technique was modified to improve the quality of fingerprints and to visualise larger numbers of scorable fragments. Sequencing of amplified fragments indicated that detection was generally associated with non-methylation of the cytosine to which the isoschizomer is sensitive. Comparison of EcoRI/HpaII and EcoRI/MspI patterns in different ecotypes revealed that 35-43% of CCGG sites were differentially digested by the isoschizomers. Interestingly, the pattern of digestion among different plants belonging to the same ecotype is highly conserved, with the rate of intra-ecotype methylation-sensitive polymorphisms being less than 1%. However, pairwise comparisons of methylation patterns between samples belonging to different ecotypes revealed differences in up to 34% of the methylation-sensitive polymorphisms. The lack of correlation between inter-ecotype similarity matrices based on methylation-insensitive or methylation-sensitive polymorphisms suggests that whatever the mechanisms regulating methylation may be, they are not related to nucleotide sequence variation.

  7. Sensitivity of the reference evapotranspiration to key climatic variables during the growing season in the Ejina oasis, northwest China.

    PubMed

    Hou, Lan-Gong; Zou, Song-Bing; Xiao, Hong-Lang; Yang, Yong-Gang

    2013-01-01

    The standardized FAO56 Penman-Monteith model, which has proved the most reasonable method under both humid and arid climatic conditions, provides reference evapotranspiration (ETo) estimates for planning and efficient use of agricultural water resources. Sensitivity analysis is important in understanding the relative importance of climatic variables to the variation of reference evapotranspiration. In this study, a non-dimensional relative sensitivity coefficient was employed to predict responses of ETo to perturbations of four climatic variables in the Ejina oasis, northwest China. A 20-year historical dataset of daily air temperature, wind speed, relative humidity and daily sunshine duration in the Ejina oasis was used in the analysis. Results show that daily sensitivity coefficients exhibited large fluctuations during the growing season, and that shortwave radiation was in general the most sensitive variable for the Ejina oasis, followed by air temperature, wind speed and relative humidity. According to this study, the response of ETo to perturbations of air temperature, wind speed, relative humidity and shortwave radiation can be predicted from their sensitivity coefficients.
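
The non-dimensional relative sensitivity coefficient in this record is S_V = (dETo/dV) * (V / ETo). A sketch of computing it by central perturbation, using a simplified Hargreaves-type expression as a stand-in response rather than the full FAO56 Penman-Monteith formulation (the function and input values below are illustrative assumptions):

```python
def relative_sensitivity(f, params, name, delta=0.01):
    """S_V = (dETo/dV) * (V / ETo), via a central +/-1% perturbation of V."""
    base = f(**params)
    up, dn = dict(params), dict(params)
    up[name] *= 1 + delta
    dn[name] *= 1 - delta
    return (f(**up) - f(**dn)) / (2 * delta * base)

def hargreaves(Ra, T, TD):
    """Simplified Hargreaves-type reference ET (mm/day), used here only as a
    stand-in response: ETo = 0.0023 * Ra * (T + 17.8) * sqrt(TD)."""
    return 0.0023 * Ra * (T + 17.8) * TD ** 0.5

p = {"Ra": 30.0, "T": 20.0, "TD": 10.0}
s_Ra = relative_sensitivity(hargreaves, p, "Ra")  # ~1.0 (linear dependence)
s_TD = relative_sensitivity(hargreaves, p, "TD")  # ~0.5 (square-root dependence)
```

Because the coefficient is non-dimensional, it allows a fair ranking of variables measured in different units, which is why it suits multi-variable climate sensitivity studies.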

  8. Adjoint-based sensitivity analysis of low-order thermoacoustic networks using a wave-based approach

    NASA Astrophysics Data System (ADS)

    Aguilar, José G.; Magri, Luca; Juniper, Matthew P.

    2017-07-01

    Strict pollutant emission regulations are pushing gas turbine manufacturers to develop devices that operate in lean conditions, with the downside that combustion instabilities are more likely to occur. Methods to predict and control unstable modes inside combustion chambers have been developed in the last decades but, in some cases, they are computationally expensive. Sensitivity analysis aided by adjoint methods provides valuable sensitivity information at a low computational cost. This paper introduces adjoint methods and their application in wave-based low order network models, which are used as industrial tools, to predict and control thermoacoustic oscillations. Two thermoacoustic models of interest are analyzed. First, in the zero Mach number limit, a nonlinear eigenvalue problem is derived, and continuous and discrete adjoint methods are used to obtain the sensitivities of the system to small modifications. Sensitivities to base-state modification and feedback devices are presented. Second, a more general case with non-zero Mach number, a moving flame front and choked outlet, is presented. The influence of the entropy waves on the computed sensitivities is shown.

  9. Optimization Issues with Complex Rotorcraft Comprehensive Analysis

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Young, Katherine C.; Tarzanin, Frank J.; Hirsh, Joel E.; Young, Darrell K.

    1998-01-01

    This paper investigates the use of the general purpose automatic differentiation (AD) tool called Automatic Differentiation of FORTRAN (ADIFOR) as a means of generating sensitivity derivatives for use in Boeing Helicopter's proprietary comprehensive rotor analysis code (VII). ADIFOR transforms an existing computer program into a new program that performs a sensitivity analysis in addition to the original analysis. In this study both the pros (exact derivatives, no step-size problems) and cons (more CPU, more memory) of ADIFOR are discussed. The size (based on the number of lines) of the VII code after ADIFOR processing increased by 70 percent and resulted in substantial computer memory requirements at execution. The ADIFOR derivatives took about 75 percent longer to compute than the finite-difference derivatives. However, the ADIFOR derivatives are exact and are not functions of step-size. The VII sensitivity derivatives generated by ADIFOR are compared with finite-difference derivatives. The ADIFOR and finite-difference derivatives are used in three optimization schemes to solve a low vibration rotor design problem.
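
The exact-versus-finite-difference contrast at the heart of this record can be shown with a minimal forward-mode AD sketch using dual numbers. This illustrates the general idea behind source-transformation AD tools such as ADIFOR, not ADIFOR itself (which transforms FORTRAN source rather than overloading operators):

```python
class Dual:
    """Minimal forward-mode AD value: v + d*eps, with eps^2 = 0."""
    def __init__(self, v, d=0.0):
        self.v, self.d = v, d
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v + o.v, self.d + o.d)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule carries the derivative alongside the value.
        return Dual(self.v * o.v, self.v * o.d + self.d * o.v)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1  # f'(x) = 6x + 2

exact = f(Dual(2.0, 1.0)).d           # forward-mode AD: exactly 14.0
fd = (f(2.0 + 1e-7) - f(2.0)) / 1e-7  # finite difference: ~14, step-size dependent
```

The AD result is exact to machine precision for any seed derivative, whereas the finite-difference value trades truncation error against round-off as the step size changes, which is the step-size problem the paper cites.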

  10. Perturbation Selection and Local Influence Analysis for Nonlinear Structural Equation Model

    ERIC Educational Resources Information Center

    Chen, Fei; Zhu, Hong-Tu; Lee, Sik-Yum

    2009-01-01

    Local influence analysis is an important statistical method for studying the sensitivity of a proposed model to model inputs. One of its important issues is related to the appropriate choice of a perturbation vector. In this paper, we develop a general method to select an appropriate perturbation vector and a second-order local influence measure…

  11. Heat-Energy Analysis for Solar Receivers

    NASA Technical Reports Server (NTRS)

    Lansing, F. L.

    1982-01-01

    Heat-energy analysis program (HEAP) solves general heat-transfer problems, with some specific features that are "custom made" for analyzing solar receivers. Can be utilized not only to predict receiver performance under varying solar flux, ambient temperature and local heat-transfer rates but also to detect locations of hotspots and metallurgical difficulties and to predict performance sensitivity of neighboring component parameters.

  12. [Screening for cancer - economic consideration and cost-effectiveness].

    PubMed

    Kjellberg, Jakob

    2014-06-09

    Cost-effectiveness analysis has become an accepted method to evaluate medical technology and allocate scarce health-care resources. Published decision analyses show that screening for cancer in general is cost-effective. However, cost-effectiveness analyses are only as good as the clinical data and the results are sensitive to the chosen methods and perspective of the analysis.

  13. Computational mechanics analysis tools for parallel-vector supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.; Nguyen, D. T.; Baddourah, M. A.; Qin, J.

    1993-01-01

    Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigen-solution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization algorithm and domain decomposition. The source code for many of these algorithms is available from NASA Langley.

  14. Neural reactivity to monetary rewards and losses differentiates social from generalized anxiety in children.

    PubMed

    Kessel, Ellen M; Kujawa, Autumn; Hajcak Proudfit, Greg; Klein, Daniel N

    2015-07-01

    The relationship between reward sensitivity and pediatric anxiety is poorly understood. Evidence suggests that alterations in reward processing are more characteristic of depressive than anxiety disorders. However, some studies have reported that anxiety disorders are also associated with perturbations in reward processing. Heterogeneity in the forms of anxiety studied may account for the differences between studies. We used the feedback-negativity, an event-related potential sensitive to monetary gains versus losses (ΔFN), to examine whether different forms of youth anxiety symptoms were uniquely associated with reward sensitivity as indexed by neural reactivity to the receipt of positive and negative monetary outcomes. Participants were 390, eight- to ten-year-old children (175 females) from a large community sample. The ΔFN was measured during a monetary reward task. Self-reports of child anxiety and depression symptoms and temperamental positive emotionality (PE) were obtained. Multiple regression analysis revealed that social anxiety and generalized anxiety symptoms were unique predictors of reward sensitivity after accounting for concurrent depressive symptoms and PE. While social anxiety was associated with a greater ΔFN, generalized anxiety was associated with a reduced ΔFN. Different symptom dimensions of child anxiety are differentially related to alterations in reward sensitivity. This may, in part, explain inconsistent findings in the literature regarding reward processing in anxiety. © 2014 Association for Child and Adolescent Mental Health.

  15. Error modeling and sensitivity analysis of a parallel robot with SCARA (selective compliance assembly robot arm) motions

    NASA Astrophysics Data System (ADS)

    Chen, Yuzhen; Xie, Fugui; Liu, Xinjun; Zhou, Yanhua

    2014-07-01

    Parallel robots with SCARA (selective compliance assembly robot arm) motions are widely used for high-speed pick-and-place manipulation. Error modeling for these robots generally simplifies the parallelogram structures they contain to a single link. Because such an error model fails to reflect the error features of the parallelogram structures, the effectiveness of accuracy design and kinematic calibration based on it is undermined. An error modeling methodology is proposed to establish an error model of parallel robots with parallelogram structures. The error model can embody the geometric errors of all joints, including those of the parallelogram structures, and thus captures more exhaustively the factors that reduce the accuracy of the robot. Based on the error model and some sensitivity indices defined in the statistical sense, a sensitivity analysis is carried out. Accordingly, atlases are depicted to express each geometric error's influence on the moving platform's pose errors. From these atlases, the geometric errors that have greater impact on the accuracy of the moving platform are identified, and sensitive areas, where the pose errors of the moving platform are extremely sensitive to the geometric errors, are also identified. By taking into account error factors that are generally neglected in existing modeling methods, the proposed method thoroughly discloses the process of error transmission and enhances the efficacy of accuracy design and calibration.

  16. Shape optimization using a NURBS-based interface-enriched generalized FEM

    DOE PAGES

    Najafi, Ahmad R.; Safdari, Masoud; Tortorelli, Daniel A.; ...

    2016-11-26

    This study presents a gradient-based shape optimization over a fixed mesh using a non-uniform rational B-splines-based interface-enriched generalized finite element method, applicable to multi-material structures. In the proposed method, non-uniform rational B-splines are used to parameterize the design geometry precisely and compactly by a small number of design variables. An analytical shape sensitivity analysis is developed to compute derivatives of the objective and constraint functions with respect to the design variables. Subtle but important new terms involve the sensitivity of shape functions and their spatial derivatives. Finally, verification and illustrative problems are solved to demonstrate the precision and capability of the method.

  17. Structural optimization: Status and promise

    NASA Astrophysics Data System (ADS)

    Kamat, Manohar P.

    Chapters contained in this book include fundamental concepts of optimum design, mathematical programming methods for constrained optimization, function approximations, approximate reanalysis methods, dual mathematical programming methods for constrained optimization, a generalized optimality criteria method, and a tutorial and survey of multicriteria optimization in engineering. Also included are chapters on the compromise decision support problem and the adaptive linear programming algorithm, sensitivity analyses of discrete and distributed systems, the design sensitivity analysis of nonlinear structures, optimization by decomposition, mixed elements in shape sensitivity analysis of structures based on local criteria, and optimization of stiffened cylindrical shells subjected to destabilizing loads. Other chapters are on applications to fixed-wing aircraft and spacecraft, integrated optimum structural and control design, modeling concurrency in the design of composite structures, and tools for structural optimization. (No individual items are abstracted in this volume)

  18. An easily implemented static condensation method for structural sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gangadharan, S. N.; Haftka, R. T.; Nikolaidis, E.

    1990-01-01

    A black-box approach to static condensation for sensitivity analysis is presented, with illustrative examples of a cube and a car structure. The sensitivity of the structural response with respect to a joint stiffness parameter is calculated using the direct method and forward-difference and central-difference schemes. The efficiency of the various methods for identifying joint stiffness parameters from measured static deflections of these structures is compared. The results indicate that the use of static condensation can reduce computation times significantly and that the black-box approach is only slightly less efficient than the standard implementation of static condensation. The ease of implementation of the black-box approach recommends it for use with general-purpose finite element codes that do not have a built-in facility for static condensation.
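
    The Guyan-type reduction behind static condensation can be sketched in a few lines; the partitioning, stiffness matrix, and load vector below are illustrative stand-ins, not taken from the paper:

```python
import numpy as np

# Illustrative static (Guyan) condensation for K u = f.
# Partition DOFs into masters (m) and slaves (s); condense out the slaves.
rng = np.random.default_rng(0)
n, m_idx, s_idx = 6, [0, 1, 2], [3, 4, 5]

A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)          # symmetric positive definite "stiffness"
f = rng.standard_normal(n)

Kmm = K[np.ix_(m_idx, m_idx)]
Kms = K[np.ix_(m_idx, s_idx)]
Ksm = K[np.ix_(s_idx, m_idx)]
Kss = K[np.ix_(s_idx, s_idx)]

# Condensed system: (Kmm - Kms Kss^-1 Ksm) u_m = f_m - Kms Kss^-1 f_s
Kc = Kmm - Kms @ np.linalg.solve(Kss, Ksm)
fc = f[m_idx] - Kms @ np.linalg.solve(Kss, f[s_idx])

u_full = np.linalg.solve(K, f)
u_m = np.linalg.solve(Kc, fc)
print(np.allclose(u_m, u_full[m_idx]))   # → True
```

    For linear statics the condensation is exact, which is why the master-DOF solution matches the full solve; the savings come from repeatedly solving only the smaller condensed system as the joint parameters change.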

  19. Specific conditions of distress in the dental situation.

    PubMed

    Hentschel, U; Allander, L; Winholt, A S

    1977-01-01

    The general feeling of distress in the dental situation was studied in 60 female dental patients and correlated with the following variables: experimentally evaluated sensitivity to pain, self-rating and the dentist's rating of sensitivity to pain, the pain-threshold value in the teeth, the need for local anesthesia, extraversion-introversion, neuroticism, and some percept-genetic psychological measures of adaptive behavior. The subjects also answered a questionnaire grading their distress with regard to different aspects of the treatment situation, which were combined into eight groups using factor analysis and then correlated with the general distress. The variables having a significant relation to distress in the dental situation were: the dentist's rating of the patient's sensitivity, the need for anesthesia, four groups of treatment components, and two of the percept-genetic measures. There was also a certain relation to the pain threshold in the teeth.

  20. Exploring the impact of forcing error characteristics on physically based snow simulations within a global sensitivity analysis framework

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2015-07-01

    Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
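
    As background, the Sobol' first-order index that underpins such an analysis can be estimated with a pick-freeze scheme; the toy model and sample sizes below are invented for illustration and have nothing to do with the Utah Energy Balance setup (for Y = 2·X1 + X2 with uniform inputs, the exact indices are 0.8 and 0.2):

```python
import numpy as np

# Pick-freeze estimate of first-order Sobol' indices S_i for a toy model.
rng = np.random.default_rng(1)
N, d = 200_000, 2
A = rng.random((N, d))
B = rng.random((N, d))

def model(X):
    return 2.0 * X[:, 0] + X[:, 1]

yA = model(A)
var_y = yA.var()
S = []
for i in range(d):
    ABi = B.copy()
    ABi[:, i] = A[:, i]              # "freeze" column i from sample A
    # E[yA * yABi] - E[yA * yB] estimates the partial variance V_i
    S.append(np.mean(yA * (model(ABi) - model(B))) / var_y)

print([round(v, 1) for v in S])      # → [0.8, 0.2]
```

    The same machinery scales to many inputs, which is what makes a global analysis over coexisting forcing errors tractable.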

  1. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE PAGES

    Dai, Heng; Ye, Ming; Walker, Anthony P.; ...

    2017-03-28

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
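
    The idea of combining model averaging with variance decomposition can be sketched as a between-model/within-model variance split; the weights and model outputs below are invented, and this is only a schematic reading of such an index, not the authors' implementation:

```python
import numpy as np

# Toy "process sensitivity": fraction of total output variance explained by
# the choice of process model, PS = Var_K(E[Y | model K]) / Var(Y),
# with model weights w_K. Weights and output distributions are made up.
w = np.array([0.6, 0.4])                    # prior weights of two recharge models
rng = np.random.default_rng(2)
y_model = [rng.normal(10.0, 1.0, 50_000),   # outputs of recharge model 1
           rng.normal(13.0, 2.0, 50_000)]   # outputs of recharge model 2

means = np.array([y.mean() for y in y_model])
variances = np.array([y.var() for y in y_model])

grand_mean = w @ means
between = w @ (means - grand_mean) ** 2     # variance due to model choice
within = w @ variances                      # parametric variance within models
PS = between / (between + within)
print(round(PS, 1))                         # → 0.5
```

    A PS near 1 would say the competing process hypotheses dominate output uncertainty; near 0, the parameters within each model dominate.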

  2. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Ye, Ming; Walker, Anthony P.

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.

  3. LSENS, The NASA Lewis Kinetics and Sensitivity Analysis Code

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    2000-01-01

    A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS (the NASA Lewis kinetics and sensitivity analysis code), are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include: static system; steady, one-dimensional, inviscid flow; incident-shock initiated reaction in a shock tube; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method (LSODE, the Livermore Solver for Ordinary Differential Equations), which works efficiently for the extremes of very fast and very slow reactions, is used to solve the "stiff" ordinary differential equation systems that arise in chemical kinetics. For static reactions, the code uses the decoupled direct method to calculate sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters. Solution methods for the equilibrium and post-shock conditions and for perfectly stirred reactor problems are either adapted from or based on the procedures built into the NASA code CEA (Chemical Equilibrium and Applications).
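
    The direct method for sensitivity coefficients can be illustrated on the simplest kinetics problem; this sketch (a single first-order reaction integrated with backward Euler, state first and sensitivity second) is an assumption-laden miniature, not LSENS or LSODE code:

```python
import math

# Direct-method sketch for A -> products: dy/dt = -k*y, y(0) = 1.
# The sensitivity s = dy/dk satisfies ds/dt = -y - k*s (differentiate in k).
# Backward Euler mirrors the implicit integration used for stiff kinetics;
# the state is advanced first, then the sensitivity reuses the new state.
k, t_end, n_steps = 2.0, 1.0, 10_000
dt = t_end / n_steps
y, s = 1.0, 0.0
for _ in range(n_steps):
    y = y / (1.0 + k * dt)                 # implicit step for the state
    s = (s - dt * y) / (1.0 + k * dt)      # implicit step for the sensitivity

exact_y = math.exp(-k * t_end)             # analytic solution e^(-kt)
exact_s = -t_end * math.exp(-k * t_end)    # analytic dy/dk = -t e^(-kt)
print(abs(y - exact_y) < 1e-3, abs(s - exact_s) < 1e-3)  # → True True
```

    Solving the sensitivity equations after each state step, rather than as one coupled system, is the "decoupled" aspect referred to in the abstract.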

  4. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST)

    PubMed Central

    Xu, Chonggang; Gertner, George

    2013-01-01

    The Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, FAST analysis has mainly been confined to estimating the partial variances contributed by the main effects of model parameters, and has not allowed for those contributed by specific interactions among parameters. In this paper, we show theoretically that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to the variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors under different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037
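
    A minimal version of the traditional search-curve sampling can be written down directly; the frequencies, sample count, and toy model below are illustrative choices, not the paper's setup (for Y = X1 + 0.5·X2 with uniform inputs, the exact first-order indices are 0.8 and 0.2):

```python
import numpy as np

# Minimal FAST sketch: search-curve sampling plus Fourier decomposition.
omega = np.array([11, 35])           # interference-free integer frequencies
N = 4097                             # odd number of samples over one period
s = np.pi * (2 * np.arange(N) - N + 1) / N        # s in (-pi, pi)
X = 0.5 + np.arcsin(np.sin(np.outer(s, omega))) / np.pi   # search curve
Y = X[:, 0] + 0.5 * X[:, 1]

def fourier_power(y, w, M=4):
    # Partial variance at frequency w and its first M harmonics.
    total = 0.0
    for p in range(1, M + 1):
        A = 2.0 / N * np.sum(y * np.cos(p * w * s))
        B = 2.0 / N * np.sum(y * np.sin(p * w * s))
        total += 0.5 * (A * A + B * B)
    return total

V = Y.var()
S = [fourier_power(Y, w) / V for w in omega]
print([round(v, 2) for v in S])      # → [0.8, 0.2]
```

    The slight underestimation mentioned in the abstract is visible here in principle: truncating at M harmonics discards a small tail of each parameter's partial variance.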

  5. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST).

    PubMed

    Xu, Chonggang; Gertner, George

    2011-01-01

    The Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, FAST analysis has mainly been confined to estimating the partial variances contributed by the main effects of model parameters, and has not allowed for those contributed by specific interactions among parameters. In this paper, we show theoretically that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to the variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors under different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements.

  6. Maintaining gender sensitivity in the family practice: facilitators and barriers.

    PubMed

    Celik, Halime; Lagro-Janssen, Toine; Klinge, Ineke; van der Weijden, Trudy; Widdershoven, Guy

    2009-12-01

    This study aims to identify the facilitators and barriers perceived by General Practitioners (GPs) in maintaining a gender perspective in family practice. Nine semi-structured interviews were conducted with nine pairs of GPs. The data were analysed by means of deductive content analysis, using theory-based methods to generate facilitators and barriers to gender sensitivity. Gender sensitivity in family practice can be influenced by several factors, which ultimately determine the extent to which a gender-sensitive approach is satisfactorily practiced by GPs in the doctor-patient relationship. Gender awareness, repetition and reminders, motivation triggers and professional guidelines were found to facilitate gender sensitivity. On the other hand, a lack of skills and routines, scepticism, heavy workload and the timing of implementation were found to be barriers to gender sensitivity. While the potential effect of each factor affecting gender sensitivity in family practice has been elucidated, the effects of the interplay between these factors still need to be determined.

  7. Emerging spectra of singular correlation matrices under small power-map deformations

    NASA Astrophysics Data System (ADS)

    Vinayak; Schäfer, Rudi; Seligman, Thomas H.

    2013-09-01

    Correlation matrices are a standard tool in the analysis of the time evolution of complex systems in general and financial markets in particular. Yet most analyses assume stationarity of the underlying time series. This tends to be an assumption of varying and often dubious validity. The validity of the assumption improves as shorter time series are used. If many time series are used, this implies an analysis of highly singular correlation matrices. We attack this problem by using the so-called power map, which was introduced to reduce noise. Its nonlinearity breaks the degeneracy of the zero eigenvalues, and we analyze the sensitivity of the emerging spectra to correlations. This sensitivity is demonstrated for uncorrelated and correlated Wishart ensembles.
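
    The power map itself is a one-line elementwise deformation; the ensemble size, series length, and deformation parameter below are invented for illustration:

```python
import numpy as np

# Power map sketch: C_ij -> sign(C_ij) * |C_ij|**(1 + eps).
# With T < N observations the sample correlation matrix is singular; a small
# deformation eps > 0 lifts the degeneracy of the zero eigenvalues.
rng = np.random.default_rng(3)
N, T, eps = 30, 10, 0.1
data = rng.standard_normal((N, T))
C = np.corrcoef(data)                    # rank <= T - 1, hence singular

C_eps = np.sign(C) * np.abs(C) ** (1.0 + eps)   # elementwise power map

eigs = np.linalg.eigvalsh(C)
eigs_eps = np.linalg.eigvalsh(C_eps)
n_zero = int(np.sum(np.abs(eigs) < 1e-10))        # numerically zero modes
n_zero_eps = int(np.sum(np.abs(eigs_eps) < 1e-10))
print(n_zero, n_zero_eps)                # degenerate count before vs after
```

    The lifted eigenvalues form the "emerging spectrum" whose shape the paper then relates to the true correlations.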

  8. Emerging spectra of singular correlation matrices under small power-map deformations.

    PubMed

    Vinayak; Schäfer, Rudi; Seligman, Thomas H

    2013-09-01

    Correlation matrices are a standard tool in the analysis of the time evolution of complex systems in general and financial markets in particular. Yet most analyses assume stationarity of the underlying time series. This tends to be an assumption of varying and often dubious validity. The validity of the assumption improves as shorter time series are used. If many time series are used, this implies an analysis of highly singular correlation matrices. We attack this problem by using the so-called power map, which was introduced to reduce noise. Its nonlinearity breaks the degeneracy of the zero eigenvalues, and we analyze the sensitivity of the emerging spectra to correlations. This sensitivity is demonstrated for uncorrelated and correlated Wishart ensembles.

  9. Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2016-01-01

    An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
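
    The cost argument can be made concrete on a linear model problem; the system below is a toy stand-in for a CFD simulation, with an invented parameterization of the system matrix:

```python
import numpy as np

# Adjoint sketch for J(u) = g^T u subject to the state equation A(p) u = f.
# One adjoint solve A^T lam = g yields dJ/dp_i for every parameter p_i,
# versus one extra forward solve per parameter.
rng = np.random.default_rng(4)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned toy system
f = rng.standard_normal(n)
g = rng.standard_normal(n)

u = np.linalg.solve(A, f)
lam = np.linalg.solve(A.T, g)            # the single adjoint solve

# Suppose p_k perturbs entry A[k, k]: dA/dp_k = e_k e_k^T, df/dp_k = 0.
# Then dJ/dp_k = -lam^T (dA/dp_k) u = -lam[k] * u[k], for all k at once.
dJ = -lam * u

# Finite-difference check on one parameter
h, k = 1e-6, 2
A2 = A.copy(); A2[k, k] += h
dJ_fd = (g @ np.linalg.solve(A2, f) - g @ u) / h
print(abs(dJ[k] - dJ_fd) < 1e-5)         # → True
```

    The vector `dJ` holds derivatives with respect to all n diagonal parameters from two solves total, which is the scaling property the abstract emphasizes.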

  10. Structural synthesis: Precursor and catalyst

    NASA Technical Reports Server (NTRS)

    Schmit, L. A.

    1984-01-01

    More than twenty-five years have elapsed since it was recognized that a rather general class of structural design optimization tasks could be properly posed as an inequality constrained minimization problem. It is suggested that, independent of primary discipline area, it will be useful to think about: (1) posing design problems in terms of an objective function and inequality constraints; (2) generating design oriented approximate analysis methods (giving special attention to behavior sensitivity analysis); (3) distinguishing between decisions that lead to an analysis model and those that lead to a design model; (4) finding ways to generate a sequence of approximate design optimization problems that capture the essential characteristics of the primary problem, while still having an explicit algebraic form that is matched to one or more of the established optimization algorithms; (5) examining the potential of optimum design sensitivity analysis to facilitate quantitative trade-off studies as well as participation in multilevel design activities. It should be kept in mind that multilevel methods are inherently well suited to a parallel mode of operation in computer terms or to a division of labor between task groups in organizational terms. Based on structural experience with multilevel methods, general guidelines are suggested.

  11. Improving the analysis of slug tests

    USGS Publications Warehouse

    McElwee, C.D.

    2002-01-01

    This paper examines several techniques that have the potential to improve the quality of slug test analysis. These techniques are applicable in the range from low hydraulic conductivities with overdamped responses to high hydraulic conductivities with nonlinear oscillatory responses. Four techniques for improving slug test analysis will be discussed: use of an extended capability nonlinear model, sensitivity analysis, correction for acceleration and velocity effects, and use of multiple slug tests. The four-parameter nonlinear slug test model used in this work is shown to allow accurate analysis of slug tests with widely differing character. The parameter β represents a correction to the water column length caused primarily by radius variations in the wellbore and is most useful in matching the oscillation frequency and amplitude. The water column velocity at slug initiation (V0) is an additional model parameter, which would ideally be zero but may not be due to the initiation mechanism. The remaining two model parameters are A (parameter for nonlinear effects) and K (hydraulic conductivity). Sensitivity analysis shows that in general β and V0 have the lowest sensitivity and K usually has the highest. However, for very high K values the sensitivity to A may surpass the sensitivity to K. Oscillatory slug tests involve higher accelerations and velocities of the water column; thus, the pressure transducer responses are affected by these factors and the model response must be corrected to allow maximum accuracy for the analysis. The performance of multiple slug tests will allow some statistical measure of the experimental accuracy and of the reliability of the resulting aquifer parameters. © 2002 Elsevier Science B.V. All rights reserved.

  12. DGSA: A Matlab toolbox for distance-based generalized sensitivity analysis of geoscientific computer experiments

    NASA Astrophysics Data System (ADS)

    Park, Jihoon; Yang, Guang; Satija, Addy; Scheidt, Céline; Caers, Jef

    2016-12-01

    Sensitivity analysis plays an important role in geoscientific computer experiments, whether for forecasting, data assimilation or model calibration. In this paper we focus on an extension of a method of regionalized sensitivity analysis (RSA) to applications typical in the Earth Sciences. Such applications involve the building of large complex spatial models, the application of computationally extensive forward modeling codes and the integration of heterogeneous sources of model uncertainty. The aim of this paper is to be practical: 1) provide a Matlab code, 2) provide novel visualization methods to aid users in getting a better understanding of the sensitivity, 3) provide a method based on kernel principal component analysis (KPCA) and self-organizing maps (SOM) to account for spatial uncertainty typical in Earth Science applications, and 4) provide an illustration on a real field case where the above-mentioned complexities present themselves. We present methods that extend the original RSA method in several ways. First, we present the calculation of conditional effects, defined as the sensitivity of a parameter given a level of another parameter. Second, we show how this conditional effect can be used to choose nominal values or ranges for fixing insensitive parameters while minimally affecting uncertainty in the response. Third, we develop a method based on KPCA and SOM to assign a rank to spatial models in order to calculate the sensitivity to spatial variability in the models. A large oil/gas reservoir case is used as an illustration of these ideas.
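
    The core of regionalized sensitivity analysis, clustering responses and comparing parameter distributions across clusters, can be sketched without the KPCA/SOM machinery; the toy model, clustering rule, and distance choice below are illustrative assumptions:

```python
import numpy as np

# Distance-based (regionalized) sensitivity sketch: split model runs into
# response clusters, then measure how far each parameter's class-conditional
# CDF deviates from its prior CDF. Larger distance -> more sensitive parameter.
rng = np.random.default_rng(5)
n = 5_000
x1, x2 = rng.random(n), rng.random(n)
y = 3.0 * x1 + 0.1 * x2 + 0.05 * rng.standard_normal(n)

classes = y > np.median(y)                # two response clusters

def cdf_distance(x, mask, grid=np.linspace(0, 1, 101)):
    # KS-style distance between conditional and unconditional empirical CDFs
    full = np.searchsorted(np.sort(x), grid) / x.size
    cond = np.searchsorted(np.sort(x[mask]), grid) / mask.sum()
    return np.max(np.abs(full - cond))

d1 = cdf_distance(x1, classes)
d2 = cdf_distance(x2, classes)
print(d1 > d2)                            # x1 dominates the response
```

    The DGSA toolbox generalizes exactly this step: the clusters come from distances between full spatial responses, and the CDF distances are tested for statistical significance.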

  13. Monolayer Graphene Bolometer as a Sensitive Far-IR Detector

    NASA Technical Reports Server (NTRS)

    Karasik, Boris S.; McKitterick, Christopher B.; Prober, Daniel E.

    2014-01-01

    In this paper we give a detailed analysis of the expected sensitivity and operating conditions in the power detection mode of a hot-electron bolometer (HEB) made from a few μm² of monolayer graphene (MLG) flake, which can be embedded into either a planar antenna or waveguide circuit via NbN (or NbTiN) superconducting contacts with critical temperature of approximately 14 K. Recent data on the strength of the electron-phonon coupling are used in the present analysis, and the contribution of the readout noise to the Noise Equivalent Power (NEP) is explicitly computed. The readout scheme utilizes Johnson Noise Thermometry (JNT), allowing for Frequency-Domain Multiplexing (FDM) using narrowband filter coupling of the HEBs. In general, the filter bandwidth and the summing amplifier noise have a significant effect on the overall system sensitivity.

  14. Analysis of Publically Available Skin Sensitization Data from REACH Registrations 2008–2014

    PubMed Central

    Luechtefeld, Thomas; Maertens, Alexandra; Russo, Daniel P.; Rovida, Costanza; Zhu, Hao; Hartung, Thomas

    2017-01-01

    Summary The public data on skin sensitization from REACH registrations already included 19,111 studies on skin sensitization in December 2014, making it the largest repository of such data so far (1,470 substances with mouse LLNA, 2,787 with GPMT, 762 with both in vivo and in vitro data and 139 with only in vitro data). 21% were classified as sensitizers. The extracted skin sensitization data were analyzed to identify relationships in skin sensitization guidelines, visualize structural relationships of sensitizers, and build models to predict sensitization. A chemical with molecular weight > 500 Da is generally considered non-sensitizing owing to low bioavailability, but 49 sensitizing chemicals with a molecular weight > 500 Da were found. A chemical similarity map was produced using PubChem's 2D Tanimoto similarity metric and Gephi force layout visualization. Nine clusters of chemicals were identified by Blondel's module recognition algorithm, revealing wide module-dependent variation. Approximately 31% of mapped chemicals are Michael acceptors, but this alone does not imply skin sensitization. A simple sensitization model using molecular weight and five ToxTree structural alerts showed a balanced accuracy of 65.8% (specificity 80.4%, sensitivity 51.4%), demonstrating that structural alerts have information value. A simple variant of k-nearest neighbors outperformed the ToxTree approach even at a 75% similarity threshold (82% balanced accuracy at a 0.95 threshold). At higher thresholds, the balanced accuracy increased. Lower similarity thresholds decrease sensitivity faster than specificity. This analysis scopes the landscape of chemical skin sensitization, demonstrating the value of large public datasets for health hazard prediction. PMID:26863411
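
    A similarity-threshold nearest-neighbour classifier of the kind described can be sketched with Tanimoto similarity on bit vectors; the fingerprints, labels, and voting rule below are invented (the study used PubChem 2D fingerprints on thousands of substances):

```python
import numpy as np

# Toy similarity-threshold k-NN for sensitization labels on bit fingerprints.
def tanimoto(a, b):
    inter = int(np.sum(a & b))
    union = int(np.sum(a | b))
    return inter / union if union else 0.0

train = np.array([[1, 1, 0, 0, 1],
                  [1, 1, 1, 0, 1],
                  [0, 0, 1, 1, 0],
                  [0, 1, 1, 1, 0]], dtype=bool)
labels = np.array([1, 1, 0, 0])            # 1 = sensitizer

def predict(fp, threshold=0.5):
    sims = np.array([tanimoto(fp, t) for t in train])
    keep = sims >= threshold
    if not keep.any():
        return None                        # abstain below the threshold
    # similarity-weighted majority vote among neighbours above the threshold
    return int(np.average(labels[keep], weights=sims[keep]) >= 0.5)

query = np.array([1, 1, 0, 0, 0], dtype=bool)
print(predict(query))                      # → 1
```

    Raising the threshold shrinks the set of queries with any neighbour at all, which is the coverage-versus-accuracy trade-off behind the reported threshold dependence.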

  15. Systematic review and meta-analysis of the performance of clinical risk assessment instruments for screening for osteoporosis or low bone density

    PubMed Central

    Edwards, D. L.; Saleh, A. A.; Greenspan, S. L.

    2015-01-01

    Summary We performed a systematic review and meta-analysis of the performance of clinical risk assessment instruments for screening for DXA-determined osteoporosis or low bone density. Commonly evaluated risk instruments showed high sensitivity approaching or exceeding 90 % at particular thresholds within various populations but low specificity at thresholds required for high sensitivity. Simpler instruments, such as OST, generally performed as well as or better than more complex instruments. Introduction The purpose of the study is to systematically review the performance of clinical risk assessment instruments for screening for dual-energy X-ray absorptiometry (DXA)-determined osteoporosis or low bone density. Methods Systematic review and meta-analysis were performed. Multiple literature sources were searched, and data extracted and analyzed from included references. Results One hundred eight references met inclusion criteria. Studies assessed many instruments in 34 countries, most commonly the Osteoporosis Self-Assessment Tool (OST), the Simple Calculated Osteoporosis Risk Estimation (SCORE) instrument, the Osteoporosis Self-Assessment Tool for Asians (OSTA), the Osteoporosis Risk Assessment Instrument (ORAI), and body weight criteria. Meta-analyses of studies evaluating OST using a cutoff threshold of <1 to identify US postmenopausal women with osteoporosis at the femoral neck provided summary sensitivity and specificity estimates of 89 % (95%CI 82–96 %) and 41 % (95%CI 23–59 %), respectively. Meta-analyses of studies evaluating OST using a cutoff threshold of 3 to identify US men with osteoporosis at the femoral neck, total hip, or lumbar spine provided summary sensitivity and specificity estimates of 88 % (95%CI 79–97 %) and 55 % (95%CI 42–68 %), respectively. Frequently evaluated instruments each had thresholds and populations for which sensitivity for osteoporosis or low bone mass detection approached or exceeded 90 % but always with a trade-off of relatively low specificity. Conclusions Commonly evaluated clinical risk assessment instruments each showed high sensitivity approaching or exceeding 90 % for identifying individuals with DXA-determined osteoporosis or low BMD at certain thresholds in different populations but low specificity at thresholds required for high sensitivity. Simpler instruments, such as OST, generally performed as well as or better than more complex instruments. PMID:25644147
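
    The sensitivity/specificity trade-off at a given OST cutoff can be reproduced on toy data; the patients below are invented, though the OST formula (0.2 × (weight in kg − age in years)) and the < 1 cutoff follow the instrument as described in the abstract:

```python
# Sensitivity and specificity of a threshold screening rule, OST-style.
# Patient records (weight_kg, age_yr, has_osteoporosis) are invented.
patients = [
    (52, 78, True), (60, 70, True), (75, 55, True), (58, 82, True),
    (80, 60, False), (70, 50, False), (90, 65, False), (65, 45, False),
]

def ost(weight, age):
    return 0.2 * (weight - age)   # Osteoporosis Self-Assessment Tool index

cutoff = 1.0                      # screen positive (refer to DXA) below this
tp = sum(ost(w, a) < cutoff for w, a, d in patients if d)
fn = sum(ost(w, a) >= cutoff for w, a, d in patients if d)
tn = sum(ost(w, a) >= cutoff for w, a, d in patients if not d)
fp = sum(ost(w, a) < cutoff for w, a, d in patients if not d)

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(sensitivity, specificity)   # → 0.75 1.0
```

    Sliding the cutoff upward raises sensitivity at the cost of specificity, which is exactly the pattern the meta-analysis reports across thresholds and populations.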

  16. Global Sensitivity Analysis for Process Identification under Model Uncertainty

    NASA Astrophysics Data System (ADS)

    Ye, M.; Dai, H.; Walker, A. P.; Shi, L.; Yang, J.

    2015-12-01

    Environmental systems consist of various physical, chemical, and biological processes, and environmental models are built to simulate these processes and their interactions. For model building, improvement, and validation, it is necessary to identify important processes so that limited resources can be used to better characterize them. While global sensitivity analysis has been widely used to identify important processes, the identification is always based on a deterministic process conceptualization that uses a single model to represent each process. However, environmental systems are complex, and it often happens that a single process may be simulated by multiple alternative models. Ignoring this model uncertainty in process identification may lead to biased identification, in that processes identified as important may not be so in the real world. This study addresses the problem by developing a new method of global sensitivity analysis for process identification. The new method is based on the concepts of Sobol sensitivity analysis and model averaging. Similar to the Sobol sensitivity analysis used to identify important parameters, our new method evaluates the variance change when a process is fixed at its different conceptualizations. The variance accounts for both parametric and model uncertainty using the method of model averaging. The method is demonstrated with a synthetic groundwater modeling study that considers a recharge process and a parameterization process, each with two alternative models. Important processes of groundwater flow and transport are evaluated using our new method. The method is mathematically general and can be applied to a wide range of environmental problems.
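    The variance-based reasoning above follows the Sobol framework. As an illustration of the underlying estimator only (a toy two-input model, not the paper's model-averaged groundwater setting), a Saltelli-style first-order index can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy model: the first input dominates the output variance.
    return 4.0 * x[:, 0] + 0.5 * x[:, 1]

n, d = 100_000, 2
A = rng.uniform(size=(n, d))   # two independent sample matrices
B = rng.uniform(size=(n, d))

def first_order_index(i):
    # Saltelli estimator: A with its i-th column taken from B.
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    yA, yB, yABi = model(A), model(B), model(ABi)
    return np.mean(yB * (yABi - yA)) / np.var(np.concatenate([yA, yB]))

print([round(first_order_index(i), 2) for i in range(d)])
```

    For this toy model the analytic indices are 16/16.25 ≈ 0.98 and 0.25/16.25 ≈ 0.02, so the first input is correctly identified as dominant.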

  17. Sensitivity analysis to assess the influence of the inertial properties of railway vehicle bodies on the vehicle's dynamic behaviour

    NASA Astrophysics Data System (ADS)

    Suarez, Berta; Felez, Jesus; Maroto, Joaquin; Rodriguez, Pablo

    2013-02-01

    A sensitivity analysis has been performed to assess the influence of the inertial properties of railway vehicles on their dynamic behaviour. To do this, 216 dynamic simulations were performed, modifying, one at a time, the masses, moments of inertia and heights of the centre of gravity of the carbody, the bogie and the wheelset. Three values were assigned to each parameter, corresponding to the 10th, 50th and 90th percentiles of a data set stored in a database of railway vehicles. After processing the results of these simulations, the analysed parameters were sorted by increasing influence. The analysis also identified which of these parameters could be estimated with less accuracy in future simulations without appreciably affecting the simulation results. In general terms, it was concluded that the most sensitive inertial properties are the mass and the vertical moment of inertia, and the least sensitive ones are the longitudinal and lateral moments of inertia.
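    The one-at-a-time percentile design described above can be sketched as follows; the response function and percentile values are hypothetical stand-ins for the vehicle-dynamics simulations:

```python
# One-at-a-time (OAT) sensitivity sketch: vary each parameter over its
# 10th/90th percentile while holding the others at their medians, then
# rank parameters by the spread of a (hypothetical) response metric.
def response(mass, inertia_z, cg_height):
    # Placeholder for a vehicle-dynamics simulation output.
    return 0.8 * mass + 0.3 * inertia_z + 0.1 * cg_height

percentiles = {          # hypothetical p10/p50/p90 values from a database
    "mass": (20.0, 32.0, 45.0),
    "inertia_z": (15.0, 25.0, 40.0),
    "cg_height": (1.2, 1.6, 2.1),
}
medians = {k: v[1] for k, v in percentiles.items()}

spread = {}
for name, (p10, _, p90) in percentiles.items():
    lo = response(**{**medians, name: p10})
    hi = response(**{**medians, name: p90})
    spread[name] = abs(hi - lo)

# Sort parameters by increasing influence (least influential first).
print(sorted(spread, key=spread.get))
```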

  18. MRI correlates of general intelligence in neurotypical adults.

    PubMed

    Malpas, Charles B; Genc, Sila; Saling, Michael M; Velakoulis, Dennis; Desmond, Patricia M; O'Brien, Terence J

    2016-02-01

    There is growing interest in the neurobiological substrate of general intelligence. Psychometric estimates of general intelligence are reduced in a range of neurological disorders, leading to practical application as sensitive, but non-specific, markers of cerebral disorder. This study examined estimates of general intelligence in neurotypical adults using diffusion tensor imaging and resting-state functional connectivity analysis. General intelligence was related to white matter organisation across multiple brain regions, confirming previous work in older healthy adults. We also found that variation in general intelligence was related to a large functional sub-network involving all cortical lobes of the brain. These findings confirm that individual variance in general intelligence is related to diffusely represented brain networks. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Integrating aerodynamics and structures in the minimum weight design of a supersonic transport wing

    NASA Technical Reports Server (NTRS)

    Barthelemy, Jean-Francois M.; Wrenn, Gregory A.; Dovi, Augustine R.; Coen, Peter G.; Hall, Laura E.

    1992-01-01

    An approach is presented for determining the minimum weight design of aircraft wing models which takes into consideration aerodynamics-structure coupling when calculating both zeroth order information needed for analysis and first order information needed for optimization. When performing sensitivity analysis, coupling is accounted for by using a generalized sensitivity formulation. The results presented show that the aeroelastic effects are calculated properly and noticeably reduce constraint approximation errors. However, for the particular example selected, the error introduced by ignoring aeroelastic effects is not sufficient to significantly affect the convergence of the optimization process. Trade studies are reported that consider different structural materials, internal spar layouts, and panel buckling lengths. For the formulation, model and materials used in this study, an advanced aluminum material produced the lightest design while satisfying the problem constraints. Also, shorter panel buckling lengths resulted in lower weights by permitting smaller panel thicknesses and, generally, by unloading the wing skins and loading the spar caps. Finally, straight spars required slightly lower wing weights than angled spars.

  20. Systematic Review and Meta-Analysis of Studies Evaluating Diagnostic Test Accuracy: A Practical Review for Clinical Researchers-Part II. Statistical Methods of Meta-Analysis

    PubMed Central

    Lee, Juneyoung; Kim, Kyung Won; Choi, Sang Hyun; Huh, Jimi

    2015-01-01

    Meta-analysis of diagnostic test accuracy studies differs from the usual meta-analysis of therapeutic/interventional studies in that, it is required to simultaneously analyze a pair of two outcome measures such as sensitivity and specificity, instead of a single outcome. Since sensitivity and specificity are generally inversely correlated and could be affected by a threshold effect, more sophisticated statistical methods are required for the meta-analysis of diagnostic test accuracy. Hierarchical models including the bivariate model and the hierarchical summary receiver operating characteristic model are increasingly being accepted as standard methods for meta-analysis of diagnostic test accuracy studies. We provide a conceptual review of statistical methods currently used and recommended for meta-analysis of diagnostic test accuracy studies. This article could serve as a methodological reference for those who perform systematic review and meta-analysis of diagnostic test accuracy studies. PMID:26576107

  1. On determining important aspects of mathematical models: Application to problems in physics and chemistry

    NASA Technical Reports Server (NTRS)

    Rabitz, Herschel

    1987-01-01

    The use of parametric and functional gradient sensitivity analysis techniques is considered for models described by partial differential equations. By interchanging appropriate dependent and independent variables, questions of inverse sensitivity may be addressed to gain insight into the inversion of observational data for parameter and function identification in mathematical models. It may be argued that the presence of a subset of dominantly strong coupled dependent variables will result in the overall system sensitivity behavior collapsing into a simple set of scaling and self similarity relations amongst elements of the entire matrix of sensitivity coefficients. These general tools are generic in nature, but herein their application to problems arising in selected areas of physics and chemistry is presented.

  2. Pain sensitivity mediates the relationship between stress and headache intensity in chronic tension-type headache.

    PubMed

    Cathcart, Stuart; Bhullar, Navjot; Immink, Maarten; Della Vedova, Chris; Hayball, John

    2012-01-01

    A central model for chronic tension-type headache (CTH) posits that stress contributes to headache, in part, by aggravating existing hyperalgesia in CTH sufferers. The prediction from this model that pain sensitivity mediates the relationship between stress and headache activity has not yet been examined. To determine whether pain sensitivity mediates the relationship between stress and prospective headache activity in CTH sufferers. Self-reported stress, pain sensitivity and prospective headache activity were measured in 53 CTH sufferers recruited from the general population. Pain sensitivity was modelled as a mediator between stress and headache activity, and tested using a nonparametric bootstrap analysis. Pain sensitivity significantly mediated the relationship between stress and headache intensity. The results of the present study support the central model for CTH, which posits that stress contributes to headache, in part, by aggravating existing hyperalgesia in CTH sufferers. Implications for the mechanisms and treatment of CTH are discussed.

  3. Assessment of parametric uncertainty for groundwater reactive transport modeling

    USGS Publications Warehouse

    Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun

    2014-01-01

    The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and this assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, the predictive performance of the formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(ZS)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance.
The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.
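    One of the global methods mentioned above is Morris screening. A minimal sketch of the Morris elementary-effects measure mu* on a toy model (not the surface complexation model itself):

```python
import random

random.seed(1)

def model(x):
    # Toy stand-in for a reactive transport model response.
    return 3.0 * x[0] + x[1] ** 2

def morris_mu_star(model, d, r=50, delta=0.25):
    """Mean absolute elementary effect (mu*) for each parameter."""
    sums = [0.0] * d
    for _ in range(r):
        x = [random.uniform(0.0, 1.0 - delta) for _ in range(d)]
        y0 = model(x)
        for i in range(d):
            xp = list(x)
            xp[i] += delta                # perturb one factor at a time
            sums[i] += abs((model(xp) - y0) / delta)
    return [s / r for s in sums]

mu_star = morris_mu_star(model, d=2)
print(mu_star)  # the first factor dominates the screening measure
```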

  4. Receiver operating characteristic analysis of age-related changes in lineup performance.

    PubMed

    Humphries, Joyce E; Flowe, Heather D

    2015-04-01

    In the basic face memory literature, support has been found for the late maturation hypothesis, which holds that face recognition ability is not fully developed until at least adolescence. Support for the late maturation hypothesis in the criminal lineup identification literature, however, has been equivocal because of the analytic approach that has been used to examine age-related changes in identification performance. Recently, receiver operating characteristic (ROC) analysis was applied for the first time in the adult eyewitness memory literature to examine whether memory sensitivity differs across different types of lineup tests. ROC analysis allows for the separation of memory sensitivity from response bias in the analysis of recognition data. Here, we have made the first ROC-based comparison of adults' and children's (5- and 6-year-olds and 9- and 10-year-olds) memory performance on lineups by reanalyzing data from Humphries, Holliday, and Flowe (2012). In line with the late maturation hypothesis, memory sensitivity was significantly greater for adults compared with young children. Memory sensitivity for older children was similar to that for adults. The results indicate that the late maturation hypothesis can be generalized to account for age-related performance differences on an eyewitness memory task. The implications for developmental eyewitness memory research are discussed. Copyright © 2014 Elsevier Inc. All rights reserved.
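    The separation of memory sensitivity from response bias that ROC-style analyses provide can be illustrated with the standard signal detection indices d' and c; the hit and false-alarm rates below are hypothetical, not the study's data:

```python
from statistics import NormalDist

def dprime_and_bias(hit_rate, fa_rate):
    """Separate recognition sensitivity (d') from response bias (c)."""
    z = NormalDist().inv_cdf
    d = z(hit_rate) - z(fa_rate)           # sensitivity
    c = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias (criterion)
    return d, c

# Hypothetical rates for adults vs. young children on a lineup task.
adult = dprime_and_bias(hit_rate=0.80, fa_rate=0.20)
child = dprime_and_bias(hit_rate=0.60, fa_rate=0.30)
print(adult, child)
```

    With these illustrative numbers the adult group shows higher sensitivity at a neutral criterion, which is the kind of pattern ROC analysis can detect independently of response bias.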

  5. Biological Diversity Research: An Analysis

    Treesearch

    James W. McMinn

    1991-01-01

    An appropriate yardstick for a biodiversity program is how it affects the persistence of viable populations. A coordinated program of biodiversity research could be structured under three overlapping subject areas: (1) threatened, endangered, and sensitive species; (2) restoration of missing, underrepresented, or declining communities; and (3) general principles and...

  6. Affect in the "Communicative" Classroom: A Model.

    ERIC Educational Resources Information Center

    Acton, William

    Recent research on affective variables and classroom second language learning suggests that: (1) affective variables are context-sensitive in at least two ways; (2) attitudes are contagious, and the general attitude of students can be influenced from various directions; (3) research in pragmatics, discourse analysis, and communicative functions…

  7. Primal-dual methods of shape sensitivity analysis for curvilinear cracks with nonpenetration

    NASA Astrophysics Data System (ADS)

    Kovtunenko, V. A.

    2006-10-01

    Based on a level-set description of a crack moving with a given velocity, the problem of shape perturbation of the crack is considered. Nonpenetration conditions are imposed between opposite crack surfaces, which results in a constrained minimization problem describing the equilibrium of a solid with the crack. We suggest a minimax formulation of the state problem, thus allowing curvilinear (nonplanar) cracks to be considered. Utilizing primal-dual methods of shape sensitivity analysis, we obtain the general formula for a shape derivative of the potential energy, which describes an energy-release rate for curvilinear cracks. The conditions sufficient to rewrite it in the form of a path-independent integral (J-integral) are derived.

  8. Application of sensitivity-analysis techniques to the calculation of topological quantities

    NASA Astrophysics Data System (ADS)

    Gilchrist, Stuart

    2017-08-01

    Magnetic reconnection in the corona occurs preferentially at sites where the magnetic connectivity is either discontinuous or has a large spatial gradient. Hence there is a general interest in computing quantities (like the squashing factor) that characterize the gradient of the field-line mapping function. Here we present an algorithm for calculating certain (quasi)topological quantities using mathematical techniques from the field of "sensitivity analysis". The method is based on the calculation of a three-dimensional field-line mapping Jacobian from which all of the topological quantities of interest presented here can be derived. We will present the algorithm and the details of a publicly available set of libraries that implement it.
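    For a 2-D field-line mapping with Jacobian J = [[a, b], [c, d]], the squashing factor mentioned above is Q = (a² + b² + c² + d²)/|det J|. A minimal sketch (the mappings below are hypothetical examples, not coronal field data):

```python
def squashing_factor(a, b, c, d):
    """Squashing factor Q of a 2-D mapping Jacobian [[a, b], [c, d]]."""
    det = a * d - b * c
    return (a * a + b * b + c * c + d * d) / abs(det)

# Identity mapping: no squashing, Q takes its minimum value of 2.
print(squashing_factor(1.0, 0.0, 0.0, 1.0))  # 2.0
# Strongly sheared mapping: large Q flags a quasi-separatrix layer.
print(squashing_factor(1.0, 10.0, 0.0, 1.0))  # 102.0
```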

  9. Static and dynamic structural-sensitivity derivative calculations in the finite-element-based Engineering Analysis Language (EAL) system

    NASA Technical Reports Server (NTRS)

    Camarda, C. J.; Adelman, H. M.

    1984-01-01

    The implementation of static and dynamic structural-sensitivity derivative calculations in a general purpose, finite-element computer program denoted the Engineering Analysis Language (EAL) System is described. Derivatives are calculated with respect to structural parameters, specifically, member sectional properties including thicknesses, cross-sectional areas, and moments of inertia. Derivatives are obtained for displacements, stresses, vibration frequencies and mode shapes, and buckling loads and mode shapes. Three methods for calculating derivatives are implemented (analytical, semianalytical, and finite differences), and comparisons of computer time and accuracy are made. Results are presented for four examples: a swept wing, a box beam, a stiffened cylinder with a cutout, and a space radiometer-antenna truss.
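    The comparison of analytical and finite-difference derivatives can be sketched on a textbook case; the following uses a cantilever tip deflection w = PL³/(3EI) and its sensitivity with respect to the moment of inertia (values hypothetical, not from the EAL examples):

```python
# Compare an analytical sensitivity derivative with a central finite
# difference for w(I) = P*L**3 / (3*E*I), differentiated w.r.t. I.
P, L, E = 1000.0, 2.0, 70e9   # load [N], length [m], modulus [Pa]

def tip_deflection(I):
    return P * L**3 / (3.0 * E * I)

I0 = 1.0e-6                                    # moment of inertia [m^4]
analytical = -P * L**3 / (3.0 * E * I0**2)     # closed-form dw/dI
h = 1.0e-9                                     # finite-difference step
finite_diff = (tip_deflection(I0 + h) - tip_deflection(I0 - h)) / (2 * h)

print(analytical, finite_diff)  # the two estimates agree closely
```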

  10. Loop transfer recovery for general nonminimum phase discrete time systems. I - Analysis

    NASA Technical Reports Server (NTRS)

    Chen, Ben M.; Saberi, Ali; Sannuti, Peddapullaiah; Shamash, Yacov

    1992-01-01

    A complete analysis of loop transfer recovery (LTR) for general nonstrictly proper, not necessarily minimum phase discrete time systems is presented. Three different observer-based controllers, namely, `prediction estimator' and full or reduced-order type `current estimator' based controllers, are used. The analysis corresponding to all these three controllers is unified into a single mathematical framework. The LTR analysis given here focuses on three fundamental issues: (1) the recoverability of a target loop when it is arbitrarily given, (2) the recoverability of a target loop while taking into account its specific characteristics, and (3) the establishment of necessary and sufficient conditions on the given system so that it has at least one recoverable target loop transfer function or sensitivity function. Various differences that arise in LTR analysis of continuous and discrete systems are pointed out.

  11. Modeling Nitrogen Dynamics in a Waste Stabilization Pond System Using Flexible Modeling Environment with MCMC.

    PubMed

    Mukhtar, Hussnain; Lin, Yu-Pin; Shipin, Oleg V; Petway, Joy R

    2017-07-12

    This study presents an approach for obtaining realization sets of parameters for nitrogen removal in a pilot-scale waste stabilization pond (WSP) system. The proposed approach was designed for optimal parameterization, local sensitivity analysis, and global uncertainty analysis of a dynamic simulation model for the WSP by using the R software package Flexible Modeling Environment (R-FME) with the Markov chain Monte Carlo (MCMC) method. Additionally, generalized likelihood uncertainty estimation (GLUE) was integrated into the FME to evaluate the major parameters that affect the simulation outputs in the study WSP. Comprehensive modeling analysis was used to simulate and assess nine parameters and concentrations of ON-N, NH₃-N and NO₃-N. Results indicate that the integrated FME-GLUE-based model, with good Nash-Sutcliffe coefficients (0.53-0.69) and correlation coefficients (0.76-0.83), successfully simulates the concentrations of ON-N, NH₃-N and NO₃-N. Moreover, the Arrhenius constant was the only parameter sensitive to model performances of ON-N and NH₃-N simulations. However, the Nitrosomonas growth rate, the denitrification constant, and the maximum growth rate at 20 °C were sensitive parameters for the ON-N and NO₃-N simulations, as measured by global sensitivity analysis.
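    The GLUE procedure mentioned above retains "behavioural" parameter sets whose likelihood measure (here the Nash-Sutcliffe efficiency) exceeds a threshold, and weights their predictions by that likelihood. A toy sketch with synthetic data (not the WSP model):

```python
import numpy as np

rng = np.random.default_rng(2)

def nash_sutcliffe(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical observations and an ensemble of parameterized simulations.
t = np.linspace(0.0, 1.0, 50)
obs = np.sin(2 * np.pi * t)
params = rng.uniform(0.0, 2.0, size=200)                # sampled parameter
sims = np.array([a * np.sin(2 * np.pi * t) for a in params])

# GLUE: keep behavioural runs above a likelihood threshold and form a
# likelihood-weighted prediction from them.
nse = np.array([nash_sutcliffe(obs, s) for s in sims])
behavioural = nse > 0.5
weights = nse[behavioural] / nse[behavioural].sum()
prediction = weights @ sims[behavioural]
print(behavioural.sum(), prediction.shape)
```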

  12. Modeling and Analysis of a Combined Stress-Vibration Fiber Bragg Grating Sensor

    PubMed Central

    Yao, Kun; Lin, Qijing; Jiang, Zhuangde; Zhao, Na; Tian, Bian; Shi, Peng; Peng, Gang-Ding

    2018-01-01

    A combined stress-vibration sensor was developed to measure stress and vibration simultaneously based on fiber Bragg grating (FBG) technology. The sensor is composed of two FBGs and a stainless steel plate with a special design. The two FBGs sense vibration and stress, and the sensor realizes temperature compensation by itself. The stainless steel plate significantly increases the sensitivity of the vibration measurement. Theoretical analysis and the Finite Element Method (FEM) were used to analyze the sensor’s working mechanism. As demonstrated by the analysis, the sensor has a working range of 0–6000 Hz for vibration sensing and 0–100 MPa for stress sensing. The corresponding vibration sensitivity is 0.46 pm/g and the resulting stress sensitivity is 5.94 pm/MPa, while the nonlinearity errors for vibration and stress measurement are 0.77% and 1.02%, respectively. Compared to general FBGs, the vibration sensitivity of this sensor is 26.2 times higher. Therefore, the developed sensor can be used to detect vibration and stress concurrently. As this sensor has a height of 1 mm and a weight of 1.15 g, it is well suited to miniaturization and integration. PMID:29494544

  13. Modeling and Analysis of a Combined Stress-Vibration Fiber Bragg Grating Sensor.

    PubMed

    Yao, Kun; Lin, Qijing; Jiang, Zhuangde; Zhao, Na; Tian, Bian; Shi, Peng; Peng, Gang-Ding

    2018-03-01

    A combined stress-vibration sensor was developed to measure stress and vibration simultaneously based on fiber Bragg grating (FBG) technology. The sensor is composed of two FBGs and a stainless steel plate with a special design. The two FBGs sense vibration and stress, and the sensor realizes temperature compensation by itself. The stainless steel plate significantly increases the sensitivity of the vibration measurement. Theoretical analysis and the Finite Element Method (FEM) were used to analyze the sensor's working mechanism. As demonstrated by the analysis, the sensor has a working range of 0-6000 Hz for vibration sensing and 0-100 MPa for stress sensing. The corresponding vibration sensitivity is 0.46 pm/g and the resulting stress sensitivity is 5.94 pm/MPa, while the nonlinearity errors for vibration and stress measurement are 0.77% and 1.02%, respectively. Compared to general FBGs, the vibration sensitivity of this sensor is 26.2 times higher. Therefore, the developed sensor can be used to detect vibration and stress concurrently. As this sensor has a height of 1 mm and a weight of 1.15 g, it is well suited to miniaturization and integration.
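    Stress sensitivities of the order reported above (5.94 pm/MPa) follow from the standard Bragg wavelength-shift relation Δλ = λ_B(1 − p_e)ε. A rough order-of-magnitude sketch with generic constants (not the sensor's calibrated values):

```python
# Minimal FBG sketch: the Bragg wavelength shifts with strain as
# dlambda = lambda_B * (1 - p_e) * eps. Constants below are generic
# textbook values, chosen only to show the order of magnitude.
lambda_B = 1550e-9        # Bragg wavelength [m]
p_e = 0.22                # effective photo-elastic coefficient of silica
E_steel = 200e9           # elastic modulus of a steel substrate [Pa]

def wavelength_shift(stress_pa):
    eps = stress_pa / E_steel                # stress -> strain (Hooke)
    return lambda_B * (1.0 - p_e) * eps      # strain -> Bragg shift [m]

shift = wavelength_shift(1e6)                # shift for 1 MPa
print(shift * 1e12, "pm per MPa")            # ~6 pm/MPa, same order as 5.94
```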

  14. VARS-TOOL: A Comprehensive, Efficient, and Robust Sensitivity Analysis Toolbox

    NASA Astrophysics Data System (ADS)

    Razavi, S.; Sheikholeslami, R.; Haghnegahdar, A.; Esfahbod, B.

    2016-12-01

    VARS-TOOL is an advanced sensitivity and uncertainty analysis toolbox, applicable to the full range of computer simulation models, including Earth and Environmental Systems Models (EESMs). The toolbox was developed originally around VARS (Variogram Analysis of Response Surfaces), which is a general framework for Global Sensitivity Analysis (GSA) that utilizes the variogram/covariogram concept to characterize the full spectrum of sensitivity-related information, thereby providing a comprehensive set of "global" sensitivity metrics with minimal computational cost. VARS-TOOL is unique in that, with a single sample set (set of simulation model runs), it generates simultaneously three philosophically different families of global sensitivity metrics: (1) variogram-based metrics called IVARS (Integrated Variogram Across a Range of Scales - VARS approach), (2) variance-based total-order effects (Sobol approach), and (3) derivative-based elementary effects (Morris approach). VARS-TOOL also provides two novel features: the first is a sequential sampling algorithm, called Progressive Latin Hypercube Sampling (PLHS), which allows progressively increasing the sample size for GSA while maintaining the required sample distributional properties. The second is a "grouping strategy" that adaptively groups the model parameters based on their sensitivity or functioning to maximize the reliability of GSA results. These features in conjunction with bootstrapping enable the user to monitor the stability, robustness, and convergence of GSA with the increase in sample size for any given case study. VARS-TOOL has been shown to achieve robust and stable results within 1-2 orders of magnitude smaller sample sizes (fewer model runs) than alternative tools. VARS-TOOL, available in MATLAB and Python, is under continuous development and new capabilities and features are forthcoming.
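    The variogram concept at the core of VARS can be sketched in one dimension: for a response y(x), γ(h) = ½·E[(y(x+h) − y(x))²] summarizes sensitivity across perturbation scales h. A toy illustration (not VARS-TOOL itself):

```python
import math

# Toy 1-D response surface sampled on a regular grid.
x = [i / 200.0 for i in range(201)]
y = [math.sin(4 * math.pi * xi) for xi in x]

def variogram(y, lag):
    """gamma(h): half the mean squared increment at the given lag."""
    diffs = [(y[i + lag] - y[i]) ** 2 for i in range(len(y) - lag)]
    return 0.5 * sum(diffs) / len(diffs)

for lag in (1, 5, 20):
    # gamma grows with lag for this smooth response, capturing how
    # sensitivity information varies with the perturbation scale.
    print(lag, variogram(y, lag))
```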

  15. Sensitivity analysis for axis rotation diagrid structural systems according to brace angle changes

    NASA Astrophysics Data System (ADS)

    Yang, Jae-Kwang; Li, Long-Yang; Park, Sung-Soo

    2017-10-01

    General regular-shaped diagrid structures can express diverse shapes because braces are installed along the exterior faces of the structures and the structures have no columns. However, since irregular-shaped structures involve diverse variables, studies assessing the behaviours arising from these variables are continually required to compensate for the associated uncertainties. In the present study, the material's elastic modulus and yield strength were selected as the strength variables affecting the structural design of diagrid structural systems in the form of Twisters, one of the irregular building shapes classified by Vollers. The purpose of this study is to conduct a sensitivity analysis of axial-rotation diagrid structural systems with respect to changes in brace angle, in order to identify the design variables that have relatively larger effects and the tendencies of the structures' sensitivity to changes in brace angle and axial rotation angle.

  16. Make or buy decision model with multi-stage manufacturing process and supplier imperfect quality

    NASA Astrophysics Data System (ADS)

    Pratama, Mega Aria; Rosyidi, Cucuk Nur

    2017-11-01

    This research develops a make-or-buy decision model that considers supplier imperfect quality. The model can help companies decide whether to make or buy a component so as to obtain the best quality at the least cost in a multistage manufacturing process. Imperfect quality is one of the cost components to be minimized in this model. A component of imperfect quality is not necessarily defective; it can still be reworked and used for assembly. This research also provides a numerical example and a sensitivity analysis to show how the model works. The numerical problem is solved by simulation, aided by the Crystal Ball software. The sensitivity analysis shows that the percentage of imperfect items generally does not affect the model significantly, and the model is not sensitive to changes in these parameters. This is because the imperfect-quality costs are small relative to the overall total cost.

  17. Geometrical analysis of an optical fiber bundle displacement sensor

    NASA Astrophysics Data System (ADS)

    Shimamoto, Atsushi; Tanaka, Kohichi

    1996-12-01

    The performance of a multifiber optical lever was geometrically analyzed by extending the Cook and Hamm model [Appl. Opt. 34, 5854-5860 (1995)] for a basic seven-fiber optical lever. The generalized relationships of sensitivity and the displacement detection limit to the fiber core radius, illumination irradiance, and coupling angle were obtained through analyses of three types of light source: a parallel-beam light source, an infinite-plane light source, and a point light source. The analysis of the point light source was confirmed by a measurement that used a light-emitting diode as the light source. The sensitivity of the fiber-optic lever is inversely proportional to the fiber core radius, whereas the received light power is proportional to the number of illuminating and receiving fibers. Thus, bundling finer fibers in larger numbers of illuminating and receiving fibers is more effective for improving the sensitivity and the displacement detection limit.

  18. Longitudinal study of factors affecting taste sense decline in old-old individuals.

    PubMed

    Ogawa, T; Uota, M; Ikebe, K; Arai, Y; Kamide, K; Gondo, Y; Masui, Y; Ishizaki, T; Inomata, C; Takeshita, H; Mihara, Y; Hatta, K; Maeda, Y

    2017-01-01

    The sense of taste plays a pivotal role for personal assessment of the nutritional value, safety and quality of foods. Although it is commonly recognised that taste sensitivity decreases with age, alterations in that sensitivity over time in an old-old population have not been previously reported. Furthermore, no known studies utilised comprehensive variables regarding taste changes and related factors for assessments. Here, we report novel findings from a 3-year longitudinal study model aimed to elucidate taste sensitivity decline and its related factors in old-old individuals. We utilised 621 subjects aged 79-81 years who participated in the Septuagenarians, Octogenarians, Nonagenarians Investigation with Centenarians Study for baseline assessments performed in 2011 and 2012, and then conducted follow-up assessments 3 years later in 328 of those. Assessment of general health, an oral examination and determination of taste sensitivity were performed for each. We also evaluated cognitive function using Montreal Cognitive Assessment findings, then excluded from analysis those with a score lower than 20 in order to secure the validity and reliability of the subjects' answers. Contributing variables were selected using univariate analysis, then analysed with multivariate logistic regression analysis. We found that males showed significantly greater declines in taste sensitivity for sweet and sour tastes than females. Additionally, subjects with lower cognitive scores showed a significantly greater taste decrease for salty in multivariate analysis. In conclusion, our longitudinal study revealed that gender and cognitive status are major factors affecting taste sensitivity in geriatric individuals. © 2016 John Wiley & Sons Ltd.

  19. The ultimate efficiency of photosensitive systems

    NASA Technical Reports Server (NTRS)

    Buoncristiani, A. M.; Byvik, C. E.; Smith, B. T.

    1981-01-01

    These systems have in common two important but not independent features: they can produce a storable fuel, and they are sensitive only to radiant energy with a characteristic absorption spectrum. General analyses of the conversion efficiencies were made using the operational characteristics of each particular system. An efficiency analysis of a generalized system consisting of a blackbody source, a radiant energy converter having a threshold energy and operating temperature, and a reservoir is reported. This analysis is based upon the first and second laws of thermodynamics, and leads to a determination of the limiting or ultimate efficiency for an energy conversion system having a characteristic threshold.

  20. Shot noise-limited Cramér-Rao bound and algorithmic sensitivity for wavelength shifting interferometry

    NASA Astrophysics Data System (ADS)

    Chen, Shichao; Zhu, Yizheng

    2017-02-01

    Sensitivity is a critical index of the temporal fluctuation of the retrieved optical pathlength in quantitative phase imaging systems. However, an accurate and comprehensive analysis for sensitivity evaluation is still lacking in the current literature. In particular, previous theoretical studies of fundamental sensitivity based on Gaussian noise models are not applicable to modern cameras and detectors, which are dominated by shot noise. In this paper, we derive two shot noise-limited theoretical sensitivities, the Cramér-Rao bound and the algorithmic sensitivity, for wavelength shifting interferometry, a major category of on-axis interferometry techniques in quantitative phase imaging. Based on the derivations, we show that the shot noise-limited model permits accurate estimation of theoretical sensitivities directly from measured data. These results provide important insights into the fundamental constraints on system performance and can be used to guide system design and optimization. The same concepts can be generalized to other quantitative phase imaging techniques as well.
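    As a rough illustration of shot-noise-limited sensitivity (an idealized textbook bound, not the paper's wavelength-shifting derivation), the phase uncertainty of an ideal two-beam interferometer scales as 1/(V·√N) for N detected photons and fringe visibility V:

```python
import math

# Idealized shot-noise scaling: sigma_phi ~ 1 / (V * sqrt(N)).
# This is a textbook approximation for illustration only; the exact
# Cramer-Rao bound depends on the phase-shifting scheme and operating point.
def phase_crb(n_photons, visibility=1.0):
    return 1.0 / (visibility * math.sqrt(n_photons))

# Corresponding pathlength sensitivity at lambda = 633 nm, 1e6 photons.
lam = 633e-9
sigma_phi = phase_crb(1e6)               # ~1 mrad of phase
sigma_opl = sigma_phi * lam / (2 * math.pi)
print(sigma_phi, sigma_opl)              # sub-nanometre pathlength noise
```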

  1. Analysis of consumers' preferences and behavior with regard to horse meat using a structured survey questionnaire.

    PubMed

    Oh, Woon Yong; Lee, Ji Woong; Lee, Chong Eon; Ko, Moon Seok; Jeong, Jae Hong

    2009-12-01

    In this study, a structured survey questionnaire was used to determine consumers' preferences and behavior with regard to horse meat at a horse meat restaurant located in Jeju, Korea, from October 1 to December 24, 2005. The questionnaire employed in this study consisted of 20 questions designed to characterize six general attributes: horse meat sensory property, physical appearance, health condition, origin, price, and other attributes. Of the 1370 questionnaires distributed, 1126 completed questionnaires were retained based on the completeness of the answers, representing an 82.2% response rate. Two issues were investigated that might facilitate the search for ways to improve horse meat production and marketing programs in Korea. The first step was to determine certain important factors, called principal components, which enabled the researchers to understand the needs of horse meat consumers via principal component analysis. The second step was to define consumer segments with regard to their preferences for horse meat, which was accomplished via cluster analysis. The results of the current study showed that health condition, price, origin, and leanness were the most critical physical attributes affecting the preferences of horse meat consumers. Four segments of consumers, with different demands for horse meat attributes, were identified: origin-sensitive consumers, price-sensitive consumers, quality and safety-sensitive consumers, and non-specific consumers. Significant differences existed among segments of consumers in terms of age, nature of work, frequency of consumption, and general level of acceptability of horse meat.
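    The two analysis steps described above (principal component analysis followed by cluster analysis) can be sketched on synthetic survey data; the item groupings, sample size, and cluster count below are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic Likert-style responses (rows: respondents, cols: hypothetical items)
n, p = 300, 6
base = rng.integers(1, 6, size=(n, p)).astype(float)
# Induce two correlated item groups so PCA has structure to find
base[:, 1] = base[:, 0] + rng.normal(0, 0.5, n)   # e.g. "price" items move together
base[:, 3] = base[:, 2] + rng.normal(0, 0.5, n)   # e.g. "origin" items move together

# Principal component analysis via eigen-decomposition of the correlation matrix
Z = (base - base.mean(0)) / base.std(0)
eigval, eigvec = np.linalg.eigh(np.cov(Z, rowvar=False))
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]
explained = eigval / eigval.sum()     # variance explained per component
scores = Z @ eigvec[:, :2]            # respondents in 2-component space

# Simple k-means (4 segments, as in the study) on the component scores
k = 4
centers = scores[rng.choice(n, k, replace=False)]
for _ in range(20):
    labels = np.argmin(((scores[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([scores[labels == j].mean(0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
```

    Each respondent ends up with a segment label, mirroring the paper's consumer segments derived from component scores.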

  2. Modeling cost of ultrasound versus nerve stimulator guidance for nerve blocks with sensitivity analysis.

    PubMed

    Liu, Spencer S; John, Raymond S

    2010-01-01

    Ultrasound guidance for regional anesthesia has increased in popularity. However, the cost of ultrasound versus nerve stimulator guidance is controversial, as multiple and varying cost inputs are involved. Sensitivity analysis allows modeling of different scenarios and determination of the relative importance of each cost input for a given scenario. We modeled cost per patient of ultrasound versus nerve stimulator using single-factor sensitivity analysis for 4 different clinical scenarios designed to span the expected financial impact of ultrasound guidance. The primary cost factors for ultrasound were revenue from billing for ultrasound (85% of variation in final cost), number of patients examined per ultrasound machine (10%), and block success rate (2.6%). In contrast, the most important input factors for nerve stimulator were the success rate of the nerve stimulator block (89%) and the amount of liability payout for failed airway due to rescue general anesthesia (9%). Depending on clinical scenario, ultrasound was either a profit or cost center. If revenue is generated, then ultrasound-guided blocks consistently become a profit center regardless of clinical scenario in our model. Without revenue, the clinical scenario dictates the cost of ultrasound. In an ambulatory setting, ultrasound is highly competitive with nerve stimulator and requires at least a 96% success rate with nerve stimulator before becoming more expensive. In a hospitalized scenario, ultrasound is consistently more expensive as the uniform use of general anesthesia and hospitalization negate any positive cost effects from greater efficiency with ultrasound.
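    The single-factor (one-at-a-time) sensitivity analysis described above can be sketched with a hypothetical cost model; all parameter names and values below are illustrative assumptions, not the paper's inputs.

```python
def block_cost(p):
    """Hypothetical per-patient cost of an ultrasound-guided block
    (illustrative model, not the paper's)."""
    equipment = p["machine_cost"] / p["patients_per_machine"]
    failures = (1 - p["success_rate"]) * p["rescue_cost"]
    return equipment + failures - p["billing_revenue"]

baseline = {
    "machine_cost": 40000.0,        # assumed purchase cost
    "patients_per_machine": 2000.0, # assumed patients per machine lifetime
    "success_rate": 0.90,           # assumed block success rate
    "rescue_cost": 500.0,           # assumed cost of rescue general anesthesia
    "billing_revenue": 30.0,        # assumed revenue per billed scan
}

def oat_sensitivity(model, base, rel=0.10):
    """One-at-a-time sensitivity: vary each input +/-10%, record output swing."""
    out = {}
    for name in base:
        lo, hi = dict(base), dict(base)
        lo[name] *= 1 - rel
        hi[name] *= 1 + rel
        out[name] = model(hi) - model(lo)
    return out

swings = oat_sensitivity(block_cost, baseline)
ranked = sorted(swings, key=lambda k: abs(swings[k]), reverse=True)
```

    Ranking inputs by output swing identifies the dominant cost drivers for a scenario, which is how the paper attributes percentages of cost variation to individual inputs.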

  3. The structure of paranoia in the general population.

    PubMed

    Bebbington, Paul E; McBride, Orla; Steel, Craig; Kuipers, Elizabeth; Radovanovic, Mirjana; Brugha, Traolach; Jenkins, Rachel; Meltzer, Howard I; Freeman, Daniel

    2013-06-01

    Psychotic phenomena appear to form a continuum with normal experience and beliefs, and may build on common emotional interpersonal concerns. We tested predictions that paranoid ideation is exponentially distributed and hierarchically arranged in the general population, and that persecutory ideas build on more common cognitions of mistrust, interpersonal sensitivity and ideas of reference. Items were chosen from the Structured Clinical Interview for DSM-IV Axis II Disorders (SCID-II) questionnaire and the Psychosis Screening Questionnaire in the second British National Survey of Psychiatric Morbidity (n = 8580), to test a putative hierarchy of paranoid development using confirmatory factor analysis, latent class analysis and factor mixture modelling analysis. Different types of paranoid ideation ranged in frequency from less than 2% to nearly 30%. Total scores on these items followed an almost perfect exponential distribution (r = 0.99). Our four a priori first-order factors were corroborated (interpersonal sensitivity; mistrust; ideas of reference; ideas of persecution). These mapped onto four classes of individual respondents: a rare, severe, persecutory class with high endorsement of all item factors, including persecutory ideation; a quasi-normal class with infrequent endorsement of interpersonal sensitivity, mistrust and ideas of reference, and no ideas of persecution; and two intermediate classes, characterised respectively by relatively high endorsement of items relating to mistrust and to ideas of reference. The paranoia continuum has implications for the aetiology, mechanisms and treatment of psychotic disorders, while confirming the lack of a clear distinction from normal experiences and processes.

  4. Automatic network coupling analysis for dynamical systems based on detailed kinetic models.

    PubMed

    Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich

    2005-10-01

    We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.
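    The core step, counting locally active dynamical modes from a singular value decomposition of a sensitivity matrix, can be sketched as follows; the matrix is constructed with an assumed two-mode structure purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sensitivity matrix S[i, j] = d(species_i)/d(parameter_j) at one
# point along a trajectory, built with exactly two dominant modes (assumed).
n_species, n_params = 8, 5
U, _ = np.linalg.qr(rng.normal(size=(n_species, n_species)))
V, _ = np.linalg.qr(rng.normal(size=(n_params, n_params)))
sv_true = np.array([3.0, 1.5, 1e-6, 1e-7, 0.0])
S = U[:, :n_params] @ np.diag(sv_true) @ V.T

# Error-controlled mode count: singular values above a relative threshold
sv = np.linalg.svd(S, compute_uv=False)
tol = 1e-3 * sv[0]
n_active = int((sv > tol).sum())   # number of locally active dynamical modes
```

    Repeating this along state trajectories gives a time-resolved minimal model dimension, in the spirit of the procedure the abstract describes.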

  5. New Method for Analysis of Multiple Anthelmintic Residues in Animal Tissue

    USDA-ARS?s Scientific Manuscript database

    For the first time, 39 of the major anthelmintics can be detected in one rapid and sensitive LC-MS/MS method, including the flukicides, which have been generally overlooked in surveillance programs. Utilizing the QuEChERS approach, residues were extracted from liver and milk using acetonitrile, sod...

  6. LLNA variability: An essential ingredient for a comprehensive assessment of non-animal skin sensitization test methods and strategies.

    PubMed

    Hoffmann, Sebastian

    2015-01-01

    The development of non-animal skin sensitization test methods and strategies is progressing quickly. Whether used individually or in combination, their predictive capacity is usually described in comparison to local lymph node assay (LLNA) results. In this process, the important lesson from other endpoints, such as skin or eye irritation - to account for the variability of the reference test results, here the LLNA - has not yet been fully acknowledged. In order to provide assessors as well as method and strategy developers with appropriate estimates, we investigated the variability of EC3 values from repeated substance testing using the publicly available NICEATM (NTP Interagency Center for the Evaluation of Alternative Toxicological Methods) LLNA database. Repeat experiments for more than 60 substances were analyzed - once taking the vehicle into account and once combining data over all vehicles. In general, variability was higher when different vehicles were used. In terms of skin sensitization potential, i.e., discriminating sensitizers from non-sensitizers, the false positive rate ranged from 14-20%, while the false negative rate was 4-5%. In terms of skin sensitization potency, the rate of assigning a substance to the next higher or next lower potency class was approximately 10-15%. In addition, general estimates of EC3 variability are provided that can be used for modelling purposes. With our analysis we stress the importance of considering LLNA variability in the assessment of skin sensitization test methods and strategies, and provide estimates thereof.

  7. Modeling Canadian Quality Control Test Program for Steroid Hormone Receptors in Breast Cancer: Diagnostic Accuracy Study.

    PubMed

    Pérez, Teresa; Makrestsov, Nikita; Garatt, John; Torlakovic, Emina; Gilks, C Blake; Mallett, Susan

    The Canadian Immunohistochemistry Quality Control program monitors clinical laboratory performance for estrogen receptor and progesterone receptor tests used in breast cancer treatment management in Canada. Current methods assess sensitivity and specificity at each time point, compared with a reference standard. We investigate alternative performance analysis methods to enhance the quality assessment. We used 3 methods of analysis: meta-analysis of sensitivity and specificity of each laboratory across all time points; sensitivity and specificity at each time point for each laboratory; and fitting models for repeated measurements to examine differences between laboratories adjusted by test and time point. Results show 88 laboratories participated in quality control at up to 13 time points using typically 37 to 54 histology samples. In meta-analysis across all time points no laboratories have sensitivity or specificity below 80%. Current methods, presenting sensitivity and specificity separately for each run, result in wide 95% confidence intervals, typically spanning 15% to 30%. Models of a single diagnostic outcome demonstrated that 82% to 100% of laboratories had no difference to reference standard for estrogen receptor and 75% to 100% for progesterone receptor, with the exception of 1 progesterone receptor run. Laboratories with significant differences to reference standard identified with Generalized Estimating Equation modeling also have reduced performance by meta-analysis across all time points. The Canadian Immunohistochemistry Quality Control program has a good design, and with this modeling approach has sufficient precision to measure performance at each time point and allow laboratories with a significantly lower performance to be targeted for advice.

  8. User’s Manual for SEEK TALK Full Scale Engineering Development Life Cycle Cost (LCC) Model. Volume II. Model Equations and Model Operations.

    DTIC Science & Technology

    1981-04-01

    Keywords: Life Cycle Cost (LCC), LCC Sensitivity Analysis, LCC Model, Repair Level Analysis (RLA). ... repair level analysis capability. Next it provides values for Air Force input parameters and instructions for contractor inputs, general operating ... Contents include: 5.1.3 Maintenance Manhour Requirements, 39; 5.1.4 Calculation of Repair Level Fractions, 43; 5.2 Cost Element Equations, 47; 5.2.1 Production Cost Element, 47.

  9. Linked Sensitivity Analysis, Calibration, and Uncertainty Analysis Using a System Dynamics Model for Stroke Comparative Effectiveness Research.

    PubMed

    Tian, Yuan; Hassmiller Lich, Kristen; Osgood, Nathaniel D; Eom, Kirsten; Matchar, David B

    2016-11-01

    As health services researchers and decision makers tackle more difficult problems using simulation models, the number of parameters and the corresponding degree of uncertainty have increased. This often results in reduced confidence in such complex models to guide decision making. To demonstrate a systematic approach of linked sensitivity analysis, calibration, and uncertainty analysis to improve confidence in complex models. Four techniques were integrated and applied to a System Dynamics stroke model of US veterans, which was developed to inform systemwide intervention and research planning: Morris method (sensitivity analysis), multistart Powell hill-climbing algorithm and generalized likelihood uncertainty estimation (calibration), and Monte Carlo simulation (uncertainty analysis). Of 60 uncertain parameters, sensitivity analysis identified 29 needing calibration, 7 that did not need calibration but significantly influenced key stroke outcomes, and 24 not influential to calibration or stroke outcomes that were fixed at their best guess values. One thousand alternative well-calibrated baselines were obtained to reflect calibration uncertainty and brought into uncertainty analysis. The initial stroke incidence rate among veterans was identified as the most influential uncertain parameter, for which further data should be collected. That said, accounting for current uncertainty, the analysis of 15 distinct prevention and treatment interventions provided a robust conclusion that hypertension control for all veterans would yield the largest gain in quality-adjusted life years. For complex health care models, a mixed approach was applied to examine the uncertainty surrounding key stroke outcomes and the robustness of conclusions. We demonstrate that this rigorous approach can be practical and advocate for such analysis to promote understanding of the limits of certainty in applying models to current decisions and to guide future data collection. 
© The Author(s) 2016.
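    The first technique in the linked pipeline, Morris-method screening, can be sketched with a toy model; the model and all settings below are illustrative assumptions, not the stroke model itself.

```python
import numpy as np

rng = np.random.default_rng(3)

def model(x):
    """Toy stand-in for a simulation output: x[0] and x[1] influential,
    x[2] nearly inert (assumed for illustration)."""
    return 4.0 * x[0] + x[1] ** 2 + 0.01 * x[2]

def morris_mu_star(f, k, r=50, delta=0.25):
    """Morris method: r random one-at-a-time trajectories in [0, 1]^k,
    returning mu* (mean absolute elementary effect) per input."""
    effects = np.zeros((r, k))
    for t in range(r):
        x = rng.uniform(0, 1 - delta, size=k)   # start so x + delta stays in [0, 1]
        y = f(x)
        for i in rng.permutation(k):            # perturb inputs one at a time
            x_new = x.copy()
            x_new[i] += delta
            y_new = f(x_new)
            effects[t, i] = (y_new - y) / delta
            x, y = x_new, y_new
    return np.abs(effects).mean(axis=0)

mu_star = morris_mu_star(model, k=3)
```

    Ranking inputs by mu* is how such a screening step separates parameters needing calibration from those that can be fixed at best-guess values.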

  10. Association between atopic dermatitis and contact sensitization: A systematic review and meta-analysis.

    PubMed

    Hamann, Carsten R; Hamann, Dathan; Egeberg, Alexander; Johansen, Jeanne D; Silverberg, Jonathan; Thyssen, Jacob P

    2017-07-01

    It is unclear whether patients with atopic dermatitis (AD) have an altered prevalence or risk for contact sensitization. Increased exposure to chemicals in topical products together with impaired skin barrier function suggest a higher risk, whereas the immune profile suggests a lower risk. To perform a systematic review and meta-analysis of the association between AD and contact sensitization. The PubMed/Medline, Embase, and Cochrane databases were searched for articles that reported on contact sensitization in individuals with and without AD. The literature search yielded 10,083 citations; 417 were selected based on title and abstract screening and 74 met inclusion criteria. In a pooled analysis, no significant difference in contact sensitization between AD and controls was evident (random effects model odds ratio [OR] = 0.891; 95% confidence interval [CI] = 0.771-1.03). There was a positive correlation in studies that compared AD patients with individuals from the general population (OR 1.50, 95% CI 1.23-1.93) but an inverse association when comparing with referred populations (OR 0.753, 95% CI 0.63-0.90). Included studies used different tools to diagnose AD and did not always provide information on current or past disease. Patch test allergens varied between studies. No overall relationship between AD and contact sensitization was found. We recommend that clinicians consider patch testing AD patients when allergic contact dermatitis is suspected. Copyright © 2017 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.
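    The random-effects pooling used in the abstract can be sketched with the standard DerSimonian-Laird method; the 2x2 study counts below are invented for illustration and are not the review's data.

```python
import numpy as np

# Hypothetical per-study counts: (AD sensitized, AD total, control sensitized,
# control total) -- illustrative numbers only.
studies = [
    (30, 100, 25, 120),
    (12, 80, 18, 90),
    (45, 150, 40, 160),
    (8, 60, 15, 70),
]

log_or, var = [], []
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c
    log_or.append(np.log((a * d) / (b * c)))     # study log odds ratio
    var.append(1/a + 1/b + 1/c + 1/d)            # Woolf variance
log_or, var = np.array(log_or), np.array(var)

# DerSimonian-Laird between-study variance tau^2
w = 1 / var
fixed = (w * log_or).sum() / w.sum()
q = (w * (log_or - fixed) ** 2).sum()
df = len(studies) - 1
tau2 = max(0.0, (q - df) / (w.sum() - (w**2).sum() / w.sum()))

# Random-effects pooled odds ratio and 95% confidence interval
w_re = 1 / (var + tau2)
pooled = (w_re * log_or).sum() / w_re.sum()
se = np.sqrt(1 / w_re.sum())
or_pooled = np.exp(pooled)
ci = (np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se))
```

    A pooled CI straddling 1, as in the review's overall result, indicates no significant difference between groups.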

  11. A generalized procedure for analyzing sustained and dynamic vocal fold vibrations from laryngeal high-speed videos using phonovibrograms.

    PubMed

    Unger, Jakob; Schuster, Maria; Hecker, Dietmar J; Schick, Bernhard; Lohscheller, Jörg

    2016-01-01

    This work presents a computer-based approach to analyze the two-dimensional vocal fold dynamics of endoscopic high-speed videos, and constitutes an extension and generalization of a previously proposed wavelet-based procedure. While most approaches aim for analyzing sustained phonation conditions, the proposed method allows for a clinically adequate analysis of both dynamic as well as sustained phonation paradigms. The analysis procedure is based on a spatio-temporal visualization technique, the phonovibrogram, that facilitates the documentation of the visible laryngeal dynamics. From the phonovibrogram, a low-dimensional set of features is computed using a principal component analysis strategy that quantifies the type of vibration patterns, irregularity, lateral symmetry and synchronicity, as a function of time. Two different test bench data sets are used to validate the approach: (I) 150 healthy and pathologic subjects examined during sustained phonation. (II) 20 healthy and pathologic subjects that were examined twice: during sustained phonation and a glissando from a low to a higher fundamental frequency. In order to assess the discriminative power of the extracted features, a Support Vector Machine is trained to distinguish between physiologic and pathologic vibrations. The results for sustained phonation sequences are compared to the previous approach. Finally, the classification performance of the stationary analyzing procedure is compared to the transient analysis of the glissando maneuver. For the first test bench the proposed procedure outperformed the previous approach (proposed feature set: accuracy: 91.3%, sensitivity: 80%, specificity: 97%, previous approach: accuracy: 89.3%, sensitivity: 76%, specificity: 96%). 
Comparing the classification performance of the second test bench further corroborates that analyzing transient paradigms provides clear additional diagnostic value (glissando maneuver: accuracy: 90%, sensitivity: 100%, specificity: 80%, sustained phonation: accuracy: 75%, sensitivity: 80%, specificity: 70%). The incorporation of parameters describing the temporal evolvement of vocal fold vibration clearly improves the automatic identification of pathologic vibration patterns. Furthermore, incorporating a dynamic phonation paradigm provides additional valuable information about the underlying laryngeal dynamics that cannot be derived from sustained conditions. The proposed generalized approach provides a better overall classification performance than the previous approach, and hence constitutes a new advantageous tool for an improved clinical diagnosis of voice disorders. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, J.M.; Callahan, C.A.; Cline, J.F.

    Bioassays were used in a three-phase research project to assess the comparative sensitivity of test organisms to known chemicals, determine if the chemical components in field soil and water samples containing unknown contaminants could be inferred from our laboratory studies using known chemicals, and to investigate kriging (a relatively new statistical mapping technique) and bioassays as methods to define the areal extent of chemical contamination. The algal assay generally was most sensitive to samples of pure chemicals, soil elutriates and water from eight sites with known chemical contamination. Bioassays of nine samples of unknown chemical composition from the Rocky Mountain Arsenal (RMA) site showed that a lettuce seed soil contact phytoassay was most sensitive. In general, our bioassays can be used to broadly identify toxic components of contaminated soil. Nearly pure compounds of insecticides and herbicides were less toxic in the sensitive bioassays than were the counterpart commercial formulations. This finding indicates that chemical analysis alone may fail to correctly rate the severity of environmental toxicity. Finally, we used the lettuce seed phytoassay and kriging techniques in a field study at RMA to demonstrate the feasibility of mapping contamination to aid in cleanup decisions. 25 references, 9 figures, 9 tables.

  13. Investigating relativity using lunar laser ranging - Geodetic precession and the Nordtvedt effect

    NASA Technical Reports Server (NTRS)

    Dickey, J. O.; Newhall, X. X.; Williams, J. G.

    1989-01-01

    The emplacement of retroreflectors on the moon by Apollo astronauts and the Russian Lunokhod spacecraft marked the inception of lunar laser ranging (LLR) and provided a natural laboratory for the study of general relativity. Continuing acquisition of increasingly accurate LLR data has provided enhanced sensitivity to general relativity parameters. Two relativistic effects are investigated in this paper: (1) the Nordtvedt effect, yielding a test of the strong equivalence principle, would appear as a distortion of the geocentric lunar orbit in the direction of the sun. The inclusion of recent LLR data limits the size of any such effect to 3 ± 4 cm. The sensitivities to the various PPN quantities are also highlighted. (2) The geodetic precession of the lunar perigee is predicted by general relativity as a consequence of the motion of the earth-moon system about the sun; its theoretical magnitude is 19.2 mas/yr. The analysis presented here confirms this value and determines this quantity to the 2 percent level.

  14. Fast computation of derivative based sensitivities of PSHA models via algorithmic differentiation

    NASA Astrophysics Data System (ADS)

    Leövey, Hernan; Molkenthin, Christian; Scherbaum, Frank; Griewank, Andreas; Kuehn, Nicolas; Stafford, Peter

    2015-04-01

    Probabilistic seismic hazard analysis (PSHA) is the preferred tool for estimating the potential ground-shaking hazard due to future earthquakes at a site of interest. A modern PSHA is a complex framework that combines different models with possibly many inputs. Sensitivity analysis is a valuable tool for quantifying changes of a model output as inputs are perturbed, identifying critical input parameters and obtaining insight into the model behavior. Differential sensitivity analysis relies on calculating first-order partial derivatives of the model output with respect to its inputs. Moreover, derivative-based global sensitivity measures (Sobol' & Kucherenko '09) can be used in practice to detect non-essential inputs of the models, thus restricting the focus of attention to a possibly much smaller set of inputs. Nevertheless, obtaining first-order partial derivatives of complex models with traditional approaches can be very challenging, and usually increases the computational complexity linearly with the number of inputs appearing in the models. In this study we show how Algorithmic Differentiation (AD) tools can be used in a complex framework such as PSHA to successfully estimate derivative-based sensitivities, as is done in various other domains such as meteorology or aerodynamics, with no significant increase in the computational complexity required for the original computations. First we demonstrate the feasibility of the AD methodology by comparing AD-derived sensitivities to analytically derived sensitivities for a basic PSHA case using a simple ground-motion prediction equation. In a second step, we derive sensitivities via AD for a more complex PSHA study using a ground-motion attenuation relation based on a stochastic method to simulate strong motion. The presented approach is general enough to accommodate more advanced PSHA studies of higher complexity.
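    The idea behind algorithmic differentiation can be sketched with a minimal forward-mode implementation using dual numbers; the function below is a hypothetical stand-in, not a real ground-motion prediction equation.

```python
import math

class Dual:
    """Minimal forward-mode algorithmic differentiation via dual numbers:
    carries a value and its derivative through every operation."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__
    def __pow__(self, n):
        return Dual(self.val ** n, n * self.val ** (n - 1) * self.dot)

def exp(x):
    if isinstance(x, Dual):
        return Dual(math.exp(x.val), math.exp(x.val) * x.dot)
    return math.exp(x)

# Hypothetical stand-in for a ground-motion term: f(m, r) = exp(0.8*m) / r^2
def f(m, r):
    return exp(0.8 * m) * r ** -2

# Exact partial derivative w.r.t. m at (m, r) = (6.0, 10.0), no extra model runs
d_dm = f(Dual(6.0, 1.0), Dual(10.0, 0.0))
```

    `d_dm.dot` holds the derivative to machine precision, which is the AD advantage over finite differences that the abstract exploits at scale.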

  15. Probabilistic boundary element method

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.; Raveendra, S. T.

    1989-01-01

    The purpose of the Probabilistic Structural Analysis Method (PSAM) project is to develop structural analysis capabilities for the design analysis of advanced space propulsion system hardware. The boundary element method (BEM) is used as the basis of the Probabilistic Advanced Analysis Methods (PADAM) which is discussed. The probabilistic BEM code (PBEM) is used to obtain the structural response and sensitivity results to a set of random variables. As such, PBEM performs analogous to other structural analysis codes such as finite elements in the PSAM system. For linear problems, unlike the finite element method (FEM), the BEM governing equations are written at the boundary of the body only, thus, the method eliminates the need to model the volume of the body. However, for general body force problems, a direct condensation of the governing equations to the boundary of the body is not possible and therefore volume modeling is generally required.

  16. Ellipsoidal analysis of coordination polyhedra

    PubMed Central

    Cumby, James; Attfield, J. Paul

    2017-01-01

    The idea of the coordination polyhedron is essential to understanding chemical structure. Simple polyhedra in crystalline compounds are often deformed due to structural complexity or electronic instabilities so distortion analysis methods are useful. Here we demonstrate that analysis of the minimum bounding ellipsoid of a coordination polyhedron provides a general method for studying distortion, yielding parameters that are sensitive to various orders in metal oxide examples. Ellipsoidal analysis leads to discovery of a general switching of polyhedral distortions at symmetry-disallowed transitions in perovskites that may evidence underlying coordination bistability, and reveals a weak off-centre ‘d5 effect' for Fe3+ ions that could be exploited in multiferroics. Separating electronic distortions from intrinsic deformations within the low temperature superstructure of magnetite provides new insights into the charge and trimeron orders. Ellipsoidal analysis can be useful for exploring local structure in many materials such as coordination complexes and frameworks, organometallics and organic molecules. PMID:28146146
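    A minimum bounding ellipsoid of a coordination polyhedron can be computed with Khachiyan's standard algorithm; this is a common implementation sketch, not necessarily the authors' code, and the toy polyhedron below is assumed.

```python
import numpy as np

def min_bounding_ellipsoid(P, tol=1e-6, max_iter=20000):
    """Khachiyan's algorithm for the minimum-volume enclosing ellipsoid of
    the n x d point set P; returns (c, A) with (x - c)^T A (x - c) <= 1."""
    n, d = P.shape
    Q = np.vstack([P.T, np.ones(n)])   # lift points to homogeneous coordinates
    u = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        X = Q @ np.diag(u) @ Q.T
        M = np.einsum("ij,ji->i", Q.T @ np.linalg.inv(X), Q)
        j = np.argmax(M)
        step = (M[j] - d - 1.0) / ((d + 1.0) * (M[j] - 1.0))
        new_u = (1.0 - step) * u
        new_u[j] += step
        converged = np.linalg.norm(new_u - u) < tol
        u = new_u
        if converged:
            break
    c = P.T @ u
    A = np.linalg.inv(P.T @ np.diag(u) @ P - np.outer(c, c)) / d
    return c, A

# Vertices of a distorted octahedron plus one interior point (toy polyhedron)
pts = np.array([[1.2, 0, 0], [-1.2, 0, 0], [0, 1.0, 0], [0, -1.0, 0],
                [0, 0, 0.8], [0, 0, -0.8], [0.5, 0.5, 0.3]], float)
c, A = min_bounding_ellipsoid(pts)
radii = np.sort(1.0 / np.sqrt(np.linalg.eigvalsh(A)))   # principal semi-axes
```

    The ratios of the semi-axes give shape-sensitive distortion parameters of the kind the abstract uses to compare polyhedra.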

  17. Critical factors determining the quantification capability of matrix-assisted laser desorption/ionization– time-of-flight mass spectrometry

    PubMed Central

    Wang, Chia-Chen; Lai, Yin-Hung; Ou, Yu-Meng; Chang, Huan-Tsung; Wang, Yi-Sheng

    2016-01-01

    Quantitative analysis with mass spectrometry (MS) is important but challenging. Matrix-assisted laser desorption/ionization (MALDI) coupled with time-of-flight (TOF) MS offers superior sensitivity, resolution and speed, but such techniques have numerous disadvantages that hinder quantitative analyses. This review summarizes essential obstacles to analyte quantification with MALDI-TOF MS, including the complex ionization mechanism of MALDI, sensitive characteristics of the applied electric fields and the mass-dependent detection efficiency of ion detectors. General quantitative ionization and desorption interpretations of ion production are described. Important instrument parameters and available methods of MALDI-TOF MS used for quantitative analysis are also reviewed. This article is part of the themed issue ‘Quantitative mass spectrometry’. PMID:27644968

  18. Rapid solution of large-scale systems of equations

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1994-01-01

    The analysis and design of complex aerospace structures requires the rapid solution of large systems of linear and nonlinear equations, eigenvalue extraction for buckling, vibration and flutter modes, structural optimization and design sensitivity calculation. Computers with multiple processors and vector capabilities can offer substantial computational advantages over traditional scalar computers for these analyses. These computers fall into two categories: shared memory computers and distributed memory computers. This presentation covers general-purpose, highly efficient algorithms for generation/assembly of element matrices, solution of systems of linear and nonlinear equations, eigenvalue and design sensitivity analysis, and optimization. All algorithms are coded in FORTRAN for shared memory computers and many are adapted to distributed memory computers. The capability and numerical performance of these algorithms will be addressed.

  19. Analysis of Composite Panels Subjected to Thermo-Mechanical Loads

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Peters, Jeanne M.

    1999-01-01

    The results of a detailed study of the effect of a cutout on the nonlinear response of curved unstiffened panels are presented. The panels are subjected to a temperature gradient through the thickness combined with pressure loading and edge shortening or edge shear. The analysis is based on a first-order, shear deformation, Sanders-Budiansky-type shell theory with the effects of large displacements, moderate rotations, transverse shear deformation, and laminated anisotropic material behavior included. A mixed formulation is used with the fundamental unknowns consisting of the generalized displacements and the stress resultants of the panel. The nonlinear displacements, strain energy, principal strains, transverse shear stresses, transverse shear strain energy density, and their hierarchical sensitivity coefficients are evaluated. The hierarchical sensitivity coefficients measure the sensitivity of the nonlinear response to variations in the panel parameters, as well as in the material properties of the individual layers. Numerical results are presented for cylindrical panels and show the effects of variations in the loading and the size of the cutout on the global and local response quantities as well as their sensitivity to changes in the various panel, layer, and micromechanical parameters.

  20. Analysis of JPSS J1 VIIRS Polarization Sensitivity Using the NIST T-SIRCUS

    NASA Technical Reports Server (NTRS)

    McIntire, Jeffrey W.; Young, James B.; Moyer, David; Waluschka, Eugene; Oudrari, Hassan; Xiong, Xiaoxiong

    2015-01-01

    The polarization sensitivity of the Joint Polar Satellite System (JPSS) J1 Visible Infrared Imaging Radiometer Suite (VIIRS) measured pre-launch using a broadband source was observed to be larger than expected for many reflective bands. Ray trace modeling predicted that the observed polarization sensitivity was the result of larger diattenuation at the edges of the focal plane filter spectral bandpass. Additional ground measurements were performed using a monochromatic source (the NIST T-SIRCUS) to input linearly polarized light at a number of wavelengths across the bandpass of two VIIRS spectral bands and two scan angles. This work describes the data processing, analysis, and results derived from the T-SIRCUS measurements, comparing them with broadband measurements. Results have shown that the observed degree of linear polarization, when weighted by the sensor's spectral response function, is generally larger on the edges and smaller in the center of the spectral bandpass, as predicted. However, phase angle changes in the center of the bandpass differ between model and measurement. Integration of the monochromatic polarization sensitivity over wavelength produced results consistent with the broadband source measurements, for all cases considered.

  1. Physiologically based pharmacokinetic modeling of a homologous series of barbiturates in the rat: a sensitivity analysis.

    PubMed

    Nestorov, I A; Aarons, L J; Rowland, M

    1997-08-01

    Sensitivity analysis studies the effects of the inherent variability and uncertainty in model parameters on the model outputs and may be a useful tool at all stages of the pharmacokinetic modeling process. The present study examined the sensitivity of a whole-body physiologically based pharmacokinetic (PBPK) model for the distribution kinetics of nine 5-n-alkyl-5-ethyl barbituric acids in arterial blood and 14 tissues (lung, liver, kidney, stomach, pancreas, spleen, gut, muscle, adipose, skin, bone, heart, brain, testes) after i.v. bolus administration to rats. The aims were to obtain new insights into the model used, to rank the model parameters involved according to their impact on the model outputs and to study the changes in the sensitivity induced by the increase in the lipophilicity of the homologues on ascending the series. Two approaches for sensitivity analysis have been implemented. The first, based on the Matrix Perturbation Theory, uses a sensitivity index defined as the normalized sensitivity of the 2-norm of the model compartmental matrix to perturbations in its entries. The second approach uses the traditional definition of the normalized sensitivity function as the relative change in a model state (a tissue concentration) corresponding to a relative change in a model parameter. Autosensitivity has been defined as sensitivity of a state to any of its parameters; cross-sensitivity as the sensitivity of a state to any other states' parameters. Using the two approaches, the sensitivity of representative tissue concentrations (lung, liver, kidney, stomach, gut, adipose, heart, and brain) to the following model parameters: tissue-to-unbound plasma partition coefficients, tissue blood flows, unbound renal and intrinsic hepatic clearance, permeability surface area product of the brain, have been analyzed. Both the tissues and the parameters were ranked according to their sensitivity and impact. 
The following general conclusions were drawn: (i) the overall sensitivity of the system to all parameters involved is small due to the weak connectivity of the system structure; (ii) the time course of both the auto- and cross-sensitivity functions for all tissues depends on the dynamics of the tissues themselves, e.g., the higher the perfusion of a tissue, the higher are both its cross-sensitivity to other tissues' parameters and the cross-sensitivities of other tissues to its parameters; and (iii) with a few exceptions, there is not a marked influence of the lipophilicity of the homologues on either the pattern or the values of the sensitivity functions. The estimates of the sensitivity and the subsequent tissue and parameter rankings may be extended to other drugs sharing the same common structure of the whole-body PBPK model and having similar model parameters. Results also show that the computationally simple Matrix Perturbation Analysis should be used only when an initial idea about the sensitivity of a system is required. If comprehensive information regarding the sensitivity is needed, the numerically expensive Direct Sensitivity Analysis should be used.
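The second approach above (the normalized sensitivity function) is straightforward to approximate numerically. Below is a minimal sketch, assuming a toy one-compartment i.v. bolus model in place of the paper's whole-body PBPK model; for this toy model the analytic normalized sensitivity to the elimination constant k is exactly -k*t.

```python
import math

def concentration(t, dose=1.0, volume=1.0, k=0.1):
    """Toy one-compartment i.v. bolus model: C(t) = (dose/volume) * exp(-k*t)."""
    return (dose / volume) * math.exp(-k * t)

def normalized_sensitivity(t, param, value, rel_step=1e-4, **fixed):
    """Normalized sensitivity S = (p/C) * dC/dp, via central finite differences."""
    base = concentration(t, **dict(fixed, **{param: value}))
    up = concentration(t, **dict(fixed, **{param: value * (1 + rel_step)}))
    down = concentration(t, **dict(fixed, **{param: value * (1 - rel_step)}))
    dC_dp = (up - down) / (2 * value * rel_step)
    return (value / base) * dC_dp

# For this model, S with respect to k is -k*t: at t=5, k=0.1 the value is -0.5
s_k = normalized_sensitivity(5.0, "k", 0.1)
```

The same finite-difference pattern applies to any state/parameter pair; the Direct Sensitivity Analysis mentioned in the abstract is costlier because it repeats this for every parameter of every state.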

  2. The utility of hand transplantation in hand amputee patients.

    PubMed

    Alolabi, Noor; Chuback, Jennifer; Grad, Sharon; Thoma, Achilles

    2015-01-01

    To measure the desirable health outcome, termed utility, and the expected quality-adjusted life years (QALYs) gained with hand composite tissue allotransplantation (CTA) using hand amputee patients and the general public. Using the standard gamble (SG) and time trade-off (TTO) techniques, utilities were obtained from 30 general public participants and 12 amputee patients. The health utility and net QALYs gained or lost with transplantation were computed. A sensitivity analysis was conducted to account for the effects of lifelong immunosuppression on the life expectancy of transplant recipients. Higher scores represent greater utility. Hand amputation mean health utility as measured by the SG and TTO methods, respectively, was 0.72 and 0.80 for the general public and 0.69 and 0.70 for hand amputees. In comparison, hand CTA mean health utility was 0.74 and 0.82 for the general public and 0.83 and 0.86 for amputees. Hand CTA imparted an expected gain of 0.9 QALYs (SG and TTO) in the general public and 7.0 (TTO) and 7.8 (SG) QALYs in hand amputees. A loss of at least 1.7 QALYs was demonstrated when decreasing the life expectancy in the sensitivity analysis in the hand amputee group. Hand amputee patients did not show a preference toward hand CTA with its inherent risks. With this procedure being increasingly adopted worldwide, the benefits must be carefully weighed against the risks of lifelong immunosuppressive therapy. This study does not show clear benefit to advocate hand CTA. Copyright © 2015 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
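The QALY arithmetic behind these figures is simple: expected QALYs are utility multiplied by life-years, and the gain is the transplant scenario minus the status quo. In this sketch the TTO utilities for amputees (0.70 and 0.86) come from the abstract, but the 49-year remaining life expectancy is an assumption chosen only to reproduce the reported ~7.8-QALY gain, and the shortened 38-year figure is likewise illustrative.

```python
def qaly_gain(u_current, u_transplant, years_current, years_transplant):
    """QALYs = utility x life-years; gain is transplant minus status quo."""
    return u_transplant * years_transplant - u_current * years_current

# TTO utilities for hand amputees from the abstract; the 49-year remaining
# life expectancy is an assumption, not a figure from the study.
gain = qaly_gain(0.70, 0.86, 49, 49)

# Sensitivity analysis: if lifelong immunosuppression shortened remaining
# life expectancy to ~38 years, the gain turns into a net loss.
loss = qaly_gain(0.70, 0.86, 49, 38)
```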

  3. IgE sensitization in relation to preschool eczema and filaggrin mutation.

    PubMed

    Johansson, Emma Kristin; Bergström, Anna; Kull, Inger; Lind, Tomas; Söderhäll, Cilla; van Hage, Marianne; Wickman, Magnus; Ballardini, Natalia; Wahlgren, Carl-Fredrik

    2017-12-01

    Eczema (atopic dermatitis) is associated with an increased risk of having IgE antibodies. IgE sensitization can occur through an impaired skin barrier. Filaggrin gene (FLG) mutation is associated with eczema and possibly also with IgE sensitization. We sought to explore the longitudinal relation between preschool eczema (PSE), FLG mutation, or both and IgE sensitization in childhood. A total of 3201 children from the BAMSE (Children Allergy Milieu Stockholm Epidemiology) birth cohort recruited from the general population were included. Regular parental questionnaires identified children with eczema. Blood samples were collected at 4, 8, and 16 years of age for analysis of specific IgE. FLG mutation analysis was performed on 1890 of the children. PSE was associated with IgE sensitization to both food allergens and aeroallergens up to age 16 years (overall adjusted odds ratio, 2.30; 95% CI, 2.00-2.66). This association was even stronger among children with persistent PSE. FLG mutation was associated with IgE sensitization to peanut at age 4 years (adjusted odds ratio, 1.88; 95% CI, 1.03-3.44) but not to other allergens up to age 16 years. FLG mutation and PSE were not effect modifiers for the association between IgE sensitization and PSE or FLG mutation, respectively. Sensitized children with PSE were characterized by means of polysensitization, but no other specific IgE sensitization patterns were found. PSE is associated with IgE sensitization to both food allergens and aeroallergens up to 16 years of age. FLG mutation is associated with IgE sensitization to peanut but not to other allergens. Sensitized children with preceding PSE are more often polysensitized. Copyright © 2017 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.

  4. Pain sensitivity mediates the relationship between stress and headache intensity in chronic tension-type headache

    PubMed Central

    Cathcart, Stuart; Bhullar, Navjot; Immink, Maarten; Della Vedova, Chris; Hayball, John

    2012-01-01

    BACKGROUND: A central model for chronic tension-type headache (CTH) posits that stress contributes to headache, in part, by aggravating existing hyperalgesia in CTH sufferers. The prediction from this model that pain sensitivity mediates the relationship between stress and headache activity has not yet been examined. OBJECTIVE: To determine whether pain sensitivity mediates the relationship between stress and prospective headache activity in CTH sufferers. METHOD: Self-reported stress, pain sensitivity and prospective headache activity were measured in 53 CTH sufferers recruited from the general population. Pain sensitivity was modelled as a mediator between stress and headache activity, and tested using a nonparametric bootstrap analysis. RESULTS: Pain sensitivity significantly mediated the relationship between stress and headache intensity. CONCLUSIONS: The results of the present study support the central model for CTH, which posits that stress contributes to headache, in part, by aggravating existing hyperalgesia in CTH sufferers. Implications for the mechanisms and treatment of CTH are discussed. PMID:23248808
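The mediation test described above can be sketched with a product-of-coefficients estimate and a nonparametric percentile bootstrap. This is a generic illustration on synthetic data, not the study's analysis or data.

```python
import numpy as np

rng = np.random.default_rng(0)

def indirect_effect(stress, pain, headache):
    """a*b indirect effect: a = stress->pain slope, b = pain coefficient in a
    regression of headache on both stress and pain."""
    a = np.polyfit(stress, pain, 1)[0]
    X = np.column_stack([np.ones_like(stress), stress, pain])
    b = np.linalg.lstsq(X, headache, rcond=None)[0][2]
    return a * b

# Synthetic data in which pain fully transmits the stress effect
n = 200
stress = rng.normal(size=n)
pain = 0.6 * stress + rng.normal(scale=0.5, size=n)
headache = 0.8 * pain + rng.normal(scale=0.5, size=n)

# Nonparametric bootstrap: resample cases, recompute a*b, take percentile CI
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    boot.append(indirect_effect(stress[idx], pain[idx], headache[idx]))
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
# Mediation is supported when the CI excludes zero
```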

  5. Regional vs. general anesthesia for total knee and hip replacement: An analysis of postoperative pain perception from the international PAIN OUT registry.

    PubMed

    Donauer, Katharina; Bomberg, Hagen; Wagenpfeil, Stefan; Volk, Thomas; Meissner, Winfried; Wolf, Alexander

    2018-05-14

Total hip and knee replacements are common surgeries, and an optimal pain treatment is essential for early rehabilitation. Since data from randomized controlled trials on the use of regional anesthesia in joint replacements of the lower extremities are conflicting, we analyzed the international PAIN OUT registry for comparison of regional anesthesia vs. general anesthesia regarding pain and morphine consumption on the first postoperative day. International Classification of Diseases-9 (ICD-9) codes were used to identify 2,346 cases for knee and 2,315 for hip arthroplasty between 2010 and 2016 from the PAIN OUT registry. Those were grouped according to anesthesia provided (general, regional, and a combination of both). On the first day after surgery, pain levels and opioid consumption were compared. Adjusted odds ratios (aOR [95% CI]) were calculated with logistic regression, and propensity matching was used as a sensitivity analysis. After adjustment for confounders, regional anesthesia was associated with reduced opioid consumption (0.20 [0.13-0.30], p<0.001) and less pain (0.53 [0.36-0.78], p=0.001) than general anesthesia in knee surgery. In hip surgery, regional anesthesia was only associated with reduced opioid consumption (0.17 [0.11-0.26], p<0.001), whereas pain was comparable (1.23 [0.94-1.61], p=0.1). Results from a propensity-matched sensitivity analysis were similar. In total knee arthroplasty, regional anesthesia was associated with less pain and lower opioid consumption. In total hip arthroplasty, regional anesthesia was associated with lower opioid consumption, however not with reduced pain levels. This article is protected by copyright. All rights reserved.
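For readers unfamiliar with the odds ratios reported here, the unadjusted version with a Wald confidence interval is easy to compute from a 2x2 table. The counts below are hypothetical, not the registry's data, and the study's aORs came from logistic regression with confounder adjustment rather than this crude calculation.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% Wald CI from a 2x2 table:
    a/b = events/non-events in group 1, c/d = events/non-events in group 2."""
    oratio = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    low = math.exp(math.log(oratio) - z * se)
    high = math.exp(math.log(oratio) + z * se)
    return oratio, low, high

# Hypothetical counts: day-1 opioid use under regional vs. general anesthesia
oratio, low, high = odds_ratio_ci(30, 170, 90, 110)
# An interval entirely below 1 would favor regional anesthesia
```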

  6. Modeling Nitrogen Dynamics in a Waste Stabilization Pond System Using Flexible Modeling Environment with MCMC

    PubMed Central

    Mukhtar, Hussnain; Lin, Yu-Pin; Shipin, Oleg V.; Petway, Joy R.

    2017-01-01

This study presents an approach for obtaining realization sets of parameters for nitrogen removal in a pilot-scale waste stabilization pond (WSP) system. The proposed approach was designed for optimal parameterization, local sensitivity analysis, and global uncertainty analysis of a dynamic simulation model for the WSP by using the R software package Flexible Modeling Environment (R-FME) with the Markov chain Monte Carlo (MCMC) method. Additionally, generalized likelihood uncertainty estimation (GLUE) was integrated into the FME to evaluate the major parameters that affect the simulation outputs in the study WSP. Comprehensive modeling analysis was used to simulate and assess nine parameters and the concentrations of ON-N, NH3-N and NO3-N. Results indicate that the integrated FME-GLUE-based model, with good Nash–Sutcliffe coefficients (0.53–0.69) and correlation coefficients (0.76–0.83), successfully simulates the concentrations of ON-N, NH3-N and NO3-N. Moreover, the Arrhenius constant was the only parameter to which the ON-N and NH3-N simulations were sensitive. However, global sensitivity analysis showed that the ON-N and NO3-N simulations were also sensitive to the Nitrosomonas growth rate, the denitrification constant, and the maximum growth rate at 20 °C. PMID:28704958
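The GLUE step can be illustrated compactly: sample the parameter prior, score each simulation with a Nash-Sutcliffe coefficient, and keep the "behavioral" sets above a threshold. The first-order NH3-N decay model and all numbers below are toy stand-ins for the study's WSP model, not its actual code or data.

```python
import numpy as np

rng = np.random.default_rng(42)

def nash_sutcliffe(obs, sim):
    """NSE = 1 - SSE/SST; 1 is a perfect fit, <= 0 is no better than the mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

def simulate_nh3(t, k):
    """Toy stand-in for the WSP nitrogen model: first-order NH3-N decay."""
    return 10.0 * np.exp(-k * t)

# Pseudo-observations generated with k = 0.3 plus measurement noise
t = np.linspace(0.0, 10.0, 25)
obs = simulate_nh3(t, 0.3) + rng.normal(scale=0.2, size=t.size)

# GLUE: sample the prior, keep "behavioral" parameter sets with NSE > 0.9
k_samples = rng.uniform(0.05, 0.8, size=5000)
nse = np.array([nash_sutcliffe(obs, simulate_nh3(t, k)) for k in k_samples])
behavioral = k_samples[nse > 0.9]
# Likelihood-weighted estimate from the behavioral sets
k_hat = np.average(behavioral, weights=nse[nse > 0.9])
```

The spread of `behavioral` is the GLUE uncertainty band; the study combined this with MCMC-based calibration in R-FME.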

  7. Methylation-sensitive amplified polymorphism analysis of Verticillium wilt-stressed cotton (Gossypium).

    PubMed

    Wang, W; Zhang, M; Chen, H D; Cai, X X; Xu, M L; Lei, K Y; Niu, J H; Deng, L; Liu, J; Ge, Z J; Yu, S X; Wang, B H

    2016-10-06

In this study, a methylation-sensitive amplification polymorphism analysis system was used to analyze DNA methylation levels in three cotton accessions. Two disease-sensitive near-isogenic lines, PD94042 and IL41, and one disease-resistant Gossypium mustelinum accession were exposed to Verticillium wilt to investigate molecular disease-resistance mechanisms in cotton. We observed multiple DNA methylation types across the three accessions following Verticillium wilt exposure, including hypomethylation, hypermethylation, and other patterns. In general, the global DNA methylation level was significantly increased in the disease-resistant accession G. mustelinum following disease exposure. In contrast, there was no significant difference in the disease-sensitive accession PD94042, and a significant decrease was observed in IL41. Our results suggest that disease-resistant cotton might employ a mechanism to increase methylation level in response to disease stress. The differing methylation patterns, together with the increase in global DNA methylation level, might play important roles in tolerance to Verticillium wilt in cotton. Through cloning and analysis of differentially methylated DNA sequences, we also identified several genes that may contribute to disease resistance in cotton. Our results reveal an effect of DNA methylation on cotton disease resistance and identify genes that play important roles, which may inform future molecular breeding of disease-resistant cotton.

  8. Review-of-systems questionnaire as a predictive tool for psychogenic nonepileptic seizures.

    PubMed

    Robles, Liliana; Chiang, Sharon; Haneef, Zulfi

    2015-04-01

    Patients with refractory epilepsy undergo video-electroencephalography for seizure characterization, among whom approximately 10-30% will be discharged with the diagnosis of psychogenic nonepileptic seizures (PNESs). Clinical PNES predictors have been described but in general are not sensitive or specific. We evaluated whether multiple complaints in a routine review-of-system (ROS) questionnaire could serve as a sensitive and specific marker of PNESs. We performed a retrospective analysis of a standardized ROS questionnaire completed by patients with definite PNESs and epileptic seizures (ESs) diagnosed in our adult epilepsy monitoring unit. A multivariate analysis of covariance (MANCOVA) was used to determine whether groups with PNES and ES differed with respect to the percentage of complaints in the ROS questionnaire. Tenfold cross-validation was used to evaluate the predictive error of a logistic regression classifier for PNES status based on the percentage of positive complaints in the ROS questionnaire. A total of 44 patients were included for analysis. Patients with PNESs had a significantly higher number of complaints in the ROS questionnaire compared to patients with epilepsy. A threshold of 17% positive complaints achieved a 78% specificity and 85% sensitivity for discriminating between PNESs and ESs. We conclude that the routine ROS questionnaire may be a sensitive and specific predictive tool for discriminating between PNESs and ESs. Published by Elsevier Inc.
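Given per-patient fractions of positive ROS complaints, the sensitivity and specificity at a cutoff fall out of simple counting. The complaint fractions below are hypothetical; the abstract reports only the 17% cutoff and the resulting 85%/78% figures.

```python
def sens_spec(pnes_scores, es_scores, threshold):
    """Sensitivity/specificity for calling PNES when the fraction of positive
    ROS complaints meets or exceeds the threshold."""
    true_pos = sum(s >= threshold for s in pnes_scores)
    true_neg = sum(s < threshold for s in es_scores)
    return true_pos / len(pnes_scores), true_neg / len(es_scores)

# Hypothetical complaint fractions for PNES and epileptic-seizure patients
pnes = [0.25, 0.30, 0.18, 0.40, 0.22, 0.15]
es = [0.05, 0.10, 0.20, 0.08, 0.12, 0.03]
sensitivity, specificity = sens_spec(pnes, es, 0.17)
```

Sweeping `threshold` over the observed scores traces out the ROC curve from which such a cutoff is usually chosen.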

  9. Estuarine sediment toxicity tests on diatoms: Sensitivity comparison for three species

    NASA Astrophysics Data System (ADS)

    Moreno-Garrido, Ignacio; Lubián, Luis M.; Jiménez, Begoña; Soares, Amadeu M. V. M.; Blasco, Julián

    2007-01-01

    Experimental populations of three marine and estuarine diatoms were exposed to sediments with different levels of pollutants, collected from the Aveiro Lagoon (NW of Portugal). The species selected were Cylindrotheca closterium, Phaeodactylum tricornutum and Navicula sp. Previous experiments were designed to determine the influence of the sediment particle size distribution on growth of the species assayed. Percentage of silt-sized sediment affect to growth of the selected species in the experimental conditions: the higher percentage of silt-sized sediment, the lower growth. In any case, percentages of silt-sized sediment less than 10% did not affect growth. In general, C. closterium seems to be slightly more sensitive to the selected sediments than the other two species. Two groups of sediment samples were determined as a function of the general response of the exposed microalgal populations: three of the six samples used were more toxic than the other three. Chemical analysis of the samples was carried out in order to determine the specific cause of differences in toxicity. After a statistical analysis, concentrations of Sn, Zn, Hg, Cu and Cr (among all physico-chemical analyzed parameters), in order of importance, were the most important factors that divided the two groups of samples (more and less toxic samples). Benthic diatoms seem to be sensitive organisms in sediment toxicity tests. Toxicity data from bioassays involving microphytobentos should be taken into account when environmental risks are calculated.

  10. Landscape Analysis of Nutrition-sensitive Agriculture Policy Development in Senegal.

    PubMed

    Lachat, Carl; Nago, Eunice; Ka, Abdoulaye; Vermeylen, Harm; Fanzo, Jessica; Mahy, Lina; Wüstefeld, Marzella; Kolsteren, Patrick

    2015-06-01

Unlocking the agricultural potential of Africa offers a genuine opportunity to address malnutrition and drive development of the continent. Using Senegal as a case study, we aimed to identify gaps and opportunities to strengthen agricultural policies with nutrition-sensitive approaches. We carried out a systematic analysis of 13 policy documents related to food production, agriculture, food security, or nutrition. Next, we collected data during a participatory analysis with 32 national stakeholders and in-depth interviews with 15 national experts from technical directorates of the different ministries that deal with agriculture and food production. The current agricultural context has various elements that are considered to enhance its nutrition sensitivity. On average, 8.3 of the 17 Food and Agriculture Organization guiding principles for agriculture programming for nutrition were included in the policies reviewed. Ensuring food security and increasing dietary diversity were considered to be the principal objectives of agricultural policies. Although there was considerable agreement that agriculture can contribute to nutrition, current agricultural programs generally do not target communities on the basis of their nutritional vulnerability. Agricultural programs were reported to have specific components targeting female beneficiaries but were generally not used as delivery platforms for nutritional interventions. The findings of this study indicate the need for a coherent policy environment across the food system that aligns recommendations at the national level with local action on the ground. In addition, specific activities are needed to develop a shared understanding of nutrition and public health nutrition within the agricultural community in Senegal. © The Author(s) 2015.

  11. Global sensitivity analysis of GEOS-Chem modeled ozone and hydrogen oxides during the INTEX campaigns

    NASA Astrophysics Data System (ADS)

    Christian, Kenneth E.; Brune, William H.; Mao, Jingqiu; Ren, Xinrong

    2018-02-01

    Making sense of modeled atmospheric composition requires not only comparison to in situ measurements but also knowing and quantifying the sensitivity of the model to its input factors. Using a global sensitivity method involving the simultaneous perturbation of many chemical transport model input factors, we find the model uncertainty for ozone (O3), hydroxyl radical (OH), and hydroperoxyl radical (HO2) mixing ratios, and apportion this uncertainty to specific model inputs for the DC-8 flight tracks corresponding to the NASA Intercontinental Chemical Transport Experiment (INTEX) campaigns of 2004 and 2006. In general, when uncertainties in modeled and measured quantities are accounted for, we find agreement between modeled and measured oxidant mixing ratios with the exception of ozone during the Houston flights of the INTEX-B campaign and HO2 for the flights over the northernmost Pacific Ocean during INTEX-B. For ozone and OH, modeled mixing ratios were most sensitive to a bevy of emissions, notably lightning NOx, various surface NOx sources, and isoprene. HO2 mixing ratios were most sensitive to CO and isoprene emissions as well as the aerosol uptake of HO2. With ozone and OH being generally overpredicted by the model, we find better agreement between modeled and measured vertical profiles when reducing NOx emissions from surface as well as lightning sources.

  12. Patient experience of general practice and use of emergency hospital services in England: regression analysis of national cross-sectional time series data.

    PubMed

    Cowling, Thomas E; Majeed, Azeem; Harris, Matthew J

    2018-01-22

    The UK Government has introduced several national policies to improve access to primary care. We examined associations between patient experience of general practice and rates of visits to accident and emergency (A&E) departments and emergency hospital admissions in England. The study included 8124 general practices between 2011-2012 and 2013-2014. Outcome measures were annual rates of A&E visits and emergency admissions by general practice population, according to administrative hospital records. Explanatory variables included three patient experience measures from the General Practice Patient Survey: practice-level means of experience of making an appointment, satisfaction with opening hours and overall experience (on 0-100 scales). The main analysis used random-effects Poisson regression for cross-sectional time series. Five sensitivity analyses examined changes in model specification. Mean practice-level rates of A&E visits and emergency admissions increased from 2011-2012 to 2013-2014 (310.3-324.4 and 98.8-102.9 per 1000 patients). Each patient experience measure decreased; for example, mean satisfaction with opening hours was 79.4 in 2011-2012 and 76.6 in 2013-2014. In the adjusted regression analysis, an SD increase in experience of making appointments (equal to 9 points) predicted decreases of 1.8% (95% CI -2.4% to -1.2%) in A&E visit rates and 1.4% (95% CI -1.9% to -0.9%) in admission rates. This equalled 301 174 fewer A&E visits and 74 610 fewer admissions nationally per year. Satisfaction with opening hours and overall experience were not consistently associated with either outcome measure across the main and sensitivity analyses. Associations between patient experience of general practice and use of emergency hospital services were small or inconsistent. In England, realistic short-term improvements in patient experience of general practice may only have modest effects on A&E visits and emergency admissions. 
© Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
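In a Poisson regression, the percentage change in a rate for a one-SD increase in a predictor is (exp(beta x SD) - 1) x 100. The sketch below backs out the per-point coefficient implied by the reported -1.8% per 9-point SD; the coefficient itself is derived here for illustration, not reported in the paper.

```python
import math

def pct_change_per_sd(beta, sd):
    """Percent change in a Poisson rate for a one-SD increase in the predictor:
    rate ratio = exp(beta * sd)."""
    return (math.exp(beta * sd) - 1.0) * 100.0

# Back out the per-point coefficient implied by -1.8% per 9-point SD increase
beta = math.log(1.0 - 0.018) / 9.0
change = pct_change_per_sd(beta, 9.0)   # recovers -1.8
```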

  13. (abstract) Using TOPEX/Poseidon Sea Level Observations to Test the Sensitivity of an Ocean Model to Wind Forcing

    NASA Technical Reports Server (NTRS)

    Fu, Lee-Lueng; Chao, Yi

    1996-01-01

    It has been demonstrated that current-generation global ocean general circulation models (OGCM) are able to simulate large-scale sea level variations fairly well. In this study, a GFDL/MOM-based OGCM was used to investigate its sensitivity to different wind forcing. Simulations of global sea level using wind forcing from the ERS-1 Scatterometer and the NMC operational analysis were compared to the observations made by the TOPEX/Poseidon (T/P) radar altimeter for a two-year period. The result of the study has demonstrated the sensitivity of the OGCM to the quality of wind forcing, as well as the synergistic use of two spaceborne sensors in advancing the study of wind-driven ocean dynamics.

  14. Sensitivity analysis of machine-learning models of hydrologic time series

    NASA Astrophysics Data System (ADS)

    O'Reilly, A. M.

    2017-12-01

Sensitivity analysis traditionally has been applied to assessing model response to perturbations in model parameters, where the parameters are those model input variables adjusted during calibration. Unlike physics-based models, where parameters represent real phenomena, the equivalent of parameters for machine-learning models are simply mathematical "knobs" that are automatically adjusted during training/testing/verification procedures. Thus the challenge of extracting knowledge of hydrologic system functionality from machine-learning models lies in their very nature, leading to the label "black box." Sensitivity analysis of the forcing-response behavior of machine-learning models, however, can provide understanding of how the physical phenomena represented by model inputs affect the physical phenomena represented by model outputs. As part of a previous study, hybrid spectral-decomposition artificial neural network (ANN) models were developed to simulate the observed behavior of hydrologic response contained in multidecadal datasets of lake water level, groundwater level, and spring flow. Model inputs used moving window averages (MWA) to represent various frequencies and frequency-band components of time series of rainfall and groundwater use. Using these forcing time series, the MWA-ANN models were trained to predict time series of lake water level, groundwater level, and spring flow at 51 sites in central Florida, USA. A time series of sensitivities for each MWA-ANN model was produced by perturbing the forcing time series and computing the change in the response time series per unit change in the perturbation. Variations in forcing-response sensitivities are evident between types (lake, groundwater level, or spring), spatially (among sites of the same type), and temporally. Two generally common characteristics among sites are more uniform sensitivities to rainfall over time and notable increases in sensitivities to groundwater usage during significant drought periods.
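The forcing-response sensitivity described above amounts to perturbing an input series and normalizing the resulting change in the output series. A generic sketch follows, with a toy moving-average model standing in for the MWA-ANN models (which are not reproduced in the abstract):

```python
import numpy as np

def forcing_response_sensitivity(model, forcing, eps=0.01):
    """Normalized sensitivity series: relative change in the response per
    unit relative change in the forcing."""
    base = model(forcing)
    perturbed = model(forcing * (1.0 + eps))
    return (perturbed - base) / (base * eps)

def moving_average_model(x, window=5):
    """Toy stand-in for an MWA-ANN: response driven by a moving average."""
    kernel = np.ones(window) / window
    return 2.0 * np.convolve(x, kernel, mode="valid") + 1.0

# Synthetic rainfall forcing; the study's real inputs were multidecadal
# rainfall and groundwater-use series
rainfall = np.abs(np.random.default_rng(1).normal(1.0, 0.3, size=100))
sens = forcing_response_sensitivity(moving_average_model, rainfall)
```

The same wrapper works for any trained black-box model, which is the point: the sensitivity series is computed from forcing and response alone.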

  15. Field-sensitivity To Rheological Parameters

    NASA Astrophysics Data System (ADS)

    Freund, Jonathan; Ewoldt, Randy

    2017-11-01

We ask this question: where in a flow is a quantity of interest Q quantitatively sensitive to the model parameters θ describing the rheology of the fluid? This field sensitivity is computed via the numerical solution of the adjoint flow equations, as developed to expose the target sensitivity δQ/δθ(x) via the constraint of satisfying the flow equations. Our primary example is a sphere settling in Carbopol, for which we have experimental data. For this Carreau-model configuration, we simultaneously calculate how much a local change in the fluid's intrinsic time scale λ, limit viscosities η0 and η∞, and exponent n would affect the drag D. Such field sensitivities can show where different fluid physics in the model (time scales, elastic versus viscous components, etc.) are important for the target observable and generally guide model refinement based on predictive goals. In this case, the computational cost of solving the local sensitivity problem is negligible relative to the flow. The Carreau-fluid/sphere example is illustrative; the utility of field sensitivity is in the design and analysis of less intuitive flows, for which we provide some additional examples.

  16. Generalized Self-Organizing Maps for Automatic Determination of the Number of Clusters and Their Multiprototypes in Cluster Analysis.

    PubMed

    Gorzalczany, Marian B; Rudzinski, Filip

    2017-06-07

    This paper presents a generalization of self-organizing maps with 1-D neighborhoods (neuron chains) that can be effectively applied to complex cluster analysis problems. The essence of the generalization consists in introducing mechanisms that allow the neuron chain--during learning--to disconnect into subchains, to reconnect some of the subchains again, and to dynamically regulate the overall number of neurons in the system. These features enable the network--working in a fully unsupervised way (i.e., using unlabeled data without a predefined number of clusters)--to automatically generate collections of multiprototypes that are able to represent a broad range of clusters in data sets. First, the operation of the proposed approach is illustrated on some synthetic data sets. Then, this technique is tested using several real-life, complex, and multidimensional benchmark data sets available from the University of California at Irvine (UCI) Machine Learning repository and the Knowledge Extraction based on Evolutionary Learning data set repository. A sensitivity analysis of our approach to changes in control parameters and a comparative analysis with an alternative approach are also performed.

  17. Increased incidence of head and neck cancer in liver transplant recipients: a meta-analysis.

    PubMed

    Liu, Qian; Yan, Lifeng; Xu, Cheng; Gu, Aihua; Zhao, Peng; Jiang, Zhao-Yan

    2014-10-22

It is unclear whether liver transplantation is associated with an increased incidence of post-transplant head and neck cancer. This comprehensive meta-analysis evaluated the association between liver transplantation and the risk of head and neck cancer using data from all available studies. PubMed and Web of Science were systematically searched to identify all relevant publications up to March 2014. The standardized incidence ratio (SIR) and 95% confidence intervals (CIs) for the risk of head and neck cancer in liver transplant recipients were calculated. Tests for heterogeneity, sensitivity, and publication bias were also performed. Of the 964 identified articles, 10 were deemed eligible. These studies included data on 56,507 patients with a total follow-up of 129,448.9 patient-years. The SIR for head and neck cancer was 3.836-fold higher (95% CI 2.754-4.918, P < 0.001) in liver transplant recipients than in the general population. No heterogeneity or publication bias was observed. Sensitivity analysis indicated that omission of any of the studies resulted in an SIR for head and neck cancer between 3.488 (95% CI: 2.379-4.598) and 4.306 (95% CI: 3.020-5.592). Liver transplant recipients are at higher risk of developing head and neck cancer than the general population.
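The pooling and leave-one-out sensitivity analysis follow the standard fixed-effect inverse-variance method on the log scale. The per-study SIRs and variances below are hypothetical placeholders, not the ten included studies' actual data.

```python
import math

def pooled_log_sir(sirs, variances):
    """Fixed-effect inverse-variance pooling of SIRs on the log scale."""
    weights = [1.0 / v for v in variances]
    log_pooled = sum(w * math.log(s) for w, s in zip(weights, sirs)) / sum(weights)
    return math.exp(log_pooled)

# Hypothetical per-study SIRs and log-scale variances
sirs = [3.2, 4.5, 3.9, 2.8, 4.1]
variances = [0.04, 0.09, 0.05, 0.12, 0.06]

pooled = pooled_log_sir(sirs, variances)
# Leave-one-out sensitivity analysis: re-pool with each study omitted in turn
leave_one_out = [
    pooled_log_sir(sirs[:i] + sirs[i + 1:], variances[:i] + variances[i + 1:])
    for i in range(len(sirs))
]
```

A pooled estimate that stays in a narrow band across the leave-one-out runs, as reported here (3.488 to 4.306), indicates no single study drives the result.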

  18. Early, current and past pet ownership: associations with sensitization, bronchial responsiveness and allergic symptoms in school children.

    PubMed

    Anyo, G; Brunekreef, B; de Meer, G; Aarts, F; Janssen, N A H; van Vliet, P

    2002-03-01

Studies have suggested that early contact with pets may prevent the development of allergy and asthma. To study the association between early, current and past pet ownership and sensitization, bronchial responsiveness and allergic symptoms in school children. A population of almost 3000 primary school children was investigated using protocols of the International Study on Asthma and Allergies in Childhood (ISAAC). Allergic symptoms were measured using the parent-completed ISAAC questionnaire. Sensitization to common allergens was measured using skin prick tests (SPTs) and/or serum immunoglobulin E (IgE) determinations. Bronchial responsiveness was tested using a hypertonic saline challenge. Pet ownership was investigated by questionnaire. Current, past and early exposure to pets was documented separately for cats, dogs, rodents and birds. The data on current, past and early pet exposure were then related to allergic symptoms, sensitization and bronchial responsiveness. Among children currently exposed to pets, there was significantly less sensitization to cat (odds ratio (OR) = 0.69) and dog (OR = 0.63) allergens, indoor allergens in general (OR = 0.64), and outdoor allergens (OR = 0.60) compared to children who never had pets in the home. There was also less hayfever (OR = 0.66) and rhinitis (OR = 0.76). In contrast, wheeze, asthma and bronchial responsiveness were not associated with current pet ownership. Odds ratios associated with past pet ownership were generally above unity, and significant for asthma in the adjusted analysis (OR = 1.85), suggesting selective avoidance in families with sensitized and/or symptomatic children.
Pet ownership in the first two years of life only showed an inverse association with sensitization to pollen: OR = 0.71 for having had furry or feathery pets in general in the first two years of life, and OR = 0.73 for having had cats and/or dogs in the first two years of life, compared to not having had pets in the first two years of life. These results suggest that the inverse association between current pet ownership and sensitization and hayfever symptoms was partly due to the removal of pets in families with sensitized and/or symptomatic children. Pet ownership in the first two years of life only seemed to offer some protection against sensitization to pollen.

  19. Smoking paradox in the development of psoriatic arthritis among patients with psoriasis: a population-based study.

    PubMed

    Nguyen, Uyen-Sa D T; Zhang, Yuqing; Lu, Na; Louie-Gao, Qiong; Niu, Jingbo; Ogdie, Alexis; Gelfand, Joel M; LaValley, Michael P; Dubreuil, Maureen; Sparks, Jeffrey A; Karlson, Elizabeth W; Choi, Hyon K

    2018-01-01

    Smoking is associated with an increased risk of psoriatic arthritis (PsA) in the general population, but not among patients with psoriasis. We sought to clarify the possible methodological mechanisms behind this paradox. Using 1995-2015 data from The Health Improvement Network, we performed survival analysis to examine the association between smoking and incident PsA in the general population and among patients with psoriasis. We clarified the paradox using mediation analysis and conducted bias sensitivity analyses to evaluate the potential impact of index event bias and quantify its magnitude from uncontrolled/unmeasured confounders. Of 6.65 million subjects without PsA at baseline, 225 213 participants had psoriasis and 7057 developed incident PsA. Smoking was associated with an increased risk of PsA in the general population (HR 1.27; 95% CI 1.19 to 1.36), but with a decreased risk among patients with psoriasis (HR 0.91; 95% CI 0.84 to 0.99). Mediation analysis showed that the effect of smoking on the risk of PsA was mediated almost entirely through its effect on psoriasis. Bias-sensitivity analyses indicated that even when the relation of uncontrolled confounders to either smoking or PsA was modest (both HRs=~1.5), it could reverse the biased effect of smoking among patients with psoriasis (HR=0.9). In this large cohort representative of the UK general population, smoking was positively associated with PsA risk in the general population, but negatively associated among patients with psoriasis. Conditioning on a causal intermediate variable (psoriasis) may even reverse the association between smoking and PsA, potentially explaining the smoking paradox for the risk of PsA among patients with psoriasis. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  20. Three-class ROC analysis--the equal error utility assumption and the optimality of three-class ROC surface using the ideal observer.

    PubMed

    He, Xin; Frey, Eric C

    2006-08-01

    Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.
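
    The maximum-expected-utility rule at the heart of such a model can be sketched in a few lines. The utility matrix below is hypothetical and serves only to illustrate the equal error utility assumption (within each true class, both incorrect decisions share one utility value).

```python
import numpy as np

def meu_decision(posteriors, utility):
    """Return the class index whose decision maximizes expected utility.

    posteriors[i] = P(true class i | data); utility[i, j] = utility of
    deciding class j when class i is true.
    """
    return int(np.argmax(posteriors @ utility))

# Hypothetical utilities obeying the equal error utility assumption:
# the off-diagonal entries are equal within each row (true class).
U = np.array([[1.0, 0.2, 0.2],
              [0.1, 1.0, 0.1],
              [0.0, 0.0, 1.0]])

print(meu_decision(np.array([0.2, 0.5, 0.3]), U))  # -> 1
```

    Sweeping a decision rule of this form over all posteriors traces out the operating points of the reduced three-class ROC surface discussed in the abstract.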

  1. Derivatives of buckling loads and vibration frequencies with respect to stiffness and initial strain parameters

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.; Cohen, Gerald A.; Mroz, Zenon

    1990-01-01

    A uniform variational approach to sensitivity analysis of vibration frequencies and bifurcation loads of nonlinear structures is developed. Two methods of calculating the sensitivities of bifurcation buckling loads and vibration frequencies of nonlinear structures, with respect to stiffness and initial strain parameters, are presented. A direct method requires calculation of derivatives of the prebuckling state with respect to these parameters. An adjoint method bypasses the need for these derivatives by using instead the strain field associated with the second-order postbuckling state. An operator notation is used and the derivation is based on the principle of virtual work. The derivative computations are easily implemented in structural analysis programs. This is demonstrated by examples using a general purpose, finite element program and a shell-of-revolution program.
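
    For a symmetric generalized eigenproblem K(p)x = λMx, the direct method's key quantity reduces, for a simple eigenvalue, to dλ/dp = xᵀ(∂K/∂p)x / (xᵀMx) when M does not depend on p. The tiny matrices below are illustrative, not taken from the paper; the formula is checked against a central finite difference.

```python
import numpy as np

def K(p):
    # Illustrative 2-DOF stiffness matrix depending on a design parameter p
    return np.array([[2.0 + p, -1.0],
                     [-1.0,     2.0]])

M = np.eye(2)                        # mass matrix, independent of p here
dK_dp = np.array([[1.0, 0.0],        # exact derivative of K w.r.t. p
                  [0.0, 0.0]])

def lowest_mode(p):
    w, v = np.linalg.eigh(K(p))      # with M = I, eigh on K suffices
    return w[0], v[:, 0]             # smallest eigenvalue and its eigenvector

p0 = 0.5
lam, x = lowest_mode(p0)
analytic = (x @ dK_dp @ x) / (x @ M @ x)   # direct sensitivity formula

h = 1e-6                             # central finite-difference check
fd = (lowest_mode(p0 + h)[0] - lowest_mode(p0 - h)[0]) / (2 * h)
print(analytic, fd)
```

    The adjoint variants described in the paper avoid differentiating the prebuckling state; the quadratic form above is the common ingredient both approaches evaluate.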

  2. Multiple Chemical Sensitivity

    PubMed Central

    Rossi, Sabrina; Pitidis, Alessio

    2018-01-01

    Objective: A systematic analysis of roughly the last 17 years of literature on multiple chemical sensitivity (MCS) was carried out in order to detect new diagnostic and epidemiological evidence. MCS is a complex syndrome that manifests as a result of exposure to low levels of various common contaminants. Its etiology, diagnosis, and treatment are still debated among researchers. Method: PubMed, Web of Science, Scopus, and the Cochrane Library were queried, both with specific MeSH terms combined with MeSH subheadings and through free-text searches, including Google. Results: The studies were analyzed by verifying 1) the typology of study design; 2) criteria for case definition; 3) emergency department attendances and hospital admissions; and 4) analysis of the risk factors. Outlook: With this review, we offer some general considerations and hypotheses for possible future research. PMID:29111991

  3. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Ye, Ming; Walker, Anthony P.

    Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty but ignore the model uncertainty for process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating the model averaging methods into the framework of variance-based global sensitivity analysis, given that the model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is also simulated by two models of different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
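
    A minimal numerical sketch of such a process sensitivity index, with invented process models and equal model-averaging weights (nothing below comes from the study), treats everything belonging to one process (its model choice and its parameters) as a single factor and estimates Var(E[y | process]) / Var(y) by double-loop Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two candidate models per process, each with its own random parameter.
def recharge(model, p):               # converts a "precipitation" parameter
    return np.where(model == 0, 0.3 * p, 0.5 * np.sqrt(p))

def geology(model, k):                # two parameterizations of conductivity
    return np.where(model == 0, k, 2.0 * k)

def sample_recharge(n):               # model index drawn with weights 1/2, 1/2
    return rng.integers(0, 2, n), rng.uniform(1.0, 2.0, n)

def sample_geology(n):
    return rng.integers(0, 2, n), rng.uniform(0.5, 1.5, n)

def process_sensitivity(sample_target, sample_other, combine,
                        n_outer=1000, n_inner=1000):
    # Var(E[y | target process]) / Var(y): the outer loop fixes the target
    # process (model AND parameter), the inner loop averages over the rest.
    cond_means, all_y = [], []
    for _ in range(n_outer):
        tm, tp = sample_target(1)
        om, op = sample_other(n_inner)
        y = combine(tm, tp, om, op)
        cond_means.append(y.mean())
        all_y.append(y)
    return np.var(cond_means) / np.var(np.concatenate(all_y))

ps_recharge = process_sensitivity(
    sample_recharge, sample_geology,
    lambda rm, rp, gm, gk: recharge(rm, rp) * geology(gm, gk))
ps_geology = process_sensitivity(
    sample_geology, sample_recharge,
    lambda gm, gk, rm, rp: recharge(rm, rp) * geology(gm, gk))
print(ps_recharge, ps_geology)  # geology dominates in this toy setup
```

    Because the model index is sampled alongside the parameters, the resulting index folds model-choice variance and parametric variance into one importance measure, which is the core idea of the paper.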

  4. A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence.

    PubMed

    Nikoloulopoulos, Aristidis K

    2017-10-01

    A bivariate copula mixed model has been recently proposed to synthesize diagnostic test accuracy studies and it has been shown that it is superior to the standard generalized linear mixed model in this context. Here, we use trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement over the trivariate generalized linear mixed model in fit to data and makes the argument for moving to vine copula random effects models, especially because of their richness, including reflection-asymmetric tail dependence, and their computational feasibility despite their three-dimensionality.

  5. Effects of Incidental Emotions on Moral Dilemma Judgments: An Analysis Using the CNI Model.

    PubMed

    Gawronski, Bertram; Conway, Paul; Armstrong, Joel; Friesdorf, Rebecca; Hütter, Mandy

    2018-02-01

    Effects of incidental emotions on moral dilemma judgments have garnered interest because they demonstrate the context-dependent nature of moral decision-making. Six experiments (N = 727) investigated the effects of incidental happiness, sadness, and anger on responses in moral dilemmas that pit the consequences of a given action for the greater good (i.e., utilitarianism) against the consistency of that action with moral norms (i.e., deontology). Using the CNI model of moral decision-making, we further tested whether the three kinds of emotions shape moral dilemma judgments by influencing (a) sensitivity to consequences, (b) sensitivity to moral norms, or (c) general preference for inaction versus action regardless of consequences and moral norms (or some combination of the three). Incidental happiness reduced sensitivity to moral norms without affecting sensitivity to consequences or general preference for inaction versus action. Incidental sadness and incidental anger did not show any significant effects on moral dilemma judgments. The findings suggest a central role of moral norms in the contribution of emotional responses to moral dilemma judgments, requiring refinements of dominant theoretical accounts and supporting the value of formal modeling approaches in providing more nuanced insights into the determinants of moral dilemma judgments. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  6. Comparative analytical evaluation of the respiratory TaqMan Array Card with real-time PCR and commercial multi-pathogen assays.

    PubMed

    Harvey, John J; Chester, Stephanie; Burke, Stephen A; Ansbro, Marisela; Aden, Tricia; Gose, Remedios; Sciulli, Rebecca; Bai, Jing; DesJardin, Lucy; Benfer, Jeffrey L; Hall, Joshua; Smole, Sandra; Doan, Kimberly; Popowich, Michael D; St George, Kirsten; Quinlan, Tammy; Halse, Tanya A; Li, Zhen; Pérez-Osorio, Ailyn C; Glover, William A; Russell, Denny; Reisdorf, Erik; Whyte, Thomas; Whitaker, Brett; Hatcher, Cynthia; Srinivasan, Velusamy; Tatti, Kathleen; Tondella, Maria Lucia; Wang, Xin; Winchell, Jonas M; Mayer, Leonard W; Jernigan, Daniel; Mawle, Alison C

    2016-02-01

    In this study, a multicenter evaluation of the Life Technologies TaqMan® Array Card (TAC) with 21 custom viral and bacterial respiratory assays was performed on the Applied Biosystems ViiA™ 7 Real-Time PCR System. The goal of the study was to demonstrate the analytical performance of this platform when compared to identical individual pathogen-specific laboratory-developed tests (LDTs) designed at the Centers for Disease Control and Prevention (CDC), equivalent LDTs provided by state public health laboratories, or to three different commercial multi-respiratory panels. CDC and Association of Public Health Laboratories (APHL) LDTs had similar analytical sensitivities for viral pathogens, while several of the bacterial pathogen APHL LDTs demonstrated sensitivities one log higher than the corresponding CDC LDT. When compared to CDC LDTs, TAC assays were generally one to two logs less sensitive depending on the site performing the analysis. Finally, TAC assays were generally more sensitive than their counterparts in three different commercial multi-respiratory panels. TAC technology allows users to spot customized assays and design TAC layout, simplify assay setup, conserve specimen, dramatically reduce contamination potential, and as demonstrated in this study, analyze multiple samples in parallel with good reproducibility between instruments and operators. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. A Generalized Eulerian-Lagrangian Analysis, with Application to Liquid Flows with Vapor Bubbles

    NASA Technical Reports Server (NTRS)

    Dejong, Frederik J.; Meyyappan, Meyya

    1993-01-01

    Under a NASA MSFC SBIR Phase 2 effort an analysis has been developed for liquid flows with vapor bubbles such as those in liquid rocket engine components. The analysis is based on a combined Eulerian-Lagrangian technique, in which Eulerian conservation equations are solved for the liquid phase, while Lagrangian equations of motion are integrated in computational coordinates for the vapor phase. The novel aspect of the Lagrangian analysis developed under this effort is that it combines features of the so-called particle distribution approach with those of the so-called particle trajectory approach and can, in fact, be considered as a generalization of both of those traditional methods. The result of this generalization is a reduction in CPU time and memory requirements. Particle time step (stability) limitations have been eliminated by semi-implicit integration of the particle equations of motion (and, for certain applications, the particle temperature equation), although practical limitations remain in effect for reasons of accuracy. The analysis has been applied to the simulation of cavitating flow through a single-bladed section of a labyrinth seal. Models for the simulation of bubble formation and growth have been included, as well as models for bubble drag and heat transfer. The results indicate that bubble formation is more or less 'explosive': for a given flow field, the number density of bubble nucleation sites is very sensitive to the vapor properties and the surface tension. The bubble motion, on the other hand, is much less sensitive to the properties, but is affected strongly by the local pressure gradients in the flow field. In situations where either the material properties or the flow field are not known with sufficient accuracy, parametric studies can be carried out rapidly to assess the effect of the important variables. Future work will include application of the analysis to cavitation in inducer flow fields.

  8. Sensory-processing sensitivity in social anxiety disorder: Relationship to harm avoidance and diagnostic subtypes

    PubMed Central

    Hofmann, Stefan G.; Bitran, Stella

    2007-01-01

    Sensory-processing sensitivity is assumed to be a heritable vulnerability factor for shyness. The present study is the first to examine sensory-processing sensitivity among individuals with social anxiety disorder. The results showed that the construct is separate from social anxiety, but it is highly correlated with harm avoidance and agoraphobic avoidance. Individuals with a generalized subtype of social anxiety disorder reported higher levels of sensory-processing sensitivity than individuals with a non-generalized subtype. These preliminary findings suggest that sensory-processing sensitivity is uniquely associated with the generalized subtype of social anxiety disorder. Recommendations for future research are discussed. PMID:17241764

  9. An evaluation of fish early life stage tests for predicting reproductive and longer-term toxicity from plant protection product active substances.

    PubMed

    Wheeler, James R; Maynard, Samuel K; Crane, Mark

    2014-08-01

    The chronic toxicity of chemicals to fish is routinely assessed by using fish early life stage (ELS) test results. Fish full life cycle (FLC) tests are generally required only when toxicity, bioaccumulation, and persistence triggers are met or when there is a suspicion of potential endocrine-disrupting properties. This regulatory approach is based on a relationship between the results of fish ELS and FLC studies first established more than 35 years ago. Recently, this relationship has been challenged by some regulatory authorities, and it has been recommended that more substances should undergo FLC testing. In addition, a project proposal has been submitted to the Organisation for Economic Cooperation and Development (OECD) to develop a fish partial life cycle (PLC) test including a reproductive assessment. Both FLC and PLC tests are animal- and resource-intensive and technically challenging and should therefore be undertaken only if there is clear evidence that they are necessary for coming to a regulatory decision. The present study reports on an analysis of a database of paired fish ELS and FLC endpoints for plant protection product active substances from European Union draft assessment reports and the US Environmental Protection Agency Office of Pesticide Programs Pesticide Ecotoxicity Database. Analysis of this database shows a clear relationship between ELS and FLC responses, with similar median sensitivity across substances when no-observed-effect concentrations (NOECs) are compared. There was also no indication that classification of a substance as a mammalian reproductive toxicant leads to more sensitive effects in fish FLC tests than in ELS tests. Indeed, the response of the ELS tests was generally more sensitive than the most sensitive reproduction NOEC from a FLC test. This analysis indicates that current testing strategies and guidelines are fit for purpose and that there is no need for fish full or partial life cycle tests for most plant protection product active substances. © 2014 SETAC.

  10. Health economics analysis of insulin aspart vs. regular human insulin in type 2 diabetes patients, based on observational real life evidence from general practices in Germany.

    PubMed

    Liebl, A; Seitz, L; Palmer, A J

    2014-10-01

    A retrospective analysis of German general practice data demonstrated that insulin aspart (IA) was associated with a significantly reduced incidence of macrovascular events (MVE: stroke, myocardial infarction, peripheral vascular disease or coronary heart disease) vs. regular human insulin (RHI) in type 2 diabetes patients. Economic implications, balanced against potential improvements in quality-adjusted life years (QALYs) resulting from lower risks of complications with IA in this setting have not yet been explored. A decision analysis model was developed utilizing 3-year initial MVE rates for each comparator, combined with published German-specific insulin and MVE costs and health utilities to calculate number needed to treat (NNT) to avoid any MVE, incremental costs and QALYs gained/person for IA vs. RHI. A 3-year time horizon and German third-party payer perspective were used. Probabilistic sensitivity analysis was performed, sampling from distributions of key parameters. Additional sensitivity analyses were performed. NNT over a 3-year period to avoid any MVE was 8 patients for IA vs. RHI. Due to lower MVE rates, IA dominated RHI with 0.020 QALYs gained (95% confidence interval: 0.014-0.025) and cost savings of EUR 1 556 (1 062-2 076)/person for IA vs. RHI over the 3-year time horizon. Sensitivity analysis revealed that IA would still be overall cost saving even if the cost of IA was double the cost/unit of RHI. From a health economics perspective, IA was the superior alternative for the insulin treatment of type 2 diabetes, with lower incidence of MVE events translating to improved QALYs and lower costs vs. RHI within a 3-year time horizon. © J. A. Barth Verlag in Georg Thieme Verlag KG Stuttgart · New York.
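
    The reported NNT of 8 follows from the standard definition, NNT = 1 / absolute risk reduction. The 3-year event risks below are hypothetical values chosen only so that the arithmetic reproduces an NNT of 8; the abstract does not report the underlying risks.

```python
def number_needed_to_treat(risk_control, risk_treatment):
    """NNT = 1 / (absolute risk reduction) over the same time horizon."""
    return 1.0 / (risk_control - risk_treatment)

# Illustrative 3-year MVE risks (not the study's actual rates):
print(number_needed_to_treat(0.25, 0.125))  # -> 8.0
```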

  11. Functional Specificity and Sex Differences in the Neural Circuits Supporting the Inhibition of Automatic Imitation.

    PubMed

    Darda, Kohinoor M; Butler, Emily E; Ramsey, Richard

    2018-06-01

    Humans show an involuntary tendency to copy other people's actions. Although automatic imitation builds rapport and affiliation between individuals, we do not copy actions indiscriminately. Instead, copying behaviors are guided by a selection mechanism, which inhibits some actions and prioritizes others. To date, the neural underpinnings of the inhibition of automatic imitation and differences between the sexes in imitation control are not well understood. Previous studies involved small sample sizes and low statistical power, which produced mixed findings regarding the involvement of domain-general and domain-specific neural architectures. Here, we used data from Experiment 1 (N = 28) to perform a power analysis to determine the sample size required for Experiment 2 (N = 50; 80% power). Using independent functional localizers and an analysis pipeline that bolsters sensitivity, during imitation control we show clear engagement of the multiple-demand network (domain-general), but no sensitivity in the theory-of-mind network (domain-specific). Weaker effects were observed with regard to sex differences, suggesting that there are more similarities than differences between the sexes in terms of the neural systems engaged during imitation control. In summary, neurocognitive models of imitation require revision to reflect that the inhibition of imitation relies to a greater extent on a domain-general selection system rather than a domain-specific system that supports social cognition.

  12. Cancer biomarker discovery is improved by accounting for variability in general levels of drug sensitivity in pre-clinical models.

    PubMed

    Geeleher, Paul; Cox, Nancy J; Huang, R Stephanie

    2016-09-21

    We show that variability in general levels of drug sensitivity in pre-clinical cancer models confounds biomarker discovery. However, using a very large panel of cell lines, each treated with many drugs, we could estimate a general level of sensitivity to all drugs in each cell line. By conditioning on this variable, biomarkers were identified that were more likely to be effective in clinical trials than those identified using a conventional uncorrected approach. We find that differences in general levels of drug sensitivity are driven by biologically relevant processes. We developed a gene expression-based method that can be used to correct for this confounder in future studies.
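
    The confounding pattern the authors describe, and the effect of conditioning on it, can be sketched with simulated data. Residualizing the response on the general sensitivity level is used here as a simple stand-in for the authors' correction; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n_lines = 2_000

# Simulated cell lines: the candidate biomarker merely tracks general drug
# sensitivity and has NO true effect on this drug's response.
general = rng.normal(size=n_lines)            # general sensitivity level
biomarker = 0.8 * general + 0.6 * rng.normal(size=n_lines)
response = general + 0.2 * rng.normal(size=n_lines)

naive = np.corrcoef(biomarker, response)[0, 1]          # spuriously strong

# Condition on general sensitivity by residualizing the response on it
slope, intercept = np.polyfit(general, response, 1)
residual = response - (slope * general + intercept)
conditioned = np.corrcoef(biomarker, residual)[0, 1]    # near zero
print(naive, conditioned)
```

    The naive screen would nominate this biomarker; after conditioning, its apparent effect vanishes, mirroring the paper's argument for correcting biomarker discovery.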

  13. Theory of buckling and post-buckling behavior of elastic structures

    NASA Technical Reports Server (NTRS)

    Budiansky, B.

    1974-01-01

    The present paper provides a unified, general presentation of the basic theory of the buckling and post-buckling behavior of elastic structures in a form suitable for application to a wide variety of special problems. The notation of functional analysis is used for this purpose. Before the general analysis, simple conceptual models are used to elucidate the basic concepts of bifurcation buckling, snap buckling, imperfection sensitivity, load-shortening relations, and stability. The energy approach, the virtual-work approach, and mode interaction are discussed. The derivations and results are applicable to continua and finite-dimensional systems. The virtual-work and energy approaches are given separate treatments, but their equivalence is made explicit. The basic concepts of stability occupy a secondary position in the present approach.

  14. Evaluation of Skylab IB sensitivity to on-pad winds with turbulence

    NASA Technical Reports Server (NTRS)

    Coffin, T.

    1972-01-01

    Computer simulation was performed to estimate displacements and bending moments experienced by the SKYLAB 1B vehicle on the launch pad due to atmospheric winds. The vehicle was assumed to be a beam-like structure represented by a finite number of generalized coordinates. Wind flow across the vehicle was treated as a nonhomogeneous, stationary random process. Response computations were performed by the assumption of simple strip theory and application of generalized harmonic analysis. Displacement and bending moment statistics were obtained for six vehicle propellant loading conditions and four representative reference wind profile and turbulence levels. Means, variances and probability distributions are presented graphically for each case. A separate analysis was performed to indicate the influence of wind gradient variations on vehicle response statistics.

  15. Reauthorization of the Higher Education Act, Title IV General Provisions/Needs Analysis, Volume 4. Hearings before the Subcommittee on Postsecondary Education of the Committee on Education and Labor. House of Representatives, Ninety-Ninth Congress, First Session (July 17, August 1, 1985).

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. House Committee on Education and Labor.

    Hearings on reauthorization of the Higher Education Act of 1965 focus on the needs analysis system for student aid programs. One proposal recommends restructuring the Pell Grant Program to target its support on low-income students and to make it more sensitive to the costs of different types of colleges. Recommendations include: implementing a…

  16. Selecting focal species as surrogates for imperiled species using relative sensitivities derived from occupancy analysis

    USGS Publications Warehouse

    Silvano, Amy; Guyer, Craig; Steury, Todd; Grand, James B.

    2017-01-01

    Most imperiled species are rare or elusive and difficult to detect, which makes gathering data to estimate their response to habitat restoration a challenge. We used a repeatable, systematic method for selecting focal species using relative sensitivities derived from occupancy analysis. Our objective was to select suites of focal species that would be useful as surrogates when predicting effects of restoration of habitat characteristics preferred by imperiled species. We developed 27 habitat profiles that represent general habitat relationships for 118 imperiled species. We identified 23 regularly encountered species that were sensitive to important aspects of those profiles. We validated our approach by examining the correlation between estimated probabilities of occupancy for species of concern and focal species selected using our method. Occupancy rates of focal species were more related to occupancy rates of imperiled species when they were sensitive to more of the parameters appearing in profiles of imperiled species. We suggest that this approach can be an effective means of predicting responses by imperiled species to proposed management actions. However, adequate monitoring will be required to determine the effectiveness of using focal species to guide management actions.

  17. Comparative sensitivity of quantitative EEG (QEEG) spectrograms for detecting seizure subtypes.

    PubMed

    Goenka, Ajay; Boro, Alexis; Yozawitz, Elissa

    2018-02-01

    To assess the sensitivity of Persyst version 12 QEEG spectrograms to detect focal, focal with secondarily generalized, and generalized onset seizures. A cohort of 562 seizures from 58 patients was analyzed. Successive recordings with 2 or more seizures during continuous EEG monitoring for clinical indications in the ICU or EMU between July 2016 and January 2017 were included. Patient ages ranged from 5 to 64 years (mean = 36 years). There were 125 focal seizures, 187 secondarily generalized and 250 generalized seizures from 58 patients analyzed. Seizures were identified and classified independently by two epileptologists. A correlate to the seizure pattern in the raw EEG was sought in the QEEG spectrograms in 4-6 h EEG epochs surrounding the identified seizures. A given spectrogram was interpreted as indicating a seizure if, at the time of a seizure, it showed a visually significant departure from the pre-event baseline. Sensitivities for seizure detection using each spectrogram were determined for each seizure subtype. Overall sensitivities of the QEEG spectrograms for detecting seizures ranged from 43% to 72%, with the highest sensitivity (402/562, 72%) by the seizure detection trend. The asymmetry spectrogram had the highest sensitivity for detecting focal seizures (117/125, 94%). The FFT spectrogram was most sensitive for detecting secondarily generalized seizures (158/187, 84%). The seizure detection trend was the most sensitive for generalized onset seizures (197/250, 79%). Our study suggests that different seizure types have specific patterns in the Persyst QEEG spectrograms. Identifying these patterns in the EEG can significantly increase the sensitivity for seizure identification. Copyright © 2018 British Epilepsy Association. Published by Elsevier Ltd. All rights reserved.

  18. Computing the sensitivity of drag and lift in flow past a circular cylinder: Time-stepping versus self-consistent analysis

    NASA Astrophysics Data System (ADS)

    Meliga, Philippe

    2017-07-01

    We provide in-depth scrutiny of two methods making use of adjoint-based gradients to compute the sensitivity of drag in the two-dimensional, periodic flow past a circular cylinder (Re≲189): first, the time-stepping analysis used in Meliga et al. [Phys. Fluids 26, 104101 (2014), 10.1063/1.4896941] that relies on classical Navier-Stokes modeling and determines the sensitivity to any generic control force from time-dependent adjoint equations marched backwards in time; and, second, a self-consistent approach building on the model of Mantič-Lugo et al. [Phys. Rev. Lett. 113, 084501 (2014), 10.1103/PhysRevLett.113.084501] to compute semilinear approximations of the sensitivity to the mean and fluctuating components of the force. Both approaches are applied to open-loop control by a small secondary cylinder and allow identifying the sensitive regions without knowledge of the controlled states. The theoretical predictions obtained by time-stepping analysis reproduce well the results obtained by direct numerical simulation of the two-cylinder system. So do the predictions obtained by self-consistent analysis, which corroborates the relevance of the approach as a guideline for efficient and systematic control design in the attempt to reduce drag, even though the Reynolds number is not close to the instability threshold and the oscillation amplitude is not small. This is because, unlike simpler approaches relying on linear stability analysis to predict the main features of the flow unsteadiness, the semilinear framework encompasses rigorously the effect of the control on the mean flow, as well as on the finite-amplitude fluctuation that feeds back nonlinearly onto the mean flow via the formation of Reynolds stresses. Such results are especially promising as the self-consistent approach determines the sensitivity from time-independent equations that can be solved iteratively, which makes it generally less computationally demanding. We ultimately discuss the extent to which relevant information can be gained from a hybrid modeling approach that computes self-consistent sensitivities from the postprocessing of DNS data. Application to alternative control objectives, such as increasing the lift and alleviating the fluctuating drag and lift, is also discussed.

  19. Rainfall or parameter uncertainty? The power of sensitivity analysis on grouped factors

    NASA Astrophysics Data System (ADS)

    Nossent, Jiri; Pereira, Fernando; Bauwens, Willy

    2017-04-01

    Hydrological models are typically used to study and represent (a part of) the hydrological cycle. In general, the output of these models depends mostly on their rainfall input and parameter values. Both the model parameters and the input precipitation, however, are characterized by uncertainties and therefore lead to uncertainty in the model output. Sensitivity analysis (SA) makes it possible to assess and compare the importance of these different factors for the output uncertainty. To this end, the rainfall uncertainty can be incorporated in the SA by representing it as a probabilistic multiplier. Such a multiplier can be defined for the entire time series, or several multipliers can be determined, one for every recorded rainfall pulse or for each hydrologically independent storm event. As a consequence, the number of factors in the SA related to rainfall uncertainty can be (much) lower or (much) higher than the number of model parameters. Although such analyses can yield interesting results, it remains challenging to determine which type of uncertainty affects the model output most, because of the different weights the two types carry within the SA. In this study, we apply the variance-based Sobol' sensitivity analysis method to two different hydrological simulators (NAM and HyMod) for four diverse watersheds. Besides the different numbers of model parameters (NAM: 11 parameters; HyMod: 5 parameters), the setup of our combined sensitivity and uncertainty analysis is also varied by defining a variety of scenarios with diverse numbers of rainfall multipliers. To overcome the issue of the differing numbers of factors and, thus, the different weights of the two types of uncertainty, we build on one of the advantageous properties of the Sobol' SA: treating grouped parameters as a single parameter. The latter results in a setup with a single factor for each uncertainty type and allows a straightforward comparison of their importance.
In general, the results show a clear influence of the weights in the different SA scenarios. Working with grouped factors, however, resolves this issue and leads to clear importance results.
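The grouped-factor idea this abstract relies on can be illustrated numerically. Below is a minimal sketch (not the authors' code, and a toy linear model rather than NAM or HyMod) of a pick-freeze Monte Carlo estimate of the first-order Sobol' index for a group of inputs treated as a single factor:

```python
import random

def model(x1, x2, x3):
    # Toy stand-in for a hydrological simulator: Y = 2*x1 + x2 + x3.
    return 2.0 * x1 + x2 + x3

def group_sobol_index(n=50000, seed=1):
    """Pick-freeze estimate of the first-order Sobol' index of the
    grouped factor G = {x1, x2}: S_G = Cov(Y, Y') / Var(Y), where Y'
    keeps the group's values and resamples the remaining input."""
    rng = random.Random(seed)
    y, y_frozen = [], []
    for _ in range(n):
        x1 = rng.gauss(0, 1)
        x2 = rng.gauss(0, 1)
        x3 = rng.gauss(0, 1)
        y.append(model(x1, x2, x3))
        y_frozen.append(model(x1, x2, rng.gauss(0, 1)))  # resample x3 only
    mu = sum(y) / n
    var = sum((v - mu) ** 2 for v in y) / n
    cov = sum(a * b for a, b in zip(y, y_frozen)) / n - mu * (sum(y_frozen) / n)
    return cov / var

est = group_sobol_index()
```

For this linear model with independent standard-normal inputs, the analytic group index is (2² + 1²)/(2² + 1² + 1²) = 5/6 ≈ 0.83, which the Monte Carlo estimate approaches as the sample size grows; the same device collapses any number of rainfall multipliers into one factor.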

  20. Diagnostic performance of short portable mental status questionnaire for screening dementia among patients attending cognitive assessment clinics in Singapore.

    PubMed

    Malhotra, Chetna; Chan, Angelique; Matchar, David; Seow, Dennis; Chuo, Adeline; Do, Young Kyung

    2013-07-01

    The Short Portable Mental Status Questionnaire (SPMSQ) is a brief cognitive screening instrument that is easy to use by a healthcare worker with little training. However, the validity of this instrument has not been established in Singapore. Thus, the primary aim of this study was to determine the diagnostic performance of the SPMSQ for screening dementia among patients attending outpatient cognitive assessment clinics and to assess whether the appropriate cut-off score varies with patients' age and education. A secondary aim of the study was to map SPMSQ scores onto Mini-Mental State Examination (MMSE) scores. The SPMSQ and MMSE were administered by a trained interviewer to 127 patients visiting outpatient cognitive assessment clinics at the Singapore General Hospital, Changi General Hospital and Tan Tock Seng Hospital. The geriatricians at these clinics then diagnosed these patients with dementia or no dementia (reference standard). Sensitivity and specificity of the SPMSQ at different cut-off points (numbers of errors) were calculated and compared to the reference standard using receiver operating characteristic (ROC) analysis. The correlation coefficient between SPMSQ and MMSE scores was also calculated. Based on the ROC analysis and a balance of sensitivity and specificity, the appropriate cut-off for the SPMSQ was found to be 5 or more errors (sensitivity 78%, specificity 75%). The cut-off varied with education, but not with patients' age. There was a high correlation between SPMSQ and MMSE scores (r = 0.814, P < 0.0001). Despite the advantage of being a brief screening instrument for dementia, the use of the SPMSQ is limited by its low sensitivity and specificity, especially among patients with less than 6 years of education.
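The cut-off selection procedure described above (sweeping candidate error counts and balancing sensitivity against specificity) can be sketched as follows. The data below are hypothetical, not the study's 127 patients, and Youden's J is used as one common way to formalize the "balance":

```python
def sens_spec(data, cutoff):
    """Classify 'cutoff or more errors' as screen-positive; return
    (sensitivity, specificity) against the dementia diagnosis."""
    tp = sum(1 for errors, dem in data if dem and errors >= cutoff)
    fn = sum(1 for errors, dem in data if dem and errors < cutoff)
    tn = sum(1 for errors, dem in data if not dem and errors < cutoff)
    fp = sum(1 for errors, dem in data if not dem and errors >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

def best_cutoff(data):
    """Pick the cutoff maximizing Youden's J = sensitivity + specificity - 1
    over the SPMSQ's 0-10 error range."""
    return max(range(0, 11), key=lambda c: sum(sens_spec(data, c)) - 1)

# Hypothetical (error-count, dementia?) pairs, purely for illustration.
data = [(8, True), (6, True), (5, True), (3, True),
        (2, False), (4, False), (1, False), (6, False), (0, False)]
print(best_cutoff(data))  # -> 3 for this toy data set
```

The study's choice of 5 or more errors reflects the same trade-off computed on its real ROC curve.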

  1. Genetic Variation and Combining Ability Analysis of Bruising Sensitivity in Agaricus bisporus

    PubMed Central

    Gao, Wei; Baars, Johan J. P.; Dolstra, Oene; Visser, Richard G. F.; Sonnenberg, Anton S. M.

    2013-01-01

    Advanced button mushroom cultivars that are less sensitive to mechanical bruising are required by the mushroom industry, where automated harvesting still cannot be used for the fresh mushroom market. The genetic variation in bruising sensitivity (BS) of Agaricus bisporus was studied through an incomplete set of diallel crosses to gain insight into the heritability of BS and the combining ability of the parental lines used and, in this way, to estimate their breeding value. To this end, nineteen homokaryotic lines recovered from wild strains and cultivars were inter-crossed in a diallel scheme. Fifty-one successful hybrids were grown under controlled conditions, and the BS of these hybrids was assessed. BS was shown to be a trait with a very high heritability. The results also showed that brown hybrids were generally less sensitive to bruising than white hybrids. The diallel scheme made it possible to estimate the general combining ability (GCA) of each homokaryotic parental line and the specific combining ability (SCA) of each hybrid. The line with the lowest GCA is seen as the most attractive donor for improving resistance to bruising, whereas the line with the highest GCA value gave rise to hybrids sensitive to bruising. The highest negative SCA possibly indicates heterosis effects for resistance to bruising. This study provides a foundation for estimating the breeding value of parental lines, to further study the genetic factors underlying bruising sensitivity and other quality-related traits, and to select potential parental lines for further heterosis breeding. The approach of studying combining ability in a diallel scheme was used for the first time in button mushroom breeding. PMID:24116171

  2. CXTFIT/Excel: A modular, adaptable code for parameter estimation, sensitivity analysis and uncertainty analysis for laboratory or field tracer experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Guoping; Mayes, Melanie; Parker, Jack C

    2010-01-01

    We implemented the widely used CXTFIT code in Excel to provide flexibility, and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection-dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) could be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparison with a number of benchmarks using CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets from the literature using multi-model evaluation and comparison. Additional examples are included to illustrate the flexibility and advantages of CXTFIT/Excel. The VBA macros were designed for general-purpose use and can be applied to any parameter estimation/model calibration in which the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.

  3. A structured framework for assessing sensitivity to missing data assumptions in longitudinal clinical trials.

    PubMed

    Mallinckrodt, C H; Lin, Q; Molenberghs, M

    2013-01-01

    The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data via a re-analysis of data from a confirmatory clinical trial in depression. A likelihood-based approach that assumed missing at random (MAR) was the primary analysis. Robustness to departure from MAR was assessed by comparing the primary result to those from a series of analyses that employed varying missing-not-at-random (MNAR) assumptions (selection models, pattern-mixture models and shared-parameter models) and to MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation assuming that after dropout the trajectory of drug-treated patients was that of placebo-treated patients with a similar outcome history (placebo multiple imputation). This result was used as the worst reasonable case to define the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was -2.79 (p = .013). In placebo multiple imputation, the result was -2.17. Results from the other sensitivity analyses ranged from -2.21 to -3.87 and were symmetrically distributed around the primary result. Hence, no clear evidence of bias from missing-not-at-random data was found. In the worst-reasonable-case scenario, the treatment effect was 80% of the magnitude of the primary result. Therefore, it was concluded that a treatment effect existed. The structured sensitivity framework of using a worst-reasonable-case result based on a controlled imputation approach with transparent and debatable assumptions, supplemented by a series of plausible alternative models under varying assumptions, was useful in this specific situation and holds promise as a generally useful framework. Copyright © 2012 John Wiley & Sons, Ltd.

  4. Vibration sensitivity of the scanning near-field optical microscope with a tapered optical fiber probe.

    PubMed

    Chang, Win-Jin; Fang, Te-Hua; Lee, Haw-Long; Yang, Yu-Ching

    2005-01-01

    In this paper the Rayleigh-Ritz method was used to study the flexural and axial vibration sensitivity of a scanning near-field optical microscope (SNOM) with a tapered optical fiber probe. Not only the contact stiffness but also the geometric parameters of the probe can influence the flexural and axial vibration sensitivity. According to the analysis, the lateral and axial contact stiffness had a significant effect on the vibration sensitivity of the SNOM probe, each mode had a different level of sensitivity, and in the first mode the tapered optical fiber probe was the most responsive to flexural and axial vibration. Generally, when the contact stiffness was lower, the tapered probe was more sensitive to both axial and flexural vibration than the uniform probe; however, the situation was reversed when the contact stiffness was larger. Furthermore, the effects of the probe's length and taper angle on the SNOM probe's axial and flexural vibration were significant, and these two factors should be incorporated into the design of new SNOM probes.

  5. Optimizing human activity patterns using global sensitivity analysis.

    PubMed

    Fairchild, Geoffrey; Hickmann, Kyle S; Mniszewski, Susan M; Del Valle, Sara Y; Hyman, James M

    2014-12-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule's regularity for a population. We show how to tune an activity's regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
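The SampEn statistic the authors tune is the standard sample entropy of Richman and Moorman: the negative log of the (estimated) conditional probability that sequences matching for m points also match for m + 1 points. A simplified, self-contained sketch (not the DASim implementation) showing that a regular series scores lower than an irregular one:

```python
import math
import random

def sample_entropy(x, m=2, r=0.2):
    """Simplified SampEn(m, r) = -ln(A/B), where B counts pairs of
    length-m templates within Chebyshev tolerance r and A counts the
    same for length m + 1. Lower values indicate more regularity."""
    def count_matches(length):
        n = len(x) - length + 1
        templates = [x[i:i + length] for i in range(n)]
        hits = 0
        for i in range(n):
            for j in range(i + 1, n):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    hits += 1
        return hits
    b, a = count_matches(m), count_matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

rng = random.Random(0)
regular = [math.sin(0.4 * i) for i in range(200)]   # proxy for a regular schedule
irregular = [rng.random() for _ in range(200)]      # proxy for irregular activity
```

Tuning a schedule's regularity then amounts to adjusting activity parameters until `sample_entropy` of the resulting activity series hits a target value, which is the high-dimensional optimization problem the abstract attacks with sensitivity analysis and harmony search.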

  6. Optimizing human activity patterns using global sensitivity analysis

    PubMed Central

    Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.

    2014-01-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations. PMID:25580080

  7. Flows of dioxins and furans in coastal food webs: inverse modeling, sensitivity analysis, and applications of linear system theory.

    PubMed

    Saloranta, Tuomo M; Andersen, Tom; Naes, Kristoffer

    2006-01-01

    Rate constant bioaccumulation models are applied to simulate the flow of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) in the coastal marine food web of Frierfjorden, a contaminated fjord in southern Norway. We apply two different ways to parameterize the rate constants in the model, global sensitivity analysis of the models using the Extended Fourier Amplitude Sensitivity Test (Extended FAST) method, as well as results from general linear system theory, in order to obtain a more thorough insight into the system's behavior and the flow pathways of the PCDD/Fs. We calibrate our models against observed body concentrations of PCDD/Fs in the food web of Frierfjorden. Differences between the predictions from the two models (using the same forcing and parameter values) are of the same magnitude as their individual deviations from observations, and the models can be said to perform about equally well in our case. Sensitivity analysis indicates that the success or failure of the models in predicting the PCDD/F concentrations in the food web organisms depends strongly on adequate estimation of the truly dissolved concentrations in water and sediment pore water. We discuss the pros and cons of such models in understanding and estimating the present and future concentrations and bioaccumulation of persistent organic pollutants in aquatic food webs.

  8. Optimizing human activity patterns using global sensitivity analysis

    DOE PAGES

    Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; ...

    2013-12-10

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.

  9. Accuracy of non-invasive prenatal testing using cell-free DNA for detection of Down, Edwards and Patau syndromes: a systematic review and meta-analysis

    PubMed Central

    Taylor-Phillips, Sian; Freeman, Karoline; Geppert, Julia; Agbebiyi, Adeola; Uthman, Olalekan A; Madan, Jason; Clarke, Angus; Quenby, Siobhan; Clarke, Aileen

    2016-01-01

    Objective To measure the test accuracy of non-invasive prenatal testing (NIPT) for Down, Edwards and Patau syndromes using cell-free fetal DNA and to identify factors affecting accuracy. Design Systematic review and meta-analysis of published studies. Data sources PubMed, Ovid Medline, Ovid Embase and the Cochrane Library, published from 1997 to 9 February 2015, followed by weekly autoalerts until 1 April 2015. Eligibility criteria for selecting studies English-language journal articles describing case–control studies with ≥15 trisomy cases or cohort studies with ≥50 pregnant women who had been given NIPT and a reference standard. Results Of 2012 publications retrieved, 41, 37 and 30 studies were included in the review for Down, Edwards and Patau syndromes, respectively. Quality appraisal identified a high risk of bias in the included studies, and funnel plots showed evidence of publication bias. Pooled sensitivity was 99.3% (95% CI 98.9% to 99.6%) for Down, 97.4% (95.8% to 98.4%) for Edwards, and 97.4% (86.1% to 99.6%) for Patau syndrome. The pooled specificity was 99.9% (99.9% to 100%) for all three trisomies. In 100 000 pregnancies in the general obstetric population we would expect 417, 89 and 40 cases of Down, Edwards and Patau syndromes to be detected by NIPT, with 94, 154 and 42 false positive results. Sensitivity was lower in twin than in singleton pregnancies, reduced by 9% for Down, 28% for Edwards and 22% for Patau syndrome. Pooled sensitivity was also lower in the first trimester of pregnancy, in studies in the general obstetric population, and in cohort studies with consecutive enrolment. Conclusions NIPT using cell-free fetal DNA has very high sensitivity and specificity for Down syndrome, with slightly lower sensitivity for Edwards and Patau syndrome. However, it is not 100% accurate and should not be used as a final diagnosis for positive cases. Trial registration number CRD42014014947. PMID:26781507
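The population-level projection in the abstract follows from sensitivity, specificity and prevalence alone. A sketch using the pooled Down syndrome estimates and an assumed prevalence of about 420 per 100 000 pregnancies (an assumption for illustration; the review's 94 false positives implies a pooled specificity slightly better than the rounded 99.9% used here):

```python
def expected_screen_results(n, prevalence_per_100k, sensitivity, specificity):
    """Expected true positives and false positives when screening n
    pregnancies with a test of given sensitivity and specificity."""
    cases = n * prevalence_per_100k / 100_000
    true_pos = sensitivity * cases
    false_pos = (1 - specificity) * (n - cases)
    return round(true_pos), round(false_pos)

# Pooled Down syndrome estimates from the review (sensitivity 99.3%,
# specificity ~99.9%) with an assumed prevalence of 420 per 100,000.
tp, fp = expected_screen_results(100_000, 420, 0.993, 0.999)
print(tp, fp)  # -> 417 100
```

The same arithmetic explains why a highly accurate test still cannot serve as a final diagnosis: even a 0.1% false-positive rate applied to ~99 600 unaffected pregnancies produces false positives on the same order as the true detections.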

  10. Thought Experiment to Examine Benchmark Performance for Fusion Nuclear Data

    NASA Astrophysics Data System (ADS)

    Murata, Isao; Ohta, Masayuki; Kusaka, Sachie; Sato, Fuminobu; Miyamaru, Hiroyuki

    2017-09-01

    Many benchmark experiments with DT neutrons have been carried out so far, especially aimed at fusion reactor development. These integral experiments have seemed, vaguely, to validate the nuclear data below 14 MeV. However, no precise studies exist to date. The author's group thus started to examine how well benchmark experiments with DT neutrons can play a benchmarking role for energies below 14 MeV. Recently, as a next phase, the energy range was expanded to the entire region in order to generalize the above discussion. In this study, thought experiments with finer energy bins have thus been conducted to discuss how to estimate the performance of benchmark experiments in general. The thought experiments with a point detector show that the sensitivity to a discrepancy appearing in the benchmark analysis is "equally" due not only to the contribution conveyed directly to the detector, but also to the indirect contribution of the neutrons (call them (A)) that produce the neutrons conveying that contribution, the indirect contribution of the neutrons (B) that produce the neutrons (A), and so on. From this concept, a sensitivity analysis performed in advance would make clear how well, and for which energies, nuclear data could be benchmarked with a given benchmark experiment.

  11. Quantifying the Variability in Species' Vulnerability to Ocean Acidification

    NASA Astrophysics Data System (ADS)

    Kroeker, K. J.; Kordas, R. L.; Crim, R.; Gattuso, J.; Hendriks, I.; Singh, G. G.

    2012-12-01

    Ocean acidification represents a threat to marine species and ecosystems worldwide. As such, understanding the potential ecological impacts of acidification is a high priority for science, management, and policy. As research on the biological impacts of ocean acidification continues to expand at an exponential rate, a comprehensive understanding of the generalities and/or variability in organisms' responses and the corresponding levels of certainty of these potential responses is essential. Meta-analysis is a quantitative technique for summarizing the results of primary research studies and provides a transparent method to examine the generalities and/or variability in scientific results across numerous studies. Here, we perform the most comprehensive meta-analysis to date by synthesizing the results of 228 studies examining the biological impacts of ocean acidification. Our results reveal decreased survival, calcification, growth, reproduction and development in response to acidification across a broad range of marine organisms, as well as significant trait-mediated variation among taxonomic groups and enhanced sensitivity among early life history stages. In addition, our results reveal a pronounced sensitivity of molluscs to acidification, especially among the larval stages, and enhanced vulnerability to acidification with concurrent exposure to increased seawater temperatures across a diversity of organisms.

  12. Rules of Thumb for Depth of Investigation, Pseudo-Position and Resolution of the Electrical Resistivity Method from Analysis of the Moments of the Sensitivity Function for a Homogeneous Half-Space

    NASA Astrophysics Data System (ADS)

    Butler, S. L.

    2017-12-01

    The electrical resistivity method is now highly developed with 2D and even 3D surveys routinely performed and with available fast inversion software. However, rules of thumb, based on simple mathematical formulas, for important quantities like depth of investigation, horizontal position and resolution have not previously been available and would be useful for survey planning, preliminary interpretation and general education about the method. In this contribution, I will show that the sensitivity function for the resistivity method for a homogeneous half-space can be analyzed in terms of its first and second moments which yield simple mathematical formulas. The first moment gives the sensitivity-weighted center of an apparent resistivity measurement with the vertical center being an estimate of the depth of investigation. I will show that this depth of investigation estimate works at least as well as previous estimates based on the peak and median of the depth sensitivity function which must be calculated numerically for a general four electrode array. The vertical and horizontal first moments can also be used as pseudopositions when plotting 1, 2 and 3D pseudosections. The appropriate horizontal plotting point for a pseudosection was not previously obvious for nonsymmetric arrays. The second moments of the sensitivity function give estimates of the spatial extent of the region contributing to an apparent resistivity measurement and hence are measures of the resolution. These also have simple mathematical formulas.

  13. Adding retinal photography to screening for diabetic retinopathy: a prospective study in primary care.

    PubMed

    O'Hare, J P; Hopper, A; Madhaven, C; Charny, M; Purewell, T S; Harney, B; Griffiths, J

    1996-03-16

    Objective: To evaluate whether adding retinal photography improved community screening for diabetic retinopathy. Setting: Mobile screening unit at rural and urban general practices in south west England. Subjects: 1010 diabetic patients from primary care. Design: Prospective study; patients were examined by ophthalmoscopy by general practitioners or opticians without fundal photographs and again with photographs, and the assessments were compared to those of an ophthalmologist. Main outcome measures: Whether fundal photography improved the sensitivity of detection of retinopathy and referrable diabetic retinopathy, and whether this sensitivity could be improved by including a review of the films by the specialist. Results: Diabetic retinopathy was detected by the ophthalmologist in 205 patients (20.5%) and referrable retinopathy in 49 (4.9%). The sensitivity of the general practitioners and opticians for referrable retinopathy with ophthalmoscopy was 65%, and improved to 84% with retinal photographs. General practitioners' sensitivity in detecting background retinopathy improved with photographs from 22% to 65%; opticians' sensitivity improved from 43% to 71%. The sensitivity of detecting referrable retinopathy by general practitioners improved from 56% to 80% with photographs; for opticians it improved from 75% to 88%. Conclusions: Combining modalities of screening by providing photography with specialist review of all films, in addition to direct ophthalmoscopy through dilated pupils, improves assessment and referral for diabetic retinopathy by general practitioners and opticians. With further training and experience, primary care screeners should be able to achieve a sensitivity that will allow an effective, acceptable, and economical community-based screening programme for this condition.

  14. A global analysis of traits predicting species sensitivity to habitat fragmentation

    USGS Publications Warehouse

    Keinath, Douglas; Doak, Daniel F.; Hodges, Karen E.; Prugh, Laura R.; Fagan, William F.; Sekercioglu, Cagan H.; Buchart, Stuart H. M.; Kauffman, Matthew J.

    2017-01-01

    Aim: Elucidating patterns in species responses to habitat fragmentation is an important focus of ecology and conservation, but studies are often geographically restricted, taxonomically narrow or use indirect measures of species vulnerability. We investigated predictors of species presence after fragmentation using data from studies around the world that included all four terrestrial vertebrate classes, thus allowing direct inter-taxonomic comparison. Location: World-wide. Methods: We used generalized linear mixed-effect models in an information-theoretic framework to assess the factors that explained species presence in remnant habitat patches (3342 patches; 1559 species, mostly birds; and 65,695 records of patch-specific presence–absence). We developed a novel metric of fragmentation sensitivity, defined as the maximum rate of change in probability of presence with changing patch size (‘Peak Change’), to distinguish between general rarity on the landscape and sensitivity to fragmentation per se. Results: Size of remnant habitat patches was the most important driver of species presence. Across all classes, habitat specialists, carnivores and larger species had a lower probability of presence, and those effects were substantially modified by interactions. Sensitivity to fragmentation (measured by Peak Change) was influenced primarily by habitat type and specialization, but also by fecundity, life span and body mass. Reptiles were more sensitive than other classes. Grassland species had a lower probability of presence, though the sample size was relatively small, but forest and shrubland species were more sensitive. Main conclusions: Habitat relationships were more important than life-history characteristics in predicting the effects of fragmentation. Habitat specialization increased sensitivity to fragmentation and interacted with class and habitat type; forest specialists and habitat-specific reptiles were particularly sensitive to fragmentation.
Our results suggest that when conservationists are faced with disturbances that could fragment habitat, they should pay particular attention to specialists, particularly reptiles. Further, our results highlight that the probability of presence in fragmented landscapes and true sensitivity to fragmentation are predicted by different factors.

  15. Screening instruments for a population of older adults: The 10-item Kessler Psychological Distress Scale (K10) and the 7-item Generalized Anxiety Disorder Scale (GAD-7).

    PubMed

    Vasiliadis, Helen-Maria; Chudzinski, Veronica; Gontijo-Guerra, Samantha; Préville, Michel

    2015-07-30

    Screening tools that appropriately detect older adults' mental disorders are of great public health importance. The present study aimed to establish cutoff scores for the 10-item Kessler Psychological Distress (K10) and the 7-item Generalized Anxiety Disorder (GAD-7) scales when screening for depression and anxiety. We used data from participants (n = 1811) in the Enquête sur la Santé des Aînés-Service study. Depression and anxiety were measured using DSM-V and DSM-IV criteria. Receiver operating characteristic (ROC) curve analysis provided an area under the curve (AUC) of 0.767 and 0.833 for minor and major depression, respectively, when using the K10. A cutoff of 19 was found to balance sensitivity (0.794) and specificity (0.664) for minor depression, whereas a cutoff of 23 was found to balance sensitivity (0.692) and specificity (0.811) for major depression. When screening for anxiety with the GAD-7, ROC analysis yielded an AUC of 0.695; a cutoff of 5 was found to balance sensitivity (0.709) and specificity (0.568). No significant differences were found between subgroups of age and gender. Both the K10 and the GAD-7 were able to discriminate between cases and non-cases when screening for depression and anxiety in an older adult population of primary care service users. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  16. Electrochemical Sensors for Clinic Analysis

    PubMed Central

    Wang, You; Xu, Hui; Zhang, Jianming; Li, Guang

    2008-01-01

    Driven by the demands of modern medical diagnosis, advances in microfabrication technology have led to the development of fast, sensitive and selective electrochemical sensors for clinical analysis. This review addresses the principles behind electrochemical sensor design and fabrication, and introduces recent progress in the application of electrochemical sensors to the analysis of clinical chemicals such as blood gases, electrolytes, metabolites, DNA and antibodies, covering both basic and applied research. In the future, miniaturized commercial electrochemical biosensors will form the basis of inexpensive and easy-to-use devices for acquiring chemical information, bringing sophisticated analytical capabilities to non-specialists and the general public alike. PMID:27879810

  17. Interim reliability-evaluation program: analysis of the Browns Ferry, Unit 1, nuclear plant. Appendix C - sequence quantification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mays, S.E.; Poloski, J.P.; Sullivan, W.H.

    1982-07-01

    This report describes a risk study of the Browns Ferry, Unit 1, nuclear plant. The study is one of four such studies sponsored by the NRC Office of Research, Division of Risk Assessment, as part of its Interim Reliability Evaluation Program (IREP), Phase II. This report is contained in four volumes: a main report and three appendixes. Appendix C generally describes the methods used to estimate accident sequence frequency values. Information is presented concerning the approach, example collection, failure data, candidate dominant sequences, uncertainty analysis, and sensitivity analysis.

  18. Detection of chitinase activity by 2-aminobenzoic acid labeling of chito-oligosaccharides.

    PubMed

    Ghauharali-van der Vlugt, Karen; Bussink, Anton P; Groener, Johanna E M; Boot, Rolf G; Aerts, Johannes M F G

    2009-01-01

    Chitinases are hydrolases capable of hydrolyzing the abundant natural polysaccharide chitin. Besides artificial fluorescent substrates, more physiological chito-oligomers are commonly used in chitinase assays. Analysis of chito-oligosaccharide products is generally accomplished by UV detection; however, its relatively poor sensitivity poses a serious limitation. Here we report a novel, much more sensitive assay for the detection of chito-oligosaccharide reaction products released by chitinases, based on fluorescence detection following chemical labeling with 2-aminobenzoic acid. Comparison with existing UV-based assays shows that the novel assay offers the same advantages yet allows detection of chito-oligosaccharides in the low picomolar range.

  19. [Meta-analysis of group comparison and meta-analysis of reliability generalization of the State-Trait Anxiety Inventory Questionnaire (STAI)].

    PubMed

    Guillén-Riquelme, Alejandro; Buela-Casal, Gualberto

    2014-01-01

    Since its creation, the STAI has been cited in more than 14,000 documents, with more than 60 adaptations in different countries. In some adaptations the instrument has no clinical scores. The aim of this work is to determine whether the State-Trait Anxiety Inventory (STAI) yields higher scores in patients diagnosed with anxiety than in the general population. In addition, we examine whether its internal consistency is adequate in samples of anxious patients. We performed a literature search in Tripdatabase, Cochrane, Web of Knowledge, Scopus, PsycINFO and Google Scholar for documents published between 2008 and 2012. We selected 131 scientific articles for the comparison between patients diagnosed with anxiety and the general population, and 25 for the reliability generalization. For the analysis we used Cohen's d for mean comparisons (random-effects method) and Cronbach's alpha for the reliability generalization (fixed-effects method). In the group comparison, the differences in state anxiety (d = 1.39; 95% CI: 1.22-1.56) and trait anxiety (d = 1.74; 95% CI: 1.56-1.91) were significant. Reliability for patients with an anxiety disorder ranged between 0.87 and 0.93. The STAI thus appears sensitive to an individual's anxiety level and reliable for patients with a diagnosis of panic attack, specific phobia, social phobia, generalized social phobia, generalized anxiety disorder, post-traumatic stress disorder, obsessive-compulsive disorder or acute stress disorder.
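
    The random-effects pooling of Cohen's d used in such meta-analyses can be sketched with inverse-variance weights plus a DerSimonian-Laird estimate of between-study variance. A minimal Python illustration; the effect sizes and variances below are invented, not taken from the study:

```python
def pool_effects(ds, vs):
    # ds: per-study Cohen's d values; vs: their sampling variances.
    # Fixed-effect weighted mean first.
    w = [1 / v for v in vs]
    d_fe = sum(wi * di for wi, di in zip(w, ds)) / sum(w)
    # DerSimonian-Laird between-study variance tau^2 from Cochran's Q.
    q = sum(wi * (di - d_fe) ** 2 for wi, di in zip(w, ds))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(ds) - 1)) / c) if c > 0 else 0.0
    # Random-effects pooled estimate with widened weights.
    wr = [1 / (v + tau2) for v in vs]
    return sum(wi * di for wi, di in zip(wr, ds)) / sum(wr)
```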

  20. Optimization of a cAMP response element signal pathway reporter system.

    PubMed

    Shan, Qiang; Storm, Daniel R

    2010-08-15

    A sensitive cAMP response element (CRE) reporter system is essential for studying the cAMP/protein kinase A/cAMP response element binding protein signal pathway. Here we tested several CRE promoters and found one with high sensitivity to external stimuli. Using this optimal CRE promoter and enhanced green fluorescent protein as the reporter, we established a CRE reporter cell line. This cell line can be used to study the signal pathway by fluorescence microscopy, fluorescence-activated cell analysis and luciferase assay. Measured by fluorescence-activated cell sorting, this cell line's sensitivity to forskolin was approximately seven times that of its parental HEK 293 cell line, currently the most commonly used cell line for studying this pathway. The newly created cell line is therefore potentially useful for studying the pathway's modulators, which generally have weaker effects than its mediators. Our research has also established a general procedure for optimizing transcription-based reporter cell lines, which might be useful for performing the same task with many other transcription-based signal pathways. (c) 2010 Elsevier B.V. All rights reserved.

  1. A New Heuristic Anonymization Technique for Privacy Preserved Datasets Publication on Cloud Computing

    NASA Astrophysics Data System (ADS)

    Aldeen Yousra, S.; Mazleena, Salleh

    2018-05-01

    Recent advances in Information and Communication Technologies (ICT) have increased the demand for cloud services that share users' private data. Data from various organizations are a vital information source for analysis and research. Generally, such sensitive or private information involves medical, census, voter registration, social network, and customer service data. A primary concern of cloud service providers in data publishing is hiding the sensitive information of individuals. One cloud service that addresses these confidentiality concerns is Privacy Preserving Data Mining (PPDM). The PPDM service in Cloud Computing (CC) enables data publishing with minimized distortion and absolute privacy. In this method, datasets are anonymized via generalization to accomplish the privacy requirements. However, the well-known privacy preserving data mining technique called K-anonymity suffers from several limitations. To surmount those shortcomings, we propose a new heuristic anonymization framework for preserving the privacy of sensitive datasets when publishing on the cloud. The advantages of the K-anonymity, L-diversity and (α, k)-anonymity methods for efficient information utilization and privacy protection are emphasized. Experimental results revealed the superiority of the developed technique over K-anonymity, L-diversity, and (α, k)-anonymity.
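
    As a concrete reference point for the K-anonymity notion discussed above: a released table is k-anonymous when every combination of quasi-identifier values (after generalization) is shared by at least k records. A minimal check in Python, using hypothetical generalized records rather than the paper's algorithm:

```python
from collections import Counter

def k_of(records, quasi_ids):
    # The k of a dataset is the size of the smallest equivalence class
    # over the quasi-identifier attributes.
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return min(groups.values())
```

For example, records generalized to age ranges and truncated zip codes can be checked for the k guaranteed before publication.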

  2. Artificial neural network models for prediction of cardiovascular autonomic dysfunction in general Chinese population

    PubMed Central

    2013-01-01

    Background The present study aimed to develop an artificial neural network (ANN) based prediction model for cardiovascular autonomic (CA) dysfunction in the general population. Methods We analyzed a previous dataset based on a population sample consisting of 2,092 individuals aged 30–80 years. The prediction models were derived from an exploratory set using ANN analysis. Performance of these prediction models was evaluated in the validation set. Results Univariate analysis indicated that 14 risk factors showed a statistically significant association with CA dysfunction (P < 0.05). The mean area under the receiver-operating curve was 0.762 (95% CI 0.732–0.793) for the prediction model developed using ANN analysis. The mean sensitivity, specificity, and positive and negative predictive values of the prediction model were 0.751, 0.665, 0.330 and 0.924, respectively. All HL statistics were less than 15.0. Conclusion ANN is an effective tool for developing prediction models with high value for predicting CA dysfunction in the general population. PMID:23902963
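
    The area under the receiver-operating curve reported above has a simple probabilistic reading: the chance that a randomly chosen case receives a higher model score than a randomly chosen non-case. A minimal sketch of that rank-based computation (toy scores, not the study's data):

```python
def auc(scores, labels):
    # AUC equals the probability that a random positive outranks a
    # random negative; ties count as half a win.
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```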

  3. Uncertainty Analysis of the Grazing Flow Impedance Tube

    NASA Technical Reports Server (NTRS)

    Brown, Martha C.; Jones, Michael G.; Watson, Willie R.

    2012-01-01

    This paper outlines a methodology to identify the measurement uncertainty of NASA Langley's Grazing Flow Impedance Tube (GFIT) over its operating range, and to identify the parameters that most significantly contribute to the acoustic impedance prediction. Two acoustic liners are used for this study. The first is a single-layer, perforate-over-honeycomb liner that is nonlinear with respect to sound pressure level. The second consists of a wire-mesh facesheet and a honeycomb core, and is linear with respect to sound pressure level. These liners allow for evaluation of the effects of measurement uncertainty on impedances educed with linear and nonlinear liners. In general, the measurement uncertainty is observed to be larger for the nonlinear liner, with the largest uncertainty occurring near anti-resonance. A sensitivity analysis of the aerodynamic parameters (Mach number, static temperature, and static pressure) used in the impedance eduction process is also conducted using a Monte-Carlo approach. This sensitivity analysis demonstrates that the impedance eduction process is virtually insensitive to each of these parameters.
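
    A Monte-Carlo sensitivity study of the kind described above can be sketched generically: perturb one aerodynamic input at a time about its nominal value and observe the spread of the model output. Here `model`, the parameter names, and the spreads are all stand-ins, not the GFIT eduction code:

```python
import random

def monte_carlo_sensitivity(model, nominal, spreads, n=2000, seed=0):
    # One-at-a-time Monte Carlo: perturb a single input about its
    # nominal value while holding the others fixed, and report the
    # standard deviation of the output as that input's sensitivity.
    rng = random.Random(seed)
    out = {}
    for name in nominal:
        vals = []
        for _ in range(n):
            x = dict(nominal)
            x[name] += rng.gauss(0.0, spreads[name])
            vals.append(model(**x))
        mean = sum(vals) / n
        out[name] = (sum((v - mean) ** 2 for v in vals) / n) ** 0.5
    return out
```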

  4. Shape optimization of three-dimensional stamped and solid automotive components

    NASA Technical Reports Server (NTRS)

    Botkin, M. E.; Yang, R.-J.; Bennett, J. A.

    1987-01-01

    The shape optimization of realistic, 3-D automotive components is discussed. The integration of the major parts of the total process: modeling, mesh generation, finite element and sensitivity analysis, and optimization are stressed. Stamped components and solid components are treated separately. For stamped parts a highly automated capability was developed. The problem description is based upon a parameterized boundary design element concept for the definition of the geometry. Automatic triangulation and adaptive mesh refinement are used to provide an automated analysis capability which requires only boundary data and takes into account sensitivity of the solution accuracy to boundary shape. For solid components a general extension of the 2-D boundary design element concept has not been achieved. In this case, the parameterized surface shape is provided using a generic modeling concept based upon isoparametric mapping patches which also serves as the mesh generator. Emphasis is placed upon the coupling of optimization with a commercially available finite element program. To do this it is necessary to modularize the program architecture and obtain shape design sensitivities using the material derivative approach so that only boundary solution data is needed.

  5. Factor Structure, Internal Consistency, and Screening Sensitivity of the GARS-2 in a Developmental Disabilities Sample

    PubMed Central

    Volker, Martin A.; Dua, Elissa H.; Lopata, Christopher; Thomeer, Marcus L.; Toomey, Jennifer A.; Smerbeck, Audrey M.; Rodgers, Jonathan D.; Popkin, Joshua R.; Nelson, Andrew T.; Lee, Gloria K.

    2016-01-01

    The Gilliam Autism Rating Scale-Second Edition (GARS-2) is a widely used screening instrument that assists in the identification and diagnosis of autism. The purpose of this study was to examine the factor structure, internal consistency, and screening sensitivity of the GARS-2 using ratings from special education teaching staff for a sample of 240 individuals with autism or other significant developmental disabilities. Exploratory factor analysis yielded a correlated three-factor solution similar to that found in 2005 by Lecavalier for the original GARS. Though the three factors appeared to be reasonably consistent with the intended constructs of the three GARS-2 subscales, the analysis indicated that more than a third of the GARS-2 items were assigned to the wrong subscale. Internal consistency estimates met or exceeded standards for screening and were generally higher than those in previous studies. Screening sensitivity was .65 and specificity was .81 for the Autism Index using a cut score of 85. Based on these findings, recommendations are made for instrument revision. PMID:26981279

  6. High order statistical signatures from source-driven measurements of subcritical fissile systems

    NASA Astrophysics Data System (ADS)

    Mattingly, John Kelly

    1998-11-01

    This research focuses on the development and application of high order statistical analyses applied to measurements performed with subcritical fissile systems driven by an introduced neutron source. The signatures presented are derived from counting statistics of the introduced source and radiation detectors that observe the response of the fissile system. It is demonstrated that successively higher order counting statistics possess progressively higher sensitivity to reactivity. Consequently, these signatures are more sensitive to changes in the composition, fissile mass, and configuration of the fissile assembly. Furthermore, it is shown that these techniques are capable of distinguishing the response of the fissile system to the introduced source from its response to any internal or inherent sources. This ability combined with the enhanced sensitivity of higher order signatures indicates that these techniques will be of significant utility in a variety of applications. Potential applications include enhanced radiation signature identification of weapons components for nuclear disarmament and safeguards applications and augmented nondestructive analysis of spent nuclear fuel. In general, these techniques expand present capabilities in the analysis of subcritical measurements.
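
    The "successively higher order counting statistics" referred to above are typically factorial moments of the detector count distribution, whose higher orders respond progressively more strongly to correlated fission-chain events. A minimal sketch over a synthetic list of per-gate counts (illustrative only):

```python
def factorial_moments(counts, order=3):
    # r-th factorial moment: sample mean of n(n-1)...(n-r+1); higher
    # orders are increasingly sensitive to multiplication in the
    # fissile system.
    n = len(counts)
    moms = []
    for r in range(1, order + 1):
        total = 0
        for c in counts:
            term = 1
            for k in range(r):
                term *= (c - k)
            total += term
        moms.append(total / n)
    return moms
```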

  7. Evaluation of microarray data normalization procedures using spike-in experiments

    PubMed Central

    Rydén, Patrik; Andersson, Henrik; Landfors, Mattias; Näslund, Linda; Hartmanová, Blanka; Noppa, Laila; Sjöstedt, Anders

    2006-01-01

    Background Recently, a large number of methods for the analysis of microarray data have been proposed, but there are few comparisons of their relative performances. By using so-called spike-in experiments, it is possible to characterize the analyzed data and thereby enable comparisons of different analysis methods. Results A spike-in experiment using eight in-house produced arrays was used to evaluate established and novel methods for filtration, background adjustment, scanning, channel adjustment, and censoring. The S-plus package EDMA, a stand-alone tool providing characterization of analyzed cDNA-microarray data obtained from spike-in experiments, was developed and used to evaluate 252 normalization methods. For all analyses, the sensitivities at low false positive rates were observed together with estimates of the overall bias and the standard deviation. In general, there was a trade-off between the ability of the analyses to identify differentially expressed genes (i.e. their sensitivity) and their ability to provide unbiased estimates of the desired ratios. Virtually all analyses underestimated the magnitude of the regulations; often less than 50% of the true regulation was observed. Moreover, the bias depended on the underlying mRNA concentration; low concentrations resulted in high bias. Many of the analyses had relatively low sensitivity, but analyses that used either the constrained model (i.e. a procedure that combines data from several scans) or partial filtration (a novel method for treating data from so-called not-found spots) had, with few exceptions, high sensitivity. These methods gave considerably higher sensitivity than some commonly used analysis methods. Conclusion The use of spike-in experiments is a powerful approach for evaluating microarray preprocessing procedures. Analyzed data are characterized by properties of the observed log-ratios and the analysis' ability to detect differentially expressed genes. If bias is not a major problem, we recommend the use of either the CM procedure or partial filtration. PMID:16774679
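
    The underestimation noted above ("often less than 50% of the true regulations were observed") can be quantified as the fraction of the true log-ratio recovered by an analysis. A minimal sketch with made-up fold changes, one possible bias measure rather than the EDMA package's own:

```python
import math

def ratio_bias(true_ratios, observed_ratios):
    # Average fraction of the true log2 regulation recovered:
    # 1.0 = unbiased, 0.5 = only half the true fold change observed.
    fracs = [math.log2(o) / math.log2(t)
             for t, o in zip(true_ratios, observed_ratios) if t != 1]
    return sum(fracs) / len(fracs)
```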

  8. CONTINUOUS-ENERGY MONTE CARLO METHODS FOR CALCULATING GENERALIZED RESPONSE SENSITIVITIES USING TSUNAMI-3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2014-01-01

    This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.

  9. Finite Element Based Structural Damage Detection Using Artificial Boundary Conditions

    DTIC Science & Technology

    2007-09-01

    C. (2005). Elementary Linear Algebra. New York: John Wiley and Sons. Avitabile, Peter (2001, January). Experimental Modal Analysis, A Simple Non... variables under consideration. Frequency sensitivities are the basis for a linear approximation to compute the change in the natural frequencies of a... THEORY The general problem statement for a nonlinear constrained optimization problem is: minimize f(x) (the objective function) subject to

  10. Proteomic analysis of B-aminobutyric acid priming and aba-induction of drought resistance in crabapple (Malus pumila): effect on general metabolism, the phenylpropanoid pathway and cell wall enzymes

    USDA-ARS?s Scientific Manuscript database

    In a variety of annual crops and model plants, the xenobiotic compound, DL-beta-aminobutyric acid (BABA), has been shown to enhance disease resistance and increase salt, drought, and thermotolerance. BABA does not activate stress genes directly but rather sensitizes plants to respond more quickly a...

  11. Atmospheric model development in support of SEASAT. Volume 1: Summary of findings

    NASA Technical Reports Server (NTRS)

    Kesel, P. G.

    1977-01-01

    Atmospheric analysis and prediction models of varying (grid) resolution were developed. The models were tested using real observational data for the purpose of assessing the impact of grid resolution on short range numerical weather prediction. The discretionary model procedures were examined so that the computational viability of SEASAT data might be enhanced during the conduct of (future) sensitivity tests. The analysis effort covers: (1) examining the procedures for allowing data to influence the analysis; (2) examining the effects of varying the weights in the analysis procedure; (3) testing and implementing procedures for solving the minimization equation in an optimal way; (4) describing the impact of grid resolution on analysis; and (5) devising and implementing numerous practical solutions to analysis problems, generally.

  12. Cost-effectiveness analysis of HPV vaccination: comparing the general population with socially vulnerable individuals.

    PubMed

    Han, Kyu-Tae; Kim, Sun Jung; Lee, Seo Yoon; Park, Eun-Cheol

    2014-01-01

    After the WHO recommended HPV vaccination of the general population in 2009, government support of HPV vaccination programs was increased in many countries. However, this policy was not implemented in Korea due to perceived low cost-effectiveness. Thus, the aim of this study was to analyze the cost-utility of HPV vaccination programs targeted at high-risk populations as compared to vaccination programs for the general population. Each study population was set to 100,000 people in a simulation study to determine the incremental cost-utility ratio (ICUR); standard prevalence rates, costs, vaccination rates, vaccine efficacy, and Quality-Adjusted Life-Years (QALYs) were then applied in the analysis. In addition, a sensitivity analysis was performed assuming a discounted vaccination cost. In the socially vulnerable population, QALYs gained through HPV vaccination were higher than in the general population (general population: 1,019; socially vulnerable population: 5,582). The ICUR results showed that the cost of HPV vaccination was higher for the general population than for the socially vulnerable population (general population: 52,279,255 KRW; socially vulnerable population: 9,547,347 KRW). Compared with the social threshold of 24 million KRW/QALY, vaccination of the general population was not cost-effective. In contrast, vaccination of the socially vulnerable population was strongly cost-effective. The results suggest the importance and necessity of government support of HPV vaccination programs targeted to socially vulnerable populations, because a targeted approach is much more cost-effective. The implementation of government support for such vaccination programs is a critical strategy for decreasing the burden of HPV infection in Korea.
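
    The cost-effectiveness judgment above reduces to comparing each ICUR against the stated willingness-to-pay threshold of 24 million KRW per QALY. A direct sketch using the figures quoted in the abstract (the helper names are ours):

```python
def icur(delta_cost, delta_qalys):
    # Incremental cost-utility ratio: extra cost per extra QALY gained.
    return delta_cost / delta_qalys

def is_cost_effective(icur_value, threshold=24_000_000):
    # Cost-effective when ICUR falls at or below the willingness-to-pay
    # threshold (KRW per QALY).
    return icur_value <= threshold
```

With the reported values, vaccinating the general population (52,279,255 KRW/QALY) fails the threshold while the socially vulnerable population (9,547,347 KRW/QALY) passes it.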

  13. Left Ventricular Hypertrophy: An allometric comparative analysis of different ECG markers

    NASA Astrophysics Data System (ADS)

    Bonomini, M. P.; Ingallina, F.; Barone, V.; Valentinuzzi, M. E.; Arini, P. D.

    2011-12-01

    Allometry, in general biology, measures the relative growth of a part in relation to the whole living organism. Left ventricular hypertrophy (LVH) is the heart's adaptation to excessive load (systolic or diastolic). The increase in left ventricular mass leads to an increase in the electrocardiographic voltages. Based on clinical data, we compared the allometric behavior of three different ECG markers of LVH. To do this, allometric fits A_ECG = δ + β·VM, relating left ventricular mass VM (estimated from echocardiographic data) to ECG amplitudes A_ECG (expressed as the Cornell voltage, Sokolow and overall ECG voltage indexes), were compared. In addition, sensitivity and specificity for each index were analyzed. The more sensitive the ECG criterion, the better the allometric fit. In conclusion: the allometric paradigm should be regarded as the way to design new and more sensitive ECG-based LVH markers.
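
    The allometric fit A_ECG = δ + β·VM above is an ordinary least-squares line relating ventricular mass to ECG amplitude. A minimal sketch with toy mass/voltage pairs, not the clinical data:

```python
def linear_fit(x, y):
    # Ordinary least-squares fit of y = delta + beta * x.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    delta = my - beta * mx
    return delta, beta
```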

  14. Uncertainty and sensitivity analysis for two-phase flow in the vicinity of the repository in the 1996 performance assessment for the Waste Isolation Pilot Plant: Disturbed conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HELTON,JON CRAIG; BEAN,J.E.; ECONOMY,K.

    2000-05-22

    Uncertainty and sensitivity analysis results obtained in the 1996 performance assessment (PA) for the Waste Isolation Pilot Plant (WIPP) are presented for two-phase flow in the vicinity of the repository under disturbed conditions resulting from drilling intrusions. Techniques based on Latin hypercube sampling, examination of scatterplots, stepwise regression analysis, partial correlation analysis and rank transformations are used to investigate brine inflow, gas generation, repository pressure, brine saturation, and brine and gas outflow. Of the variables under study, repository pressure and brine flow from the repository to the Culebra Dolomite are potentially the most important in PA for the WIPP. Subsequent to a drilling intrusion, repository pressure was dominated by borehole permeability and generally below the level (i.e., 8 MPa) that could potentially produce spallings and direct brine releases. Brine flow from the repository to the Culebra Dolomite tended to be small or nonexistent, with its occurrence and size also dominated by borehole permeability.
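
    Latin hypercube sampling, named above, stratifies each uncertain input into n equal-probability bins and draws exactly one sample per bin, shuffling bin order independently per dimension. A minimal unit-cube sketch (not the WIPP parameter set):

```python
import random

def latin_hypercube(n, dims, seed=0):
    # Stratify [0, 1) into n equal bins per dimension; draw one point
    # per bin, with bin order shuffled independently per dimension.
    rng = random.Random(seed)
    cols = []
    for _ in range(dims):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(p + rng.random()) / n for p in perm])
    return [tuple(col[i] for col in cols) for i in range(n)]
```

Each marginal is guaranteed to cover all n strata, which is why far fewer samples are needed than with plain Monte Carlo.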

  15. Relevance of gender-sensitive policies and general health indicators to compare the status of South Asian women's health.

    PubMed

    Gill, Roopan; Stewart, Donna E

    2011-01-01

    Despite goals for gender equity in South Asia, the relationship between gender-sensitive policies and the empowerment of women is complex and requires an analysis of how policies align with a broad set of social, cultural, political, and economic indicators that relate to women's health. Through a review of four documents under the umbrella of the World Health Organization and the United Nations, a list of 17 gender-sensitive policy and 17 general health indicators was generated with a focus on health, education, economic, and political empowerment and violence against women. A series of policy documents and international and national databases that are accessible in the public domain were the major tools used to find supporting documentation to address women's health outcomes in Bangladesh, India, Nepal, Pakistan, and Sri Lanka. All five South Asian countries had several gender-sensitive policies that were measurable by indicators that contribute to health. Examination of political and economic status, birth sex ratios, human trafficking, illiteracy rates, maternal mortality rates, contraception prevalence, fertility rates, knowledge of HIV/AIDS prevention, access to skilled birth attendants, and microfinance show that large gender inequities still prevail despite the presence of gender-sensitive policies. In many cases, the presence of gender-sensitive policies did not reflect the realization of gender equity over a wide range of indicators. Although the economic, political, social, and cultural climates of the five countries may differ, the integration of women's needs into the formulation, implementation, and monitoring of policies is a universal necessity to achieve positive outcomes. 2011 Jacobs Institute of Women's Health. Published by Elsevier Inc.

  16. Modeling screening, prevention, and delaying of Alzheimer's disease: an early-stage decision analytic model

    PubMed Central

    2010-01-01

    Background Alzheimer's Disease (AD) affects a growing proportion of the population each year. Novel therapies on the horizon may slow the progress of AD symptoms and avoid cases altogether. Initiating treatment for the underlying pathology of AD would ideally be based on biomarker screening tools identifying pre-symptomatic individuals. Early-stage modeling provides estimates of potential outcomes and informs policy development. Methods A time-to-event (TTE) simulation provided estimates of screening asymptomatic patients in the general population age ≥55 and treatment impact on the number of patients reaching AD. Patients were followed from AD screen until all-cause death. Baseline sensitivity and specificity were 0.87 and 0.78, with treatment on positive screen. Treatment slowed progression by 50%. Events were scheduled using literature-based age-dependent incidences of AD and death. Results The base case results indicated increased AD free years (AD-FYs) through delays in onset and a reduction of 20 AD cases per 1000 screened individuals. Patients completely avoiding AD accounted for 61% of the incremental AD-FYs gained. Total years of treatment per 1000 screened patients was 2,611. The number-needed-to-screen was 51 and the number-needed-to-treat was 12 to avoid one case of AD. One-way sensitivity analysis indicated that duration of screening sensitivity and rescreen interval impact AD-FYs the most. A two-way sensitivity analysis found that for a test with an extended duration of sensitivity (15 years) the number of AD cases avoided was 6,000-7,000 cases for a test with higher sensitivity and specificity (0.90,0.90). Conclusions This study yielded valuable parameter range estimates at an early stage in the study of screening for AD. Analysis identified duration of screening sensitivity as a key variable that may be unavailable from clinical trials. PMID:20433705
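
    The number-needed-to-screen and number-needed-to-treat quoted above are reciprocals of absolute risk reductions: how many people must be screened, or treated, to avoid one AD case. A sketch using round illustrative counts consistent with the abstract's 20 cases avoided per 1000 screened (the model's exact internals are not given):

```python
def number_needed(cases_avoided, n_exposed):
    # Reciprocal of the absolute risk reduction among the exposed group
    # (screened or treated): people per one case avoided.
    return n_exposed / cases_avoided
```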

  17. Modeling screening, prevention, and delaying of Alzheimer's disease: an early-stage decision analytic model.

    PubMed

    Furiak, Nicolas M; Klein, Robert W; Kahle-Wrobleski, Kristin; Siemers, Eric R; Sarpong, Eric; Klein, Timothy M

    2010-04-30

    Alzheimer's Disease (AD) affects a growing proportion of the population each year. Novel therapies on the horizon may slow the progress of AD symptoms and avoid cases altogether. Initiating treatment for the underlying pathology of AD would ideally be based on biomarker screening tools identifying pre-symptomatic individuals. Early-stage modeling provides estimates of potential outcomes and informs policy development. A time-to-event (TTE) simulation provided estimates of screening asymptomatic patients in the general population age ≥55 and treatment impact on the number of patients reaching AD. Patients were followed from AD screen until all-cause death. Baseline sensitivity and specificity were 0.87 and 0.78, with treatment on positive screen. Treatment slowed progression by 50%. Events were scheduled using literature-based age-dependent incidences of AD and death. The base case results indicated increased AD free years (AD-FYs) through delays in onset and a reduction of 20 AD cases per 1000 screened individuals. Patients completely avoiding AD accounted for 61% of the incremental AD-FYs gained. Total years of treatment per 1000 screened patients was 2,611. The number-needed-to-screen was 51 and the number-needed-to-treat was 12 to avoid one case of AD. One-way sensitivity analysis indicated that duration of screening sensitivity and rescreen interval impact AD-FYs the most. A two-way sensitivity analysis found that for a test with an extended duration of sensitivity (15 years) the number of AD cases avoided was 6,000-7,000 cases for a test with higher sensitivity and specificity (0.90, 0.90). This study yielded valuable parameter range estimates at an early stage in the study of screening for AD. Analysis identified duration of screening sensitivity as a key variable that may be unavailable from clinical trials.

  18. Maternal mind-mindedness during infancy, general parenting sensitivity and observed child feeding behavior: a longitudinal study.

    PubMed

    Farrow, Claire; Blissett, Jackie

    2014-01-01

    Maternal mind-mindedness, or the tendency to view the child as a mental agent, has been shown to predict sensitive and responsive parenting behavior. As yet, the role of mind-mindedness has not been explored in the context of feeding interactions. This study evaluates the relations between maternal mind-mindedness at 6 months of infant age and subsequently observed maternal sensitivity and feeding behaviors with children at age 1 year. Maternal mind-mindedness was greater in mothers who had breast-fed than in those who had formula-fed. Controlling for breast-feeding, mind-mindedness at 6 months was correlated with observations of more sensitive and positive feeding behaviors at 1 year of age. Mind-mindedness was also associated with greater general maternal sensitivity in play, and this general parenting sensitivity mediated the effect of mind-mindedness on more sensitive and positive feeding behaviors. Interventions to promote mothers' tendency to consider their child's mental states may encourage more adaptive parental feeding behaviors.

  19. Food Approach and Food Avoidance in Young Children: Relation with Reward Sensitivity and Punishment Sensitivity

    PubMed Central

    Vandeweghe, Laura; Vervoort, Leentje; Verbeken, Sandra; Moens, Ellen; Braet, Caroline

    2016-01-01

    It has recently been suggested that individual differences in Reward Sensitivity and Punishment Sensitivity may determine how children respond to food. These temperamental traits reflect activity in two basic brain systems that respond to rewarding and punishing stimuli, respectively, with approach and avoidance. Via parent-report questionnaires, we investigate the associations of the general motivational temperamental traits Reward Sensitivity and Punishment Sensitivity with Food Approach and Food Avoidance in 98 preschool children. Consistent with the conceptualization of Reward Sensitivity in terms of approach behavior and Punishment Sensitivity in terms of avoidance behavior, Reward Sensitivity was positively related to Food Approach, while Punishment Sensitivity was positively related to Food Avoidance. Future research should integrate these perspectives (i.e., general temperamental traits Reward Sensitivity and Punishment Sensitivity, and Food Approach and Avoidance) to get a better understanding of eating behavior and related body weight. PMID:27445898

  20. Systematic analysis of Ca2+ homeostasis in Saccharomyces cerevisiae based on chemical-genetic interaction profiles

    PubMed Central

    Ghanegolmohammadi, Farzan; Yoshida, Mitsunori; Ohnuki, Shinsuke; Sukegawa, Yuko; Okada, Hiroki; Obara, Keisuke; Kihara, Akio; Suzuki, Kuninori; Kojima, Tetsuya; Yachie, Nozomu; Hirata, Dai; Ohya, Yoshikazu

    2017-01-01

    We investigated the global landscape of Ca2+ homeostasis in budding yeast based on high-dimensional chemical-genetic interaction profiles. The morphological responses of 62 Ca2+-sensitive (cls) mutants were quantitatively analyzed with the image processing program CalMorph after exposure to a high concentration of Ca2+. After a generalized linear model was applied, an analysis of covariance model was used to detect significant Ca2+–cls interactions. We found that high-dimensional, morphological Ca2+–cls interactions were mixed with positive (86%) and negative (14%) chemical-genetic interactions, whereas one-dimensional fitness Ca2+–cls interactions were all negative in principle. Clustering analysis with the interaction profiles revealed nine distinct gene groups, six of which were functionally associated. In addition, characterization of Ca2+–cls interactions revealed that morphology-based negative interactions are unique signatures of sensitized cellular processes and pathways. Principal component analysis was used to discriminate between suppression and enhancement of the Ca2+-sensitive phenotypes triggered by inactivation of calcineurin, a Ca2+-dependent phosphatase. Finally, similarity of the interaction profiles was used to reveal a connected network among the Ca2+ homeostasis units acting in different cellular compartments. Our analyses of high-dimensional chemical-genetic interaction profiles provide novel insights into the intracellular network of yeast Ca2+ homeostasis. PMID:28566553

  1. Highly sensitive index of sympathetic activity based on time-frequency spectral analysis of electrodermal activity.

    PubMed

    Posada-Quintero, Hugo F; Florian, John P; Orjuela-Cañón, Álvaro D; Chon, Ki H

    2016-09-01

    Time-domain indices of electrodermal activity (EDA) have been used as markers of sympathetic tone. However, they often show high between-subject variation and low consistency, which has precluded their general use as a marker of sympathetic tone. To examine whether power spectral density analysis of EDA can provide more consistent results, we recently performed a variety of sympathetic tone-evoking experiments. We found a significant increase in spectral power in the frequency range of 0.045 to 0.25 Hz when sympathetic tone-evoking stimuli were induced. The sympathetic tone assessed by the power spectral density of EDA showed lower variation and greater sensitivity for certain, but not all, stimuli compared with the time-domain analysis of EDA. We surmise that this lack of sensitivity under certain sympathetic tone-inducing conditions may lie in the inability of time-invariant spectral analysis of EDA to characterize time-varying dynamics of sympathetic tone. To overcome the disadvantages of time-domain and time-invariant power spectral indices of EDA, we developed a highly sensitive index of sympathetic tone based on time-frequency analysis of EDA signals, hypothesizing that a more sensitive measure of sympathetic control can be built on time-varying spectral analysis. Its efficacy was tested in experiments designed to elicit sympathetic dynamics: twelve subjects underwent four tests known to produce sympathetic arousal (cold pressor, tilt table, stand test, and the Stroop task). Variable frequency complex demodulation, a recently developed technique for time-frequency analysis, was used to obtain spectral amplitudes associated with EDA. We found that the time-varying spectral frequency band 0.08-0.24 Hz was most responsive to stimulation; spectral power at frequencies above 0.24 Hz was deemed unrelated to sympathetic dynamics because it comprised less than 5% of the total power. The mean value of the time-varying spectral amplitudes in the 0.08-0.24 Hz band was used as the index of sympathetic tone, termed TVSymp. TVSymp was overall the most sensitive index to the stimuli, as evidenced by a low coefficient of variation (0.54) and higher consistency (intra-class correlation, 0.96), sensitivity (Youden's index > 0.75), and area under the receiver operating characteristic (ROC) curve (> 0.8; accuracy > 0.88) compared with time-domain and time-invariant spectral indices, including heart rate variability. Copyright © 2016 the American Physiological Society.
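The band-limited amplitude index described above can be sketched in a few lines. The snippet below is a simplified stand-in: it uses a plain discrete Fourier decomposition rather than the paper's variable frequency complex demodulation, and the synthetic signals and sampling rate are assumptions for illustration.

```python
import math

def band_amplitude_index(x, fs, f_lo=0.08, f_hi=0.24):
    """Mean DFT amplitude in [f_lo, f_hi] Hz.

    Simplified stand-in for the paper's variable frequency complex
    demodulation (VFCDM): it isolates the 0.08-0.24 Hz band of an
    EDA-like signal and averages the spectral amplitudes there.
    """
    n = len(x)
    amps = []
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            amps.append(2 * math.hypot(re, im) / n)
    return sum(amps) / len(amps) if amps else 0.0

fs = 2.0  # Hz; EDA is commonly downsampled to a few Hz (assumed rate)
t = [i / fs for i in range(256)]
in_band = [math.sin(2 * math.pi * 0.15 * ti) for ti in t]    # inside 0.08-0.24 Hz
out_band = [math.sin(2 * math.pi * 0.60 * ti) for ti in t]   # outside the band
print(band_amplitude_index(in_band, fs) > band_amplitude_index(out_band, fs))  # True
```

A component inside the sympathetic band dominates the index, while an equally strong component outside it contributes only spectral leakage.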

  2. Sensitivity of charge transport measurements to local inhomogeneities

    NASA Astrophysics Data System (ADS)

    Koon, Daniel; Wang, Fei; Hjorth Petersen, Dirch; Hansen, Ole

    2012-02-01

    We derive analytic expressions for the sensitivity of resistive and Hall measurements to local variations in a specimen's material properties in the combined linear limit of both small magnetic fields and small perturbations, presenting exact, algebraic expressions both for four-point probe measurements on an infinite plane and for symmetric, circular van der Pauw discs. We then generalize the results to obtain corrections to the sensitivities both for finite magnetic fields and for finite perturbations. Calculated functions match published results and computer simulations, provide an intuitive, visual explanation for the experimental misassignment of carrier type in n-type ZnO, and agree with published experimental results for holes in a uniform material. These results simplify calculation and plotting of the sensitivities on an N×N grid from a problem of order N^5 to one of order N^3 in the arbitrary case, and of order N^2 in the handful of cases that can be solved exactly, putting a powerful tool for inhomogeneity analysis in the hands of the researcher: calculating the sensitivities requires little more than solving Laplace's equation on the specimen geometry.
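As the abstract notes, the computation reduces essentially to solving Laplace's equation on the specimen geometry. A minimal sketch of that core step, assuming a square grid with unit potential on the left edge and grounded remaining edges (not the four-point-probe or van der Pauw geometries of the paper):

```python
def solve_laplace(n=20, iters=2000):
    """Jacobi iteration for Laplace's equation on an n x n grid.

    Illustration only: unit potential on the left edge, grounded elsewhere,
    mimicking a current-fed specimen; the paper's sensitivity maps are built
    from solutions of exactly this kind of boundary-value problem.
    """
    v = [[0.0] * n for _ in range(n)]
    for i in range(n):
        v[i][0] = 1.0  # left edge held at 1 V; other edges stay at 0 V
    for _ in range(iters):
        new = [row[:] for row in v]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                new[i][j] = 0.25 * (v[i-1][j] + v[i+1][j] + v[i][j-1] + v[i][j+1])
        v = new
    return v

v = solve_laplace()
print(0.0 < v[10][10] < 1.0)   # interior values lie between boundary extremes
print(v[10][5] > v[10][15])    # potential falls off away from the fed edge
```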

  3. Clinical Utility of Acetylcholine Receptor Antibody Testing in Ocular Myasthenia Gravis.

    PubMed

    Peeler, Crandall E; De Lott, Lindsey B; Nagia, Lina; Lemos, Joao; Eggenberger, Eric R; Cornblath, Wayne T

    2015-10-01

    The sensitivity of acetylcholine receptor (AChR) antibody testing is thought to be lower in ocular myasthenia gravis (OMG) compared with generalized disease, although estimates in small-scale studies vary. There is little information in the literature about the implications of AChR antibody levels and progression from OMG to generalized myasthenia gravis. To test the hypothesis that serum AChR antibody testing is more sensitive in OMG than previously reported and to examine the association between AChR antibody levels and progression from OMG to generalized myasthenia gravis. A retrospective, observational cohort study was conducted of 223 patients (mean [SD] age, 59.2 [16.4] years; 139 [62.3%] male) diagnosed with OMG between July 1, 1986, and May 31, 2013, at 2 large, academic medical centers. Baseline characteristics, OMG symptoms, results of AChR antibody testing, and progression time to generalized myasthenia gravis (if this occurred) were recorded for each patient. Multiple logistic regression was used to measure the association between all clinical variables and antibody result. Kaplan-Meier survival analysis was performed to examine time to generalization. Among the 223 participants, AChR antibody testing results were positive in 158 participants (70.9%). In an adjusted model, increased age at diagnosis (odds ratio [OR], 1.03; 95% CI, 1.01-1.04; P = .007) and progression to generalized myasthenia gravis (OR, 2.92; 95% CI, 1.18-7.26; P = .02) were significantly associated with positive antibody test results. Women were less likely to have a positive antibody test result (OR, 0.36; 95% CI, 0.19-0.68; P = .002). Patients who developed symptoms of generalized myasthenia gravis had a significantly higher mean (SD) antibody level than those who did not develop symptoms of generalized myasthenia gravis (12.7 [16.5] nmol/L vs 4.2 [7.9] nmol/L; P = .002). 
We demonstrate a higher sensitivity of AChR antibody testing than previously reported in the largest cohort of patients with OMG available to date. Older age, male sex, and progression to generalized myasthenia gravis were significantly associated with a positive antibody test result. In addition, to our knowledge, this is the first report of an association between high AChR antibody levels and progression from OMG to generalized disease.

  4. Methane negative chemical ionization analysis of 1,3-dihydro-5-phenyl-1,4-benzodiazepin-2-ones.

    PubMed Central

    Garland, W A; Miwa, B J

    1980-01-01

    The methane negative chemical ionization (NCI) mass spectra of the medically important 1,3-dihydro-5-phenyl-1,4-benzodiazepin-2-ones generally consisted solely of M- and (M-H)- ions. Attempts to find the location of the H lost in the generation of the (M-H)- ion were unsuccessful, although many possibilities were eliminated. A Hammett correlation analysis of the relative sensitivities of a series of 7-substituted benzodiazepines suggested that the initial ionization takes place at the 4,5-imine bond. For certain benzodiazepines, the (M-H)- ion generated by methane NCI was 20 times more intense than the MH+ ion generated by methane positive chemical ionization (PCI). By using NCI, a sensitive and simple GC-MS assay for nordiazepam was developed that can quantitate this important metabolite of many of the clinically used benzodiazepines in the blood and brain of rats. PMID:6775944
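A Hammett correlation of the kind mentioned fits the log of relative sensitivities against substituent σ constants; the slope ρ indicates the electronic character of the ionization site. The data below are invented for illustration and are not the paper's measurements:

```python
# Hypothetical Hammett-style dataset: substituent sigma constants vs. the
# log of relative NCI sensitivity. Values are invented for illustration;
# the paper's actual 7-substituted benzodiazepine data are not reproduced.
sigma = [-0.27, 0.00, 0.23, 0.54, 0.78]    # e.g. OMe, H, Cl, CF3, NO2
log_rel = [-0.50, 0.00, 0.45, 1.05, 1.55]  # log10(S_X / S_H), fabricated

n = len(sigma)
mean_x = sum(sigma) / n
mean_y = sum(log_rel) / n
rho = sum((x - mean_x) * (y - mean_y) for x, y in zip(sigma, log_rel)) \
      / sum((x - mean_x) ** 2 for x in sigma)  # least-squares Hammett slope
print(round(rho, 2))
```

A positive ρ would indicate that electron-withdrawing substituents enhance sensitivity, the kind of evidence used to localize the initial ionization.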

  5. Reproducibility of EEG-fMRI results in a patient with fixation-off sensitivity.

    PubMed

    Formaggio, Emanuela; Storti, Silvia Francesca; Galazzo, Ilaria Boscolo; Bongiovanni, Luigi Giuseppe; Cerini, Roberto; Fiaschi, Antonio; Manganotti, Paolo

    2014-07-01

    Blood oxygenation level-dependent (BOLD) activation associated with interictal epileptiform discharges in a patient with fixation-off sensitivity (FOS) was studied using a combined electroencephalography-functional magnetic resonance imaging (EEG-fMRI) technique. An automatic approach for combined EEG-fMRI analysis and a subject-specific hemodynamic response function was used to improve general linear model analysis of the fMRI data. The EEG showed the typical features of FOS, with continuous epileptiform discharges during elimination of central vision by eye opening and closing and fixation; modification of this pattern was clearly visible and recognizable. During all 3 recording sessions EEG-fMRI activations indicated a BOLD signal decrease related to epileptiform activity in the parietal areas. This study can further our understanding of this EEG phenomenon and can provide some insight into the reliability of the EEG-fMRI technique in localizing the irritative zone.

  6. Accuracy of MRI for the diagnosis of metastatic cervical lymphadenopathy in patients with thyroid cancer.

    PubMed

    Chen, Qinghua; Raghavan, Prashant; Mukherjee, Sugoto; Jameson, Mark J; Patrie, James; Xin, Wenjun; Xian, Junfang; Wang, Zhenchang; Levine, Paul A; Wintermark, Max

    2015-10-01

    The aim of this study was to systematically compare a comprehensive array of magnetic resonance (MR) imaging features in terms of their sensitivity and specificity for diagnosing cervical lymph node metastases in patients with thyroid cancer. The study included 41 patients with thyroid malignancy who underwent surgical excision of cervical lymph nodes and had preoperative MR imaging ≤4 weeks prior to surgery. Three head and neck neuroradiologists independently evaluated all the MR images. Using the pathology results as reference, the sensitivity, specificity and interobserver agreement of each MR imaging characteristic were calculated. On multivariate analysis, no single imaging feature was significantly correlated with metastasis. In general, imaging features demonstrated high specificity, but poor sensitivity and at best moderate interobserver agreement. Commonly used MR imaging features have limited sensitivity for correctly identifying cervical lymph node metastases in patients with thyroid cancer. A negative neck MR scan should not dissuade a surgeon from performing a neck dissection in patients with thyroid carcinomas.

  7. Proteomic Signatures of the Zebrafish (Danio rerio) Embryo: Sensitivity and Specificity in Toxicity Assessment of Chemicals.

    PubMed

    Hanisch, Karen; Küster, Eberhard; Altenburger, Rolf; Gündel, Ulrike

    2010-01-01

    Studies using embryos of the zebrafish Danio rerio (DarT) instead of adult fish to characterise the (eco-)toxic potential of chemicals have been proposed as animal-replacing methods. Effect analysis at the molecular level might enhance the sensitivity, specificity, and predictive value of such embryonal studies. The present paper aimed to test the potential of toxicoproteomics with zebrafish eleutheroembryos for sensitive and specific toxicity assessment. 2-DE-based toxicoproteomics was performed applying low-dose (EC(10)) exposure for 48 h with three model substances: Rotenone, 4,6-dinitro-o-cresol (DNOC) and Diclofenac. By multivariate "pattern-only" PCA and univariate statistical analyses, alterations in the embryonal proteome were detectable in visibly intact organisms, and treatment with the three substances was distinguishable at the molecular level. Toxicoproteomics thus enhanced the sensitivity and specificity of the embryonal toxicity assay and shows potential to identify protein markers that serve as general stress markers and enable early diagnosis of toxic stress.

  8. Proteomic Signatures of the Zebrafish (Danio rerio) Embryo: Sensitivity and Specificity in Toxicity Assessment of Chemicals

    PubMed Central

    Hanisch, Karen; Küster, Eberhard; Altenburger, Rolf; Gündel, Ulrike

    2010-01-01

    Studies using embryos of the zebrafish Danio rerio (DarT) instead of adult fish to characterise the (eco-)toxic potential of chemicals have been proposed as animal-replacing methods. Effect analysis at the molecular level might enhance the sensitivity, specificity, and predictive value of such embryonal studies. The present paper aimed to test the potential of toxicoproteomics with zebrafish eleutheroembryos for sensitive and specific toxicity assessment. 2-DE-based toxicoproteomics was performed applying low-dose (EC10) exposure for 48 h with three model substances: Rotenone, 4,6-dinitro-o-cresol (DNOC) and Diclofenac. By multivariate “pattern-only” PCA and univariate statistical analyses, alterations in the embryonal proteome were detectable in visibly intact organisms, and treatment with the three substances was distinguishable at the molecular level. Toxicoproteomics thus enhanced the sensitivity and specificity of the embryonal toxicity assay and shows potential to identify protein markers that serve as general stress markers and enable early diagnosis of toxic stress. PMID:22084678

  9. Correlation of finite-element structural dynamic analysis with measured free vibration characteristics for a full-scale helicopter fuselage

    NASA Technical Reports Server (NTRS)

    Kenigsberg, I. J.; Dean, M. W.; Malatino, R.

    1974-01-01

    The correlation achieved with each program provides the material for a discussion of modeling techniques developed for general application to finite-element dynamic analyses of helicopter airframes. Included are the selection of static and dynamic degrees of freedom, cockpit structural modeling, and the extent of flexible-frame modeling in the transmission support region and in the vicinity of large cut-outs. The sensitivity of predicted results to these modeling assumptions is discussed. Both the Sikorsky Finite-Element Airframe Vibration Analysis Program (FRAN/Vibration Analysis) and the NASA Structural Analysis Program (NASTRAN) have been correlated with data taken in full-scale vibration tests of a modified CH-53A helicopter.

  10. Probabilistic structural analysis using a general purpose finite element program

    NASA Astrophysics Data System (ADS)

    Riha, D. S.; Millwater, H. R.; Thacker, B. H.

    1992-07-01

    This paper presents an accurate and efficient method to predict the probabilistic response for structural response quantities, such as stress, displacement, natural frequencies, and buckling loads, by combining the capabilities of MSC/NASTRAN, including design sensitivity analysis and fast probability integration. Two probabilistic structural analysis examples have been performed and verified by comparison with Monte Carlo simulation of the analytical solution. The first example consists of a cantilevered plate with several point loads. The second example is a probabilistic buckling analysis of a simply supported composite plate under in-plane loading. The coupling of MSC/NASTRAN and fast probability integration is shown to be orders of magnitude more efficient than Monte Carlo simulation with excellent accuracy.
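The Monte Carlo verification mentioned above amounts to sampling uncertain inputs and counting threshold exceedances. A toy sketch under assumed values (an end-loaded cantilever beam, not the paper's MSC/NASTRAN models or fast probability integration):

```python
import random

random.seed(0)

# Monte Carlo sketch of a probabilistic structural response, in the spirit of
# the paper's verification runs. The geometry, material and load statistics
# below are invented for illustration only.
length, modulus, inertia = 1.0, 70e9, 8e-6   # m, Pa, m^4 -- assumed values

def tip_deflection(load):
    """Classic end-loaded cantilever deflection: P*L^3 / (3*E*I)."""
    return load * length**3 / (3 * modulus * inertia)

threshold = 8e-4                              # m, arbitrary allowable deflection
loads = (random.gauss(1000.0, 150.0) for _ in range(100_000))
p_exceed = sum(tip_deflection(p) > threshold for p in loads) / 100_000
print(round(p_exceed, 3))
```

Fast probability integration reaches the same exceedance probability with orders of magnitude fewer model evaluations, which is the efficiency claim being verified.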

  11. Digression and Value Concatenation to Enable Privacy-Preserving Regression.

    PubMed

    Li, Xiao-Bai; Sarkar, Sumit

    2014-09-01

    Regression techniques can be used not only for legitimate data analysis, but also to infer private information about individuals. In this paper, we demonstrate that regression trees, a popular data-analysis and data-mining technique, can be used to effectively reveal individuals' sensitive data. This problem, which we call a "regression attack," has not been addressed in the data privacy literature, and existing privacy-preserving techniques are not appropriate in coping with this problem. We propose a new approach to counter regression attacks. To protect against privacy disclosure, our approach introduces a novel measure, called digression, which assesses the sensitive value disclosure risk in the process of building a regression tree model. Specifically, we develop an algorithm that uses the measure for pruning the tree to limit disclosure of sensitive data. We also propose a dynamic value-concatenation method for anonymizing data, which better preserves data utility than a user-defined generalization scheme commonly used in existing approaches. Our approach can be used for anonymizing both numeric and categorical data. An experimental study is conducted using real-world financial, economic and healthcare data. The results of the experiments demonstrate that the proposed approach is very effective in protecting data privacy while preserving data quality for research and analysis.

  12. Adaptation of an urban land surface model to a tropical suburban area: Offline evaluation, sensitivity analysis, and optimization of TEB/ISBA (SURFEX)

    NASA Astrophysics Data System (ADS)

    Harshan, Suraj

    The main objective of the present thesis is the improvement of the TEB/ISBA (SURFEX) urban land surface model (ULSM) through comprehensive evaluation, sensitivity analysis, and optimization experiments using energy balance, radiative, and air temperature data observed during 11 months at a tropical suburban site in Singapore. Overall the performance of the model is satisfactory, with a small underestimation of net radiation and an overestimation of sensible heat flux. Weaknesses in predicting the latent heat flux are apparent, with smaller model values during daytime, and the model also significantly underpredicts both the daytime peak and nighttime storage heat. Surface temperatures of all facets are generally overpredicted. Significant variation exists in the model behaviour between dry and wet seasons. The vegetation parametrization used in the model is inadequate to represent the moisture dynamics, producing unrealistically low latent heat fluxes during a particularly dry period. The comprehensive evaluation of the ULSM shows the need for accurate estimation of input parameter values for the present site. Since obtaining many of these parameters through empirical methods is not feasible, the present study employed a two-step approach aimed at providing information about the most sensitive parameters and an optimized parameter set from model calibration. Two well-established sensitivity analysis methods (global: Sobol; local: Morris) and a state-of-the-art multiobjective evolutionary algorithm (Borg) were employed for sensitivity analysis and parameter estimation. Experiments were carried out for three different weather periods. The analysis indicates that roof-related parameters are the most important in controlling the behaviour of the sensible heat flux and net radiation flux, with roof and road albedo the most influential parameters. Soil moisture initialization parameters are important in controlling the latent heat flux. The built (town) fraction has a significant influence on all fluxes considered. Comparison between the Sobol and Morris methods shows similar sensitivities, indicating the robustness of the present analysis and that the Morris method can be employed as a computationally cheaper alternative to Sobol's method. Both the optimization and the sensitivity experiments for the three periods (dry, wet and mixed) show noticeable differences in parameter sensitivity and parameter convergence, indicating inadequacies in model formulation. The existence of a significant proportion of less sensitive parameters might indicate an over-parametrized model. The Borg MOEA showed great promise in optimizing the input parameter set. The optimized model, modified using site-specific values for the thermal roughness length parametrization, shows improved performance for outgoing longwave radiation flux, overall surface temperature, heat storage flux and sensible heat flux.
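The Morris method referenced here screens inputs by averaging absolute elementary effects. A self-contained sketch on a toy linear model (standing in for TEB/ISBA, whose runs are far beyond a snippet):

```python
import random

random.seed(1)

def model(x):
    # Toy response with one dominant input, standing in for a model output
    # such as sensible heat flux; the real ULSM is not runnable here.
    return 10.0 * x[0] + 1.0 * x[1] + 0.1 * x[2]

def morris_mu_star(f, k, r=50, delta=0.25):
    """Morris screening: mean absolute elementary effect per input."""
    totals = [0.0] * k
    for _ in range(r):
        x = [random.uniform(0.0, 1.0 - delta) for _ in range(k)]
        base = f(x)
        for i in range(k):
            xp = x[:]
            xp[i] += delta  # perturb one input at a time
            totals[i] += abs(f(xp) - base) / delta
    return [t / r for t in totals]

mu = morris_mu_star(model, 3)
ranking = sorted(range(3), key=lambda i: -mu[i])
print(ranking)  # [0, 1, 2]: input 0 dominates, as built into the toy model
```

This one-at-a-time design is why Morris screening needs far fewer model runs than a variance-based Sobol analysis, matching the thesis's use of it as the cheaper alternative.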

  13. Contact sensitization in patients with suspected cosmetic intolerance: results of the IVDK 2006-2011.

    PubMed

    Dinkloh, A; Worm, M; Geier, J; Schnuch, A; Wollenberg, A

    2015-06-01

    Ingredients of leave-on cosmetics and body care products may sensitize. However, not every case of cosmetic intolerance is due to contact sensitization. To describe the frequency of contact sensitization due to cosmetics in a large clinic population, and a possible particular allergen pattern. Retrospective analysis of data from the Information Network of Departments of Dermatology, 2006-2011. Of 69 487 patients tested, 'cosmetics, creams, sunscreens' was the only suspected allergen source category in 10 124 patients (14.6%). A final diagnosis of 'allergic contact dermatitis' was stated in 2658 of these patients (26.3%). Compared to a control group, there were significantly more reactions to fragrance mixes I and II, balsam of Peru, methylchloroisothiazolinone/methylisothiazolinone (MCI/MI) and lanolin alcohols. No special pattern of fragrance sensitization could be identified. Among the preservatives, MI was by far the leading allergen, while sensitization to other widely used compounds like parabens or phenoxyethanol was rare. True allergic reactions to cosmetic ingredients are rarer than generally assumed. Limitation of exposure to MI in leave-on cosmetics and body care products is urgently needed. © 2014 European Academy of Dermatology and Venereology.

  14. [Hospital costs of invasive pneumococcal pneumonia in adults at a general hospital in Chile].

    PubMed

    Alarcón, Álvaro; Lagos, Isabel; Fica, Alberto

    2016-08-01

    Pneumococcal infections are important for their morbidity and economic burden, but there are no economic data from adult patients in Chile. To estimate the direct medical costs of bacteremic pneumococcal pneumonia among adult patients hospitalized in a general hospital, and to evaluate the sensitivity of ICD-10 discharge codes for capturing infections by this pathogen. Analysis of hospital charges by components in a group of patients admitted for bacteremic pneumococcal pneumonia, with correction of values for inflation and conversion from CLP to US$. Data were collected from 59 patients admitted during 2005-2010, mean age 71.9 years. Average hospital charges reached 2,756 US$ for those managed in general wards, 8,978 US$ for those managed in critical care units (CCU), and 6,025 US$ for the whole group. Charges were higher in the CCU (p < 0.001), and patients managed in these units generated 78.3% of the whole cost (n = 31; 52.5% of total). The median cost was 1,558 US$ in general wards and 3,993 US$ in the CCU. The main components were bed occupancy (37.8% of charges) and medications (27.4%). There were no differences associated with age, comorbidities, severity scores or mortality. No single ICD discharge code identified a bacteremic S. pneumoniae case (0% sensitivity), and only 2 cases were coded as pneumococcal pneumonia (3.4%). Mean hospital charges (~6,000 US dollars) and median values (~2,400 US dollars) were high, underscoring the economic impact of this condition. Costs were higher among patients managed in the CCU. Recognition of bacteremic pneumococcal infections by ICD-10 discharge codes has a very low sensitivity.

  15. Gastric adenocarcinoma screening and prevention in the era of new biomarker and endoscopic technologies: a cost-effectiveness analysis.

    PubMed

    Yeh, Jennifer M; Hur, Chin; Ward, Zachary; Schrag, Deborah; Goldie, Sue J

    2016-04-01

    To estimate the cost-effectiveness of noncardia gastric adenocarcinoma (NCGA) screening strategies based on new biomarker and endoscopic technologies. Using an intestinal-type NCGA microsimulation model, we evaluated the following one-time screening strategies for US men: (1) serum pepsinogen to detect gastric atrophy (with endoscopic follow-up of positive screen results), (2) endoscopic screening to detect dysplasia and asymptomatic cancer (with endoscopic mucosal resection (EMR) treatment for detected lesions) and (3) Helicobacter pylori screening and treatment. Screening performance, treatment effectiveness, cancer and cost data were based on published literature and databases. Subgroups included current, former and never smokers. Outcomes included lifetime cancer risk and incremental cost-effectiveness ratios (ICERs), expressed as cost per quality-adjusted life-year (QALY) gained. Screening the general population at age 50 years reduced the lifetime intestinal-type NCGA risk (0.24%) by 26.4% with serum pepsinogen screening, 21.2% with endoscopy and EMR, and 0.2% with H. pylori screening/treatment. Targeting current smokers reduced the lifetime risk (0.35%) by 30.8%, 25.5%, and 0.1%, respectively. For all subgroups, serum pepsinogen screening was more effective and more cost-effective than all other strategies, although its ICER varied from $76,000/QALY (current smokers) to $105,400/QALY (general population). Results were sensitive to H. pylori prevalence, screen age and serum pepsinogen test sensitivity. Probabilistic sensitivity analysis found that at a $100,000/QALY willingness-to-pay threshold, the probability that serum pepsinogen screening was preferred was 0.97 for current smokers. Although not warranted for the general population, targeting high-risk smokers for serum pepsinogen screening may be a cost-effective strategy to reduce intestinal-type NCGA mortality. Published by the BMJ Publishing Group Limited.
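The $/QALY figures in the abstract come from incremental cost-effectiveness ratio (ICER) arithmetic: incremental cost divided by incremental QALYs, compared against a willingness-to-pay threshold. A sketch with invented placeholder numbers, not the study's model outputs:

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost divided by incremental effectiveness ($ per QALY)."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical comparison: a screening strategy vs. no screening.
# Costs and QALYs are placeholders, not outputs of the study's model.
value = icer(cost_new=40400.0, qaly_new=22.0, cost_old=400.0, qaly_old=21.5)
print(value)             # 80000.0 dollars per QALY gained
print(value <= 100_000)  # below the willingness-to-pay threshold used above
```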

  16. HOMA-IR and QUICKI: decide on a general standard instead of making further comparisons.

    PubMed

    Rössner, Sophia M; Neovius, Martin; Mattsson, Anna; Marcus, Claude; Norgren, Svante

    2010-11-01

    To limit further comparisons between the two fasting indices Homeostasis Model Assessment for Insulin Resistance (HOMA-IR) and Quantitative Insulin Sensitivity Check Index (QUICKI), and to examine their robustness in assessing insulin sensitivity. A total of 191 obese children and adolescents (age 13.9 ± 2.9 years, BMI SDS 6.1 ± 1.6), who had undergone a Frequently Sampled Intravenous Glucose Tolerance Test (FSIVGTT), were included. Receiver operating characteristic curve (ROC) analysis was used to compare indices in detecting insulin resistance and Bland-Altman plots to investigate agreement between three consecutive fasting samples when compared to using single samples. ROC analysis showed that the diagnostic accuracy was identical for QUICKI and HOMA-IR [area under the curve (AUC) boys 0.80, 95%CI 0.70-0.89; girls 0.80, 0.71-0.88], while insulin had a nonsignificantly lower AUC (boys 0.76, 0.66-0.87; girls 0.75, 0.66-0.84). Glucose did not perform better than chance as a diagnostic test (boys 0.47, 0.34-0.60; girls 0.57, 0.46-0.68). Indices varied with consecutive sampling, mainly attributable to fasting insulin variations (mean maximum difference in HOMA-IR -0.8; -0.9 to -0.7). Using both HOMA-IR and QUICKI in further studies is superfluous as these indices function equally well as predictors of the FSIVGTT sensitivity index. Focus should be on establishing a general standard for research and clinical purposes. © 2010 The Author(s)/Journal Compilation © 2010 Foundation Acta Paediatrica.
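For reference, the two indices being compared are simple transformations of the same fasting samples; the standard definitions (from the original HOMA and QUICKI publications) are:

```python
import math

# Standard definitions of the two fasting indices compared in the abstract.
def homa_ir(glucose_mmol_l, insulin_uu_ml):
    return glucose_mmol_l * insulin_uu_ml / 22.5

def quicki(glucose_mg_dl, insulin_uu_ml):
    return 1.0 / (math.log10(insulin_uu_ml) + math.log10(glucose_mg_dl))

# Example: fasting glucose 5.0 mmol/L (= 90 mg/dL), fasting insulin 10 uU/mL.
print(round(homa_ir(5.0, 10.0), 2))   # 2.22
print(round(quicki(90.0, 10.0), 3))   # 0.338
```

Because both indices are monotone functions of the same insulin-glucose product, their identical diagnostic accuracy in the ROC analysis is unsurprising, which is the abstract's point.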

  17. Theoretical foundations for traditional and generalized sensitivity functions for nonlinear delay differential equations.

    PubMed

    Banks, H Thomas; Robbins, Danielle; Sutton, Karyn L

    2013-01-01

    In this paper we present new results for differentiability of delay systems with respect to initial conditions and delays. After motivating our results with a wide range of delay examples arising in biology applications, we further note the need for sensitivity functions (both traditional and generalized sensitivity functions), especially in control and estimation problems. We summarize general existence and uniqueness results before turning to our main results on differentiation with respect to delays, etc. Finally we discuss use of our results in the context of estimation problems.

  18. Critical analysis of radiologist-patient interaction.

    PubMed

    Morris, K J; Tarico, V S; Smith, W L; Altmaier, E M; Franken, E A

    1987-05-01

    A critical incident interview technique was used to identify features of radiologist-patient interactions considered effective and ineffective by patients. During structured interviews with 35 radiology patients and five patients' parents, three general categories of physician behavior were described: attention to patient comfort, explanation of procedure and results, and interpersonal sensitivity. The findings indicated that patients are sensitive to physicians' interpersonal styles and that they want physicians to explain procedures and results in an understandable manner and to monitor their well-being during procedures. The sample size of the study is small; thus further confirmation is needed. However, the implications for training residents and practicing radiologists in these behaviors are important in the current competitive medical milieu.

  19. From Lexical Tone to Lexical Stress: A Cross-Language Mediation Model for Cantonese Children Learning English as a Second Language

    PubMed Central

    Choi, William; Tong, Xiuli; Singh, Leher

    2017-01-01

    This study investigated how Cantonese lexical tone sensitivity contributed to English lexical stress sensitivity among Cantonese children who learned English as a second language (ESL). Five-hundred-and-sixteen second-to-third grade Cantonese ESL children were tested on their Cantonese lexical tone sensitivity, English lexical stress sensitivity, general auditory sensitivity, and working memory. Structural equation modeling revealed that Cantonese lexical tone sensitivity contributed to English lexical stress sensitivity both directly, and indirectly through the mediation of general auditory sensitivity, in which the direct pathway had a larger relative contribution to English lexical stress sensitivity than the indirect pathway. These results suggest that the tone-stress association might be accounted for by joint phonological and acoustic processes that underlie lexical tone and lexical stress perception. PMID:28408898
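The mediation decomposition behind the abstract's direct-versus-indirect comparison is simple path arithmetic: the indirect effect is the product of the two mediated paths. The coefficients below are invented for illustration, not the study's SEM estimates:

```python
# Standardized path coefficients: a (tone -> auditory sensitivity),
# b (auditory sensitivity -> stress sensitivity), c_direct (tone -> stress).
# All values are invented for illustration, not the study's SEM estimates.
a, b, c_direct = 0.5, 0.25, 0.375

indirect = a * b              # effect carried through the mediator
total = c_direct + indirect   # total effect of tone sensitivity
print(indirect, total)        # 0.125 0.5
print(c_direct > indirect)    # direct pathway contributes more, as reported
```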

  20. Intelligent interface design and evaluation

    NASA Technical Reports Server (NTRS)

    Greitzer, Frank L.

    1988-01-01

    Intelligent interface concepts and systematic approaches to assessing their functionality are discussed. Four general features of intelligent interfaces are described: interaction efficiency, subtask automation, context sensitivity, and use of an appropriate design metaphor. Three evaluation methods are discussed: Functional Analysis, Part-Task Evaluation, and Operational Testing. Design and evaluation concepts are illustrated with examples from a prototype expert system interface for environmental control and life support systems for manned space platforms.

  1. Accuracy of screening women at familial risk of breast cancer without a known gene mutation: Individual patient data meta-analysis.

    PubMed

    Phi, Xuan-Anh; Houssami, Nehmat; Hooning, Maartje J; Riedl, Christopher C; Leach, Martin O; Sardanelli, Francesco; Warner, Ellen; Trop, Isabelle; Saadatmand, Sepideh; Tilanus-Linthorst, Madeleine M A; Helbich, Thomas H; van den Heuvel, Edwin R; de Koning, Harry J; Obdeijn, Inge-Marie; de Bock, Geertruida H

    2017-11-01

    Women with a strong family history of breast cancer (BC) and without a known gene mutation have an increased risk of developing BC. We aimed to investigate the accuracy of screening using annual mammography with or without magnetic resonance imaging (MRI) for these women outside the general population screening program. An individual patient data (IPD) meta-analysis was conducted using IPD from six prospective screening trials that had included women at increased risk for BC: only women with a strong familial risk for BC and without a known gene mutation were included in this analysis. A generalised linear mixed model was applied to estimate and compare screening accuracy (sensitivity, specificity and predictive values) for annual mammography with or without MRI. There were 2226 women (median age: 41 years, interquartile range 35-47) with 7478 woman-years of follow-up, with a BC rate of 12 (95% confidence interval 9.3-14) per 1000 woman-years. Mammography screening had a sensitivity of 55% (standard error of mean [SE] 7.0) and a specificity of 94% (SE 1.3). Screening with MRI alone had a sensitivity of 89% (SE 4.6) and a specificity of 83% (SE 2.8). Adding MRI to mammography increased sensitivity to 98% (SE 1.8, P < 0.01 compared to mammography alone) but lowered specificity to 79% (SE 2.7, P < 0.01 compared with mammography alone). In this population of women with strong familial BC risk but without a known gene mutation, in whom BC incidence was high both before and after age 50, adding MRI to mammography substantially increased screening sensitivity but also decreased its specificity. Copyright © 2017 Elsevier Ltd. All rights reserved.
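The accuracy measures reported here derive from a 2×2 screening table. A sketch with hypothetical counts chosen to mirror the mammography-alone operating point, not the trial's actual data:

```python
# Hypothetical 2x2 screening table per 200 women; the counts are chosen to
# mirror the mammography-alone operating point (sensitivity 0.55,
# specificity 0.94) and are not the meta-analysis's actual data.
tp, fn, fp, tn = 55, 45, 6, 94

sensitivity = tp / (tp + fn)   # proportion of cancers the screen detects
specificity = tn / (tn + fp)   # proportion of healthy women correctly cleared
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value
print(sensitivity, specificity)       # 0.55 0.94
print(round(ppv, 2), round(npv, 2))   # 0.9 0.68
```

Adding a second modality raises the true-positive count at the cost of more false positives, which is exactly the sensitivity/specificity trade-off the abstract quantifies for mammography plus MRI.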

  2. Evaluation of habitat suitability index models by global sensitivity and uncertainty analyses: a case study for submerged aquatic vegetation

    USGS Publications Warehouse

    Zajac, Zuzanna; Stith, Bradley M.; Bowling, Andrea C.; Langtimm, Catherine A.; Swain, Eric D.

    2015-01-01

    Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust decisions.
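    HSI models typically combine several component suitability indices (each scaled 0-1) into a single score, often via a geometric mean. A minimal sketch of that common functional form (the component variables here are illustrative and are not taken from the study's SAV models):

```python
import math

def hsi(suitability_indices):
    """Composite habitat suitability: geometric mean of component SIs, each in [0, 1]."""
    n = len(suitability_indices)
    return math.prod(suitability_indices) ** (1.0 / n)

# Hypothetical component scores, e.g. for salinity, depth and light attenuation
print(round(hsi([0.9, 0.6, 0.75]), 3))
```

The geometric mean makes the composite score zero whenever any single component is unsuitable, which is one reason this form is often preferred over an arithmetic mean.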

  3. The difference between temperate and tropical saltwater species' acute sensitivity to chemicals is relatively small.

    PubMed

    Wang, Zhen; Kwok, Kevin W H; Lui, Gilbert C S; Zhou, Guang-Jie; Lee, Jae-Seong; Lam, Michael H W; Leung, Kenneth M Y

    2014-06-01

    Due to a lack of saltwater toxicity data in tropical regions, toxicity data generated from temperate or cold water species endemic to North America and Europe are often adopted to derive water quality guidelines (WQG) for protecting tropical saltwater species. If chemical toxicity to most saltwater organisms increases with water temperature, the use of temperate species data and associated WQG may result in under-protection of tropical species. Given the differences in species composition and environmental attributes between tropical and temperate saltwater ecosystems, there are conceivable uncertainties in such 'temperate-to-tropic' extrapolations. This study aims to compare temperate and tropical saltwater species' acute sensitivity to 11 chemicals through a comprehensive meta-analysis, by comparing species sensitivity distributions (SSDs) between the two groups. A 10th percentile hazardous concentration (HC10) is derived from each SSD, and then a temperate-to-tropic HC10 ratio is computed for each chemical. Our results demonstrate that temperate and tropical saltwater species display significantly different sensitivity towards all test chemicals except cadmium, although such differences are small, with the HC10 ratios ranging only from 0.094 (un-ionised ammonia) to 2.190 (pentachlorophenol). Temperate species are more sensitive to un-ionised ammonia, chromium, lead, nickel and tributyltin, whereas tropical species are more sensitive to copper, mercury, zinc, phenol and pentachlorophenol. Through comparison of a limited number of taxon-specific SSDs, we observe a general decline in chemical sensitivity from algae to crustaceans, molluscs and then fishes. Following a statistical analysis of the results, we recommend an extrapolation factor of two for deriving tropical WQG from temperate information. Copyright © 2013 Elsevier Ltd. All rights reserved.
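    The HC10 here is the concentration at which 10% of species in the SSD are affected. Assuming a log-normal SSD fitted to acute toxicity values (a common choice, though the paper's exact fitting method may differ, and the LC50 values below are hypothetical), the HC10 and the temperate-to-tropic ratio can be sketched as:

```python
import math
from statistics import NormalDist, mean, stdev

def hc(toxicity_values, p=0.10):
    """p-th percentile hazardous concentration from a log-normal SSD."""
    logs = [math.log10(v) for v in toxicity_values]
    return 10 ** (mean(logs) + NormalDist().inv_cdf(p) * stdev(logs))

# Hypothetical acute LC50s (mg/L) for two species pools
temperate = [1.2, 3.5, 8.0, 15.0, 40.0, 110.0]
tropical = [0.9, 2.8, 6.5, 18.0, 55.0, 130.0]
print(round(hc(temperate) / hc(tropical), 2))  # temperate-to-tropic HC10 ratio
```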

  4. On simple aerodynamic sensitivity derivatives for use in interdisciplinary optimization

    NASA Technical Reports Server (NTRS)

    Doggett, Robert V., Jr.

    1991-01-01

    Low-aspect-ratio and piston aerodynamic theories are reviewed as to their use in developing aerodynamic sensitivity derivatives for use in multidisciplinary optimization applications. The basic equations relating surface pressure (or lift and moment) to normal wash are given and discussed briefly for each theory. The general means for determining selected sensitivity derivatives are pointed out. In addition, some suggestions in very general terms are included as to sample problems for use in studying the process of using aerodynamic sensitivity derivatives in optimization studies.

  5. Bubbles, Gating, and Anesthetics in Ion Channels

    PubMed Central

    Roth, Roland; Gillespie, Dirk; Nonner, Wolfgang; Eisenberg, Robert E.

    2008-01-01

    We suggest that bubbles are the bistable hydrophobic gates responsible for the on-off transitions of single channel currents. In this view, many types of channels gate by the same physical mechanism—dewetting by capillary evaporation—but different types of channels use different sensors to modulate hydrophobic properties of the channel wall and thereby trigger and control bubbles and gating. Spontaneous emptying of channels has been seen in many simulations. Because of the physics involved, such phase transitions are inherently sensitive, unstable threshold phenomena that are difficult to simulate reproducibly and thus convincingly. We present a thermodynamic analysis of a bubble gate using morphometric density functional theory of classical (not quantum) mechanics. Thermodynamic analysis of phase transitions is generally more reproducible and less sensitive to details than simulations. Anesthetic actions of inert gases—and their interactions with hydrostatic pressure (e.g., nitrogen narcosis)—can be easily understood by actions on bubbles. A general theory of gas anesthesia may involve bubbles in channels. Only experiments can show whether, or when, or which channels actually use bubbles as hydrophobic gates: direct observation of bubbles in channels is needed. Existing experiments show thin gas layers on hydrophobic surfaces in water and suggest that bubbles nearly exist in bulk water. PMID:18234836

  6. A selection model for accounting for publication bias in a full network meta-analysis.

    PubMed

    Mavridis, Dimitris; Welton, Nicky J; Sutton, Alex; Salanti, Georgia

    2014-12-30

    Copas and Shi suggested a selection model to explore the potential impact of publication bias via sensitivity analysis based on assumptions for the probability of publication of trials conditional on the precision of their results. Chootrakool et al. extended this model to three-arm trials but did not fully account for the implications of the consistency assumption, and their model is difficult to generalize for complex network structures with more than three treatments. Fitting these selection models within a frequentist setting requires maximization of a complex likelihood function, and identification problems are common. We have previously presented a Bayesian implementation of the selection model when multiple treatments are compared with a common reference treatment. We now present a general model suitable for complex, full network meta-analysis that accounts for consistency when adjusting results for publication bias. We developed a design-by-treatment selection model to describe the mechanism by which studies with different designs (sets of treatments compared in a trial) and precision may be selected for publication. We fit the model in a Bayesian setting because it avoids the numerical problems encountered in the frequentist setting, it is generalizable with respect to the number of treatments and study arms, and it provides a flexible framework for sensitivity analysis using external knowledge. Our model accounts for the additional uncertainty arising from publication bias more successfully compared to the standard Copas model or its previous extensions. We illustrate the methodology using a published triangular network for the failure of vascular graft or arterial patency. Copyright © 2014 John Wiley & Sons, Ltd.

  7. Cost-Effectiveness of a Model Infection Control Program for Preventing Multi-Drug-Resistant Organism Infections in Critically Ill Surgical Patients.

    PubMed

    Jayaraman, Sudha P; Jiang, Yushan; Resch, Stephen; Askari, Reza; Klompas, Michael

    2016-10-01

    Interventions to contain two multi-drug-resistant Acinetobacter (MDRA) outbreaks reduced the incidence of multi-drug-resistant (MDR) organisms, specifically methicillin-resistant Staphylococcus aureus, vancomycin-resistant Enterococcus, and Clostridium difficile, in the general surgery intensive care unit (ICU) of our hospital. We therefore conducted a cost-effectiveness analysis of a proactive model infection-control program to reduce transmission of MDR organisms based on the practices used to control the MDRA outbreak. We created a model of a proactive infection control program based on the 2011 MDRA outbreak response. We built a decision analysis model and performed univariable and probabilistic sensitivity analyses to evaluate the cost-effectiveness of the proposed program compared with standard infection control practices to reduce transmission of these MDR organisms. The cost of a proactive infection control program would be $68,509 per year. The incremental cost-effectiveness ratio (ICER) was calculated to be $3,804 per transmission of MDR organisms averted over a one-year period compared with standard infection control. On the basis of probabilistic sensitivity analysis, a willingness-to-pay (WTP) threshold of $14,000 per transmission averted would have a 42% probability of being cost-effective, rising to 100% at $22,000 per transmission averted. This analysis gives an estimated ICER for implementing a proactive program to prevent transmission of MDR organisms in the general surgery ICU. To better understand the causal relations between the critical steps in the program and the rate reductions, a randomized study of a package of interventions to prevent healthcare-associated infections should be considered.
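    The ICER reported above is the standard incremental cost-effectiveness ratio: the extra cost of the new strategy divided by its extra effect. A minimal sketch of the arithmetic (all figures below are hypothetical and do not reproduce the study's cost model):

```python
def icer(cost_new, cost_std, effect_new, effect_std):
    """Incremental cost per additional unit of effect (here, transmissions averted)."""
    return (cost_new - cost_std) / (effect_new - effect_std)

# Hypothetical annual figures: proactive program vs. standard infection control
print(round(icer(cost_new=78509, cost_std=10000, effect_new=20, effect_std=2)))
```

A strategy is then judged cost-effective when its ICER falls below the chosen willingness-to-pay threshold, which is what the probabilistic sensitivity analysis varies.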

  8. Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urrego-Blanco, Jorge Rolando; Urban, Nathan Mark; Hunke, Elizabeth Clare

    Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. Lastly, it is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
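    The variance-based first-order indices described here can be estimated from paired sample matrices. A minimal sketch of one standard estimator, using plain pseudo-random sampling and a toy two-parameter stand-in for the sea ice model (the study itself draws quasi-random Sobol' sequences over the full 39-dimensional space and evaluates a fast emulator):

```python
import numpy as np

def toy_model(x):
    # Toy stand-in for the sea ice model: output driven strongly by x0, weakly by x1.
    return 4.0 * x[:, 0] + 0.5 * x[:, 1] ** 2

def first_order_sobol(f, k, n=200_000, seed=0):
    """First-order Sobol' indices via paired sample matrices A and B."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, k))
    B = rng.random((n, k))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    indices = []
    for i in range(k):
        AB = A.copy()
        AB[:, i] = B[:, i]  # swap one column: isolates the effect of parameter i
        indices.append(float(np.mean(fB * (f(AB) - fA)) / var))
    return indices

s = first_order_sobol(toy_model, k=2)
print([round(v, 2) for v in s])  # x0 dominates the output variance
```

For a nearly additive model like this one, the first-order indices sum to roughly 1; interactions would show up as a gap between that sum and 1.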

  9. The NEXUS criteria are insufficient to exclude cervical spine fractures in older blunt trauma patients.

    PubMed

    Paykin, Gabriel; O'Reilly, Gerard; Ackland, Helen M; Mitra, Biswadev

    2017-05-01

    The National Emergency X-Radiography Utilization Study (NEXUS) criteria are used to assess the need for imaging to evaluate cervical spine integrity after injury. The aim of this study was to assess the sensitivity of the NEXUS criteria in older blunt trauma patients. Patients aged 65 years or older presenting between 1st July 2010 and 30th June 2014 and diagnosed with cervical spine fractures were identified from the institutional trauma registry. Clinical examination findings were extracted from electronic medical records. Data on the NEXUS criteria were collected and the sensitivity of the rule to exclude a fracture was calculated. Over the study period, 231,018 patients presented to The Alfred Emergency & Trauma Centre, of whom 14,340 met the institutional trauma registry inclusion criteria and 4035 were aged ≥65 years. Among these, 468 patients were diagnosed with cervical spine fractures, of whom 21 were determined to be NEXUS negative. The NEXUS criteria performed with a sensitivity of 94.8% [95% CI: 92.1%-96.7%] on complete case analysis in older blunt trauma patients. One-way sensitivity analysis resulted in a maximum sensitivity limit of 95.5% [95% CI: 93.2%-97.2%]. Compared with the general adult blunt trauma population, the NEXUS criteria are less sensitive in excluding cervical spine fractures in older blunt trauma patients. We therefore suggest that liberal imaging be considered for older patients regardless of history or examination findings and that the addition of an age criterion to the NEXUS criteria be investigated in future studies. Copyright © 2017 Elsevier Ltd. All rights reserved.
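    The maximum sensitivity bound follows directly from the reported counts (447 of 468 fractures flagged NEXUS positive). A sketch with a Wilson score interval; note the paper does not state which CI method it used, so the interval construction here is an assumption:

```python
from statistics import NormalDist

def wilson_ci(successes, n, conf=0.95):
    """Wilson score confidence interval for a binomial proportion."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * ((p * (1 - p) / n + z**2 / (4 * n**2)) ** 0.5) / denom
    return centre - half, centre + half

detected, fractures = 468 - 21, 468
print(round(detected / fractures, 3))  # 0.955, the one-way maximum limit
print(tuple(round(x, 3) for x in wilson_ci(detected, fractures)))
```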

  10. [Evaluation of land resources carrying capacity of development zone based on planning environment impact assessment].

    PubMed

    Fu, Shi-Feng; Zhang, Ping; Jiang, Jin-Long

    2012-02-01

    Assessment of land resources carrying capacity is the key point of planning environment impact assessment and the main foundation for determining whether a plan can be implemented. With the help of the spatial analysis functions of Geographic Information Systems, and selecting altitude, slope, land use type, distance from residential land, distance from main traffic roads, and distance from environmentally sensitive areas as the sensitive factors, a comprehensive assessment of the ecological sensitivity and its spatial distribution in Zhangzhou Merchants Economic and Technological Development Zone, Fujian Province of East China, was conducted, and the assessment results were combined with the planned land use layout diagram for the ecological suitability analysis. In the Development Zone, 84.0% of residential land, 93.1% of industrial land, 86.0% of traffic land, and 76.0% of other constructive lands in the planning were located in insensitive and gently sensitive areas; thus, implementation of the land use planning generally had little impact on the ecological environment, and the land resources in the planning area were able to meet the land use demand. The assessment of the population carrying capacity with ecological land as the limiting factor indicated that, when the highly sensitive area and 60% of the moderately sensitive area are counted as ecological land, the population within the Zone under the planning could reach 240,000, and the available land area per capita could be 134.0 m². Such a planned population scale is appropriate according to the related standards of constructive land.

  11. Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model

    DOE PAGES

    Urrego-Blanco, Jorge Rolando; Urban, Nathan Mark; Hunke, Elizabeth Clare; ...

    2016-04-01

    Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. Lastly, it is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.

  12. Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model

    NASA Astrophysics Data System (ADS)

    Urrego-Blanco, Jorge R.; Urban, Nathan M.; Hunke, Elizabeth C.; Turner, Adrian K.; Jeffery, Nicole

    2016-04-01

    Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. It is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.

  13. Pyrotechnic hazards classification and evaluation program. Phase 2, segment 1: Records and experience analysis

    NASA Technical Reports Server (NTRS)

    1970-01-01

    A comprehensive search, review, and analysis was made of various technical documents relating to both pyrotechnics and high explosives testing, handling, storage, manufacturing, physical and chemical characteristics, and accidents and incidents. Of approximately 5000 technical abstracts reviewed, 300 applicable documents were analyzed in detail. These 300 documents were then converted to a subject matrix so that they may be readily referenced for application to current programs. It was generally concluded that information in several important categories was lacking, two of the more important being pyrotechnics sensitivity testing and TNT equivalency testing. A general recommendation resulting from this study was that this activity continue and that a comprehensive data bank be generated to allow immediate access to a large volume of pertinent information in a relatively short period of time.

  14. Identification of Shiga-Toxigenic Escherichia coli outbreak isolates by a novel data analysis tool after matrix-assisted laser desorption/ionization time-of-flight mass spectrometry.

    PubMed

    Christner, Martin; Dressler, Dirk; Andrian, Mark; Reule, Claudia; Petrini, Orlando

    2017-01-01

    The fast and reliable characterization of bacterial and fungal pathogens plays an important role in infectious disease control and tracking of outbreak agents. DNA-based methods are the gold standard for epidemiological investigations, but they are still comparatively expensive and time-consuming. Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) is a fast, reliable and cost-effective technique now routinely used to identify clinically relevant human pathogens. It has been used for subspecies differentiation and typing, but its use for epidemiological tasks, e.g., outbreak investigations, is often hampered by the complexity of data analysis. We have analysed publicly available MALDI-TOF mass spectra from a large outbreak of Shiga-Toxigenic Escherichia coli in northern Germany using a general purpose software tool for the analysis of complex biological data. The software was challenged with depauperate spectra and reduced learning group sizes to mimic poor spectrum quality and scarcity of reference spectra at the onset of an outbreak. With high quality formic acid extraction spectra, the software's built-in classifier accurately identified outbreak-related strains using as few as 10 reference spectra (99.8% sensitivity, 98.0% specificity). Selective variation of processing parameters showed impaired marker peak detection and reduced classification accuracy in samples with high background noise or artificially reduced peak counts. However, the software consistently identified mass signals suitable for a highly reliable marker-peak-based classification approach (100% sensitivity, 99.5% specificity) even from low quality direct deposition spectra. The study demonstrates that general purpose data analysis tools can effectively be used for the analysis of bacterial mass spectra.

  15. A sensitivity analysis for a thermomechanical model of the Antarctic ice sheet and ice shelves

    NASA Astrophysics Data System (ADS)

    Baratelli, F.; Castellani, G.; Vassena, C.; Giudici, M.

    2012-04-01

    The outcomes of an ice sheet model depend on a number of parameters and physical quantities which are often estimated with large uncertainty, because of lack of sufficient experimental measurements in such remote environments. Therefore, the efforts to improve the accuracy of the predictions of ice sheet models by including more physical processes and interactions with atmosphere, hydrosphere and lithosphere can be affected by the inaccuracy of the fundamental input data. A sensitivity analysis can help to understand which input data most affect the different predictions of the model. In this context, a finite difference thermomechanical ice sheet model based on the Shallow-Ice Approximation (SIA) and on the Shallow-Shelf Approximation (SSA) has been developed and applied for the simulation of the evolution of the Antarctic ice sheet and ice shelves for the last 200,000 years. The sensitivity analysis of the model outcomes (e.g., the volume of the ice sheet and of the ice shelves, the basal melt rate of the ice sheet, the mean velocity of the Ross and Ronne-Filchner ice shelves, the wet area at the base of the ice sheet) with respect to the model parameters (e.g., the basal sliding coefficient, the geothermal heat flux, the present-day surface accumulation and temperature, the mean ice shelves viscosity, the melt rate at the base of the ice shelves) has been performed by computing three synthetic numerical indices: two local sensitivity indices and a global sensitivity index. Local sensitivity indices imply a linearization of the model and neglect both non-linear and joint effects of the parameters. The global variance-based sensitivity index, instead, takes into account the complete variability of the input parameters but is usually computed with a Monte Carlo approach, which is computationally very demanding for non-linear complex models. Therefore, the global sensitivity index has been computed using a development of the model outputs in a neighborhood of the reference parameter values with a second-order approximation. The comparison of the three sensitivity indices proved that the approximation of the non-linear model with a second-order expansion is sufficient to show some differences between the local and the global indices. As a general result, the sensitivity analysis showed that most of the model outcomes are mainly sensitive to the present-day surface temperature and accumulation, which, in principle, can be measured more easily (e.g., with remote sensing techniques) than the other input parameters considered. On the other hand, the parameters to which the model is least sensitive are the basal sliding coefficient and the mean ice shelves viscosity.
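    The local indices mentioned above linearize the model around reference parameter values. A normalized local sensitivity index can be sketched with central finite differences; the toy outcome function below is illustrative only, not the thermomechanical ice sheet model:

```python
def local_sensitivity(f, p0, i, rel=1e-5):
    """Normalized local index (dF/dp_i) * p_i / F, via central differences at p0."""
    h = rel * p0[i]
    up, down = list(p0), list(p0)
    up[i] += h
    down[i] -= h
    dF = (f(up) - f(down)) / (2 * h)
    return dF * p0[i] / f(p0)

# Toy outcome: an "ice volume" scaling with accumulation p[0] and sliding coefficient p[1]
f = lambda p: p[0] ** 2 / p[1] ** 0.5
p0 = [3.0, 4.0]
print(round(local_sensitivity(f, p0, 0), 3), round(local_sensitivity(f, p0, 1), 3))
```

Because the index is normalized by p_i / F, it reads as "percent change in output per percent change in parameter", making parameters with different units directly comparable.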

  16. A comparative analysis of area navigation systems in general aviation. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Dodge, S. M.

    1973-01-01

    Radio navigation systems which offer the capabilities of area navigation to general aviation operators are discussed. The systems considered are: (1) the VORTAC system, (2) the Loran-C system, and (3) the Differential Omega system. The initial analyses are directed toward a comparison of the systems with respect to their compliance with specified performance parameters and to the cost effectiveness of each system in relation to those specifications. Further analyses lead to the development of system cost sensitivity charts, and the employment of these charts allows conclusions to be drawn relative to the cost-effectiveness of the candidate navigation systems.

  17. Periodic matrix population models: growth rate, basic reproduction number, and entropy.

    PubMed

    Bacaër, Nicolas

    2009-10-01

    This article considers three different aspects of periodic matrix population models. First, a formula for the sensitivity analysis of the growth rate lambda is obtained that is simpler than the one obtained by Caswell and Trevisan. Secondly, the formula for the basic reproduction number R0 in a constant environment is generalized to the case of a periodic environment. Some inequalities between lambda and R0 proved by Cushing and Zhou are also generalized to the periodic case. Finally, we add some remarks on Demetrius' notion of evolutionary entropy H and its relationship to the growth rate lambda in the periodic case.
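    In a periodic matrix population model, the growth rate lambda is the dominant eigenvalue of the product of the seasonal projection matrices over one full period. A minimal numpy sketch with hypothetical two-season, two-stage matrices:

```python
import numpy as np

# Hypothetical seasonal projection matrices for a two-stage population
summer = np.array([[0.0, 2.0],
                   [0.6, 0.8]])
winter = np.array([[0.0, 0.5],
                   [0.3, 0.9]])

annual = winter @ summer  # project over one full period (matrix order matters)
lam = max(abs(np.linalg.eigvals(annual)))
print(round(float(lam), 3))  # dominant eigenvalue = periodic growth rate; prints 1.5
```

Since cyclic permutations of a matrix product share the same nonzero eigenvalues, lambda does not depend on which season is taken as the starting point.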

  18. Multidisciplinary optimization of a controlled space structure using 150 design variables

    NASA Technical Reports Server (NTRS)

    James, Benjamin B.

    1992-01-01

    A general optimization-based method for the design of large space platforms through integration of the disciplines of structural dynamics and control is presented. The method uses the global sensitivity equations approach and is especially appropriate for preliminary design problems in which the structural and control analyses are tightly coupled. The method is capable of coordinating general purpose structural analysis, multivariable control, and optimization codes, and thus, can be adapted to a variety of controls-structures integrated design projects. The method is used to minimize the total weight of a space platform while maintaining a specified vibration decay rate after slewing maneuvers.

  19. Analysis of image formation in optical coherence elastography using a multiphysics approach

    PubMed Central

    Chin, Lixin; Curatolo, Andrea; Kennedy, Brendan F.; Doyle, Barry J.; Munro, Peter R. T.; McLaughlin, Robert A.; Sampson, David D.

    2014-01-01

    Image formation in optical coherence elastography (OCE) results from a combination of two processes: the mechanical deformation imparted to the sample and the detection of the resulting displacement using optical coherence tomography (OCT). We present a multiphysics model of these processes, validated by simulating strain elastograms acquired using phase-sensitive compression OCE, and demonstrating close correspondence with experimental results. Using the model, we present evidence that the approximation commonly used to infer sample displacement in phase-sensitive OCE is invalidated for smaller deformations than has been previously considered, significantly affecting the measurement precision, as quantified by the displacement sensitivity and the elastogram signal-to-noise ratio. We show how the precision of OCE is affected not only by OCT shot-noise, as is usually considered, but additionally by phase decorrelation due to the sample deformation. This multiphysics model provides a general framework that could be used to compare and contrast different OCE techniques. PMID:25401007

  20. Mixture models in diagnostic meta-analyses--clustering summary receiver operating characteristic curves accounted for heterogeneity and correlation.

    PubMed

    Schlattmann, Peter; Verba, Maryna; Dewey, Marc; Walther, Mario

    2015-01-01

    Bivariate linear and generalized linear random-effects models are frequently used to perform a diagnostic meta-analysis. The objective of this article was to apply a finite mixture model of bivariate normal distributions that can be used for the construction of componentwise summary receiver operating characteristic (sROC) curves. Bivariate linear random effects and a bivariate finite mixture model are used. The latter model is developed as an extension of a univariate finite mixture model. Two examples, computed tomography (CT) angiography for ruling out coronary artery disease and procalcitonin as a diagnostic marker for sepsis, are used to estimate mean sensitivity and mean specificity and to construct sROC curves. The suggested approach of a bivariate finite mixture model identifies two latent classes of diagnostic accuracy for the CT angiography example. Both classes show high sensitivity but mainly two different levels of specificity. For the procalcitonin example, this approach identifies three latent classes of diagnostic accuracy. Here, sensitivities and specificities differ such that sensitivity increases with decreasing specificity. Additionally, the model is used to construct componentwise sROC curves and to classify individual studies. The proposed method offers an alternative approach to model between-study heterogeneity in a diagnostic meta-analysis. Furthermore, it is possible to construct sROC curves even if a positive correlation between sensitivity and specificity is present. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. General anaesthetics and the acetylcholine-sensitivity of cortical neurons.

    PubMed Central

    Smaje, J C

    1976-01-01

    1 The effects of general anaesthetics on neuronal responses to iontophoretically-applied acetylcholine have been examined in slices of guinea-pig olfactory cortex maintained in vitro. 2 Acetylcholine excited 61% of the prepiriform neurones tested. The excitation was blocked by atropine, but not by dihydro-beta-erythroidine or gallamine. 3 Alphaxalone reversibly depressed the acetylcholine-sensitivity of prepiriform neurones. Pentobarbitone did not consistently depress the acetylcholine sensitivity of these cells. 4 Ether, methoxyflurane, trichloroethylene and halothane caused a dose-related augmentation of acetylcholine-induced firing. 5 These results show that general anaesthetics do not necessarily depress the sensitivity of nerve cells to all excitatory substances and that different anaesthetics may affect a particular excitatory process in various ways. PMID:990586

  2. Margin and sensitivity methods for security analysis of electric power systems

    NASA Astrophysics Data System (ADS)

    Greene, Scott L.

    Reliable operation of large scale electric power networks requires that system voltages and currents stay within design limits. Operation beyond those limits can lead to equipment failures and blackouts. Security margins measure the amount by which system loads or power transfers can change before a security violation, such as an overloaded transmission line, is encountered. This thesis shows how to efficiently compute security margins defined by limiting events and instabilities, and the sensitivity of those margins with respect to assumptions, system parameters, operating policy, and transactions. Security margins to voltage collapse blackouts, oscillatory instability, generator limits, voltage constraints and line overloads are considered. The usefulness of computing the sensitivities of these margins with respect to interarea transfers, loading parameters, generator dispatch, transmission line parameters, and VAR support is established for networks as large as 1500 buses. The sensitivity formulas presented apply to a range of power system models. Conventional sensitivity formulas such as line distribution factors, outage distribution factors, participation factors and penalty factors are shown to be special cases of the general sensitivity formulas derived in this thesis. The sensitivity formulas readily accommodate sparse matrix techniques. Margin sensitivity methods are shown to work effectively for avoiding voltage collapse blackouts caused by either saddle node bifurcation of equilibria or immediate instability due to generator reactive power limits. Extremely fast contingency analysis for voltage collapse can be implemented with margin sensitivity based rankings. Interarea transfer can be limited by voltage limits, line limits, or voltage stability. The sensitivity formulas presented in this thesis apply to security margins defined by any limit criteria. 
A method to compute transfer margins by directly locating intermediate events reduces the total number of loadflow iterations required by each margin computation and provides sensitivity information at minimal additional cost. Estimates of the effect of simultaneous transfers on the transfer margins agree well with the exact computations for a network model derived from a portion of the U.S. grid. The accuracy of the estimates over a useful range of conditions and the ease of obtaining the estimates suggest that the sensitivity computations will be of practical value.
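The margin sensitivities described above follow the standard implicit-function pattern: if a security margin lambda* is defined by a limiting condition g(lambda*, p) = 0, then d(lambda*)/dp = -(dg/dp)/(dg/dlambda). A minimal sketch with a toy, illustrative constraint (not a real loadflow):

```python
LIMIT = 1.0  # thermal limit on the (illustrative) line flow

def g(lam, x):
    """Toy limiting condition: flow lam/x reaches LIMIT when g = 0.
    lam = transfer level, x = line reactance (the varied parameter)."""
    return lam / x - LIMIT

def margin(x):
    """Transfer margin: the lam solving g(lam, x) = 0 (here, analytically)."""
    return LIMIT * x

def margin_sensitivity(x):
    """Implicit function theorem: dlam*/dx = -(dg/dx) / (dg/dlam)."""
    lam = margin(x)
    dg_dx = -lam / x ** 2
    dg_dlam = 1.0 / x
    return -dg_dx / dg_dlam

# Cross-check the analytic sensitivity against a central finite difference
x0, h = 0.5, 1e-6
finite_diff = (margin(x0 + h) - margin(x0 - h)) / (2 * h)
analytic = margin_sensitivity(x0)
```

The thesis's distribution and penalty factors arise as special cases of the same derivative formula applied to specific limit criteria.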

  3. Combined Use of Integral Experiments and Covariance Data

    NASA Astrophysics Data System (ADS)

    Palmiotti, G.; Salvatores, M.; Aliberti, G.; Herman, M.; Hoblit, S. D.; McKnight, R. D.; Obložinský, P.; Talou, P.; Hale, G. M.; Hiruta, H.; Kawano, T.; Mattoon, C. M.; Nobre, G. P. A.; Palumbo, A.; Pigni, M.; Rising, M. E.; Yang, W.-S.; Kahler, A. C.

    2014-04-01

    In the frame of a US-DOE sponsored project, ANL, BNL, INL and LANL have performed a joint multidisciplinary research activity to explore the combined use of integral experiments and covariance data, with the twin objectives of giving quantitative indications of possible improvements to the ENDF evaluated data files and of reducing crucial reactor design parameter uncertainties. Methods developed over the last four decades for these purposes have been improved by new developments that also benefited from continuous exchanges with international groups working in similar areas. The major new developments that allowed significant progress are to be found in several specific domains: a) new science-based covariance data; b) integral experiment covariance data assessment and improved experiment analysis, e.g., of sample irradiation experiments; c) sensitivity analysis, where several improvements were necessary despite the generally good understanding of these techniques, e.g., to account for fission spectrum sensitivity; d) a critical approach to the analysis of statistical adjustment performance, both a priori and a posteriori; e) generalization of the assimilation method, now applied for the first time not only to multigroup cross section data but also to nuclear model parameters (the "consistent" method). This article describes the major results obtained in each of these areas; a large-scale nuclear data adjustment, based on approximately one hundred high-accuracy integral experiments, is reported along with a significant example of the application of the new "consistent" method of data assimilation.

  4. Using computer-based video analysis in the study of fidgety movements.

    PubMed

    Adde, Lars; Helbostad, Jorunn L; Jensenius, Alexander Refsum; Taraldsen, Gunnar; Støen, Ragnhild

    2009-09-01

    Absence of fidgety movements (FM) in high-risk infants is a strong marker for later cerebral palsy (CP). FMs can be classified by the General Movement Assessment (GMA), based on Gestalt perception of the infant's movement pattern. More objective movement analysis may be provided by computer-based technology. The aim of this study was to explore the feasibility of a computer-based video analysis of infants' spontaneous movements in classifying non-fidgety versus fidgety movements. GMA was performed from video material of the fidgety period in 82 term and preterm infants at low and high risk of developing CP. The same videos were analysed using the developed software called General Movement Toolbox (GMT) with visualisation of the infant's movements for qualitative analyses. Variables derived from the calculation of displacement of pixels from one video frame to the next were used for quantitative analyses. Visual representations from GMT showed easily recognisable patterns of FMs. Of the eight quantitative variables derived, the variability in displacement of a spatial centre of active pixels in the image had the highest sensitivity (81.5%) and specificity (70.0%) in classifying FMs. By setting triage thresholds at 90% sensitivity and specificity for FM, the need for further referral was reduced by 70%. Video recordings can be used for qualitative and quantitative analyses of FMs provided by GMT. GMT is easy to implement in clinical practice, and may provide assistance in detecting infants without FMs.
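The abstract describes quantitative variables derived from interframe pixel displacement; GMT's internals are not given, so the following is a hypothetical sketch of one such variable: threshold the frame difference, take the centroid of the active pixels, and measure the variability of that centroid over time:

```python
import numpy as np

def active_centroids(frames, thresh=10):
    """Centroid (x, y) of pixels whose interframe change exceeds thresh, per frame pair."""
    cents = []
    for prev, cur in zip(frames[:-1], frames[1:]):
        active = np.abs(cur.astype(int) - prev.astype(int)) > thresh
        ys, xs = np.nonzero(active)
        if xs.size:
            cents.append((xs.mean(), ys.mean()))
    return np.array(cents)

def centroid_variability(cents):
    """Spread of the active-pixel centroid over time (mean per-axis std. dev.)."""
    return float(cents.std(axis=0).mean()) if len(cents) else 0.0

# Synthetic clip: a bright 2x2 blob moving rightward on a dark 16x16 background
frames = []
for t in range(5):
    f = np.zeros((16, 16), dtype=np.uint8)
    f[7:9, 2 + 2 * t:4 + 2 * t] = 255
    frames.append(f)
cents = active_centroids(frames)
```

For the synthetic clip the centroid drifts horizontally, so its variability is dominated by the x-axis, which is the kind of spatial-centre displacement variable the study found most discriminative.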

  5. Epidemiology of pediatric nickel sensitivity: Retrospective review of North American Contact Dermatitis Group (NACDG) data 1994-2014.

    PubMed

    Warshaw, Erin M; Aschenbeck, Kelly A; DeKoven, Joel G; Maibach, Howard I; Taylor, James S; Sasseville, Denis; Belsito, Donald V; Fowler, Joseph F; Zug, Kathryn A; Zirwas, Matthew J; Fransway, Anthony F; DeLeo, Vincent A; Marks, James G; Pratt, Melanie D; Mathias, Toby

    2018-04-14

    Nickel is a common allergen responsible for allergic contact dermatitis. To characterize nickel sensitivity in children and compare pediatric cohorts (≤5, 6-12, and 13-18 years). Retrospective, cross-sectional analysis of 1894 pediatric patients patch tested by the North American Contact Dermatitis Group from 1994 to 2014. We evaluated demographics, rates of reaction to nickel, strength of nickel reactions, and nickel allergy sources. The frequency of nickel sensitivity was 23.7%. Children with nickel sensitivity were significantly less likely to be male (P < .0001; relative risk, 0.63; 95% confidence interval, 0.52-0.75) or have a history of allergic rhinitis (P = .0017; relative risk, 0.74; 95% confidence interval, 0.61-0.90) compared with those who were not nickel sensitive. In the nickel-sensitive cohort, the relative proportion of boys declined with age (44.8% for age ≤5, 36.6% for age 6-12, and 22.6% for age 13-18 years). The most common body site distribution for all age groups sensitive to nickel was scattered/generalized, indicating widespread dermatitis. Jewelry was the most common source associated with nickel sensitivity (36.4%). As a cross-sectional study, no long-term follow-up was available. Nickel sensitivity in children was common; the frequency was significantly higher in girls than in boys. Overall, sensitivity decreased with age. The most common source of nickel was jewelry. Published by Elsevier Inc.
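The relative risks quoted above (e.g., 0.63 for male sex) follow the standard 2x2-table computation with a log-scale confidence interval. A sketch with hypothetical counts (the NACDG cell counts are not given in the abstract):

```python
import math

def relative_risk(a, b, c, d, z=1.96):
    """RR of an outcome for exposed vs unexposed from a 2x2 table:
    a/b = outcome present/absent among exposed, c/d = same among unexposed.
    Returns (rr, lo, hi) with an approximate 95% CI on the log scale."""
    rr = (a / (a + b)) / (c / (c + d))
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo = rr * math.exp(-z * se)
    hi = rr * math.exp(z * se)
    return rr, lo, hi

# Hypothetical counts: 90 of 500 boys nickel-sensitive vs 270 of 900 girls
rr, lo, hi = relative_risk(90, 500 - 90, 270, 900 - 270)
```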

  6. Learning effect and test-retest variability of pulsar perimetry.

    PubMed

    Salvetat, Maria Letizia; Zeppieri, Marco; Parisi, Lucia; Johnson, Chris A; Sampaolesi, Roberto; Brusini, Paolo

    2013-03-01

    To assess the learning effect and test-retest variability (TRV) of Pulsar Perimetry in normal (NORM), ocular hypertension (OHT), glaucomatous optic neuropathy (GON), and primary open-angle glaucoma (POAG) eyes. This multicenter prospective study included 43 NORM, 38 OHT, 33 GON, and 36 POAG patients. All patients underwent standard automated perimetry and Pulsar Contrast Perimetry using white stimuli modulated in phase and counterphase at 30 Hz (CP-T30W test). The learning effect and TRV for Pulsar Perimetry were assessed over 3 consecutive visual fields (VFs). The learning effect was evaluated by comparing results from the first session with the other 2. TRV was assessed by calculating the mean of the absolute differences between retests for each combination of single tests. TRV was calculated for Mean Sensitivity, Mean Defect, and the single Mean Sensitivity at each of the 66 test locations. The influence of age, VF eccentricity, and loss severity on TRV was assessed using linear regression analysis and analysis of variance. The learning effect was not significant in any group (analysis of variance, P>0.05). TRV for Mean Sensitivity and Mean Defect was significantly lower in NORM and OHT (0.6 ± 0.5 spatial resolution contrast units) than in GON and POAG (0.9 ± 0.5 and 1.0 ± 0.8 spatial resolution contrast units, respectively) (Kruskal-Wallis test, P=0.04); however, the differences in NORM among age groups were not significant (Kruskal-Wallis test, P>0.05). Slight but significant differences were found in single Mean Sensitivity TRV among single locations (Duncan test, P<0.05). For POAG, TRV significantly increased with decreasing Mean Sensitivity and increasing Mean Defect (linear regression analysis, P<0.01). The Pulsar Perimetry CP-T30W test did not show a significant learning effect in patients with standard automated perimetry experience. TRV for global indices was generally low and not related to patient age; it was only slightly affected by VF defect eccentricity, and significantly influenced by VF loss severity.
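TRV as defined here, the mean absolute difference between retests over every pairing of the 3 sessions, is straightforward to compute; a minimal sketch with made-up Mean Sensitivity values:

```python
from itertools import combinations

def trv(values):
    """Test-retest variability: mean absolute difference between every pair of sessions."""
    pairs = list(combinations(values, 2))
    return sum(abs(x - y) for x, y in pairs) / len(pairs)

# Hypothetical Mean Sensitivity readings from 3 consecutive Pulsar sessions
sessions = [27.4, 28.1, 27.0]
session_trv = trv(sessions)
```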

  7. Simplification and its consequences in biological modelling: conclusions from a study of calcium oscillations in hepatocytes.

    PubMed

    Hetherington, James P J; Warner, Anne; Seymour, Robert M

    2006-04-22

    Systems Biology requires that biological modelling is scaled up from small components to system level. This can produce exceedingly complex models, which obscure understanding rather than facilitate it. The successful use of highly simplified models would resolve many of the current problems faced in Systems Biology. This paper questions whether the conclusions of simple mathematical models of biological systems are trustworthy. The simplification of a specific model of calcium oscillations in hepatocytes is examined in detail, and the conclusions drawn from this scrutiny generalized. We formalize our choice of simplification approach through the use of functional 'building blocks'. A collection of models is constructed, each a progressively more simplified version of a well-understood model. The limiting model is a piecewise linear model that can be solved analytically. We find that, as expected, in many cases the simpler models produce incorrect results. However, when we make a sensitivity analysis, examining which aspects of the behaviour of the system are controlled by which parameters, the conclusions of the simple model often agree with those of the richer model. The hypothesis that the simplified model retains no information about the real sensitivities of the unsimplified model can be very strongly ruled out by treating the simplification process as a pseudo-random perturbation on the true sensitivity data. We conclude that sensitivity analysis is, therefore, of great importance to the analysis of simple mathematical models in biology. Our comparisons reveal which results of the sensitivity analysis regarding calcium oscillations in hepatocytes are robust to the simplifications necessarily involved in mathematical modelling. 
For example, we find that if a treatment is observed to strongly decrease the period of the oscillations while increasing the proportion of the cycle during which cellular calcium concentrations are rising, without affecting the inter-spike or maximum calcium concentrations, then it is likely that the treatment is acting on the plasma membrane calcium pump.
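The comparison at the heart of this study, checking whether a simplified model preserves the full model's sensitivity structure, can be sketched generically with normalized finite-difference sensitivities (elasticities). The two toy "period" models below are illustrative only, not the hepatocyte calcium model:

```python
def elasticity(f, params, name, h=1e-6):
    """Normalized sensitivity d(log f)/d(log p) by central finite difference."""
    p = dict(params)
    p[name] = params[name] * (1 + h)
    up = f(**p)
    p[name] = params[name] * (1 - h)
    down = f(**p)
    return (up - down) / (2 * h * f(**params))

# Toy "full" and "simplified" oscillation-period models (illustrative only)
def period_full(k_pump, k_leak):
    return (1 + k_leak) / k_pump

def period_simple(k_pump, k_leak):
    return 1.0 / k_pump  # the simplification drops the leak term

params = {"k_pump": 2.0, "k_leak": 0.1}
full = {n: elasticity(period_full, params, n) for n in params}
simple = {n: elasticity(period_simple, params, n) for n in params}
```

Here both models agree that k_pump dominates (elasticity -1) while the simplified model loses the small k_leak sensitivity; comparing such sensitivity profiles across models is the robustness check the authors formalize.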

  8. The utility of the bifactor model in understanding unique components of anxiety sensitivity in a South Korean sample.

    PubMed

    Ebesutani, Chad; Kim, Mirihae; Park, Hee-Hoon

    2016-08-01

    The present study was the first to examine the applicability of the bifactor structure underlying the Anxiety Sensitivity Index-3 (ASI-3) in an East Asian (South Korean) sample and to determine which factors in the bifactor model were significantly associated with anxiety, depression, and negative affect. Using a sample of 289 South Korean university students, we compared (a) the original 3-factor AS model, (b) a 3-group bifactor AS model, and (c) a 2-group bifactor AS model (with only the physical and social concern group factors present). Results revealed that the 2-group bifactor AS model fit the ASI-3 data the best. Relatedly, although all ASI-3 items loaded on the general AS factor, the Cognitive Concern group factor was not defined in the bifactor model and may therefore need to be omitted in order to accurately model AS when conducting factor analysis and structural equation modeling (SEM) in cross-cultural contexts. SEM results also revealed that the general AS factor was the only factor from the 2-group bifactor model that significantly predicted anxiety, depression, and negative affect. Implications and importance of this new bifactor structure of Anxiety Sensitivity in East Asian samples are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Direct Detection of Nucleic Acid with Minimizing Background and Improving Sensitivity Based on a Conformation-Discriminating Indicator.

    PubMed

    Zhu, Lixuan; Qing, Zhihe; Hou, Lina; Yang, Sheng; Zou, Zhen; Cao, Zhong; Yang, Ronghua

    2017-08-25

    As is well known, the nucleic acid indicator-based strategy is one of the major approaches for monitoring nucleic acid hybridization-mediated recognition events in biochemical analysis, displaying obvious advantages including simplicity, low cost, convenience, and generality. However, conventional indicators either exhibit strong self-fluorescence or are lit up by both ssDNA and dsDNA, lacking absolute selectivity for a given conformation, and thus suffer from high background interference and low sensitivity in sensing; additional processing (e.g., nanomaterial-mediated background suppression or enzyme-catalyzed signal amplification) is generally required to improve detection performance. In this work, a carbazole derivative, EBCB, has been synthesized and screened as a dsDNA-specific fluorescent indicator. Compared with conventional indicators under the same conditions, EBCB displayed a much higher selectivity coefficient for dsDNA, with little self-fluorescence and negligible interference from ssDNA. Based on its superior capability for DNA conformation discrimination, high sensitivity with minimal background interference was demonstrated for the direct detection of nucleic acid and for monitoring nucleic acid-based circuitry with good reversibility, resulting in a low detection limit and a high capability for discriminating base mismatches. Thus, we expect that this highly specific DNA conformation-discriminating indicator will hold good potential for application in biochemical sensing and molecular logic switching.

  10. The psychometrics of mental workload: multiple measures are sensitive but divergent.

    PubMed

    Matthews, Gerald; Reinerman-Jones, Lauren E; Barber, Daniel J; Abich, Julian

    2015-02-01

    A study was run to test the sensitivity of multiple workload indices to the differing cognitive demands of four military monitoring task scenarios and to investigate relationships between indices. Various psychophysiological indices of mental workload exhibit sensitivity to task factors. However, the psychometric properties of multiple indices, including the extent to which they intercorrelate, have not been adequately investigated. One hundred fifty participants performed in four task scenarios based on a simulation of unmanned ground vehicle operation. Scenarios required threat detection and/or change detection. Both single- and dual-task scenarios were used. Workload metrics for each scenario were derived from the electroencephalogram (EEG), electrocardiogram, transcranial Doppler sonography, functional near-infrared imaging, and eye tracking. Subjective workload was also assessed. Several metrics showed sensitivity to the differing demands of the four scenarios. Eye fixation duration and the Task Load Index metric derived from EEG were diagnostic of single- versus dual-task performance. Several other metrics differentiated the two single tasks but were less effective in differentiating single- from dual-task performance. Psychometric analyses confirmed the reliability of individual metrics but failed to identify any general workload factor. An analysis of difference scores between low- and high-workload conditions suggested an effort factor defined by heart rate variability and frontal cortex oxygenation. General workload is not well defined psychometrically, although various individual metrics may satisfy conventional criteria for workload assessment. Practitioners should exercise caution in using multiple metrics that may not correspond well, especially at the level of the individual operator.

  11. Accuracy of non-invasive prenatal testing using cell-free DNA for detection of Down, Edwards and Patau syndromes: a systematic review and meta-analysis.

    PubMed

    Taylor-Phillips, Sian; Freeman, Karoline; Geppert, Julia; Agbebiyi, Adeola; Uthman, Olalekan A; Madan, Jason; Clarke, Angus; Quenby, Siobhan; Clarke, Aileen

    2016-01-18

    To measure test accuracy of non-invasive prenatal testing (NIPT) for Down, Edwards and Patau syndromes using cell-free fetal DNA and identify factors affecting accuracy. Systematic review and meta-analysis of published studies. PubMed, Ovid Medline, Ovid Embase and the Cochrane Library published from 1997 to 9 February 2015, followed by weekly autoalerts until 1 April 2015. English language journal articles describing case-control studies with ≥ 15 trisomy cases or cohort studies with ≥ 50 pregnant women who had been given NIPT and a reference standard. Of 2012 publications retrieved, 41, 37 and 30 studies were included in the review for Down, Edwards and Patau syndromes, respectively. Quality appraisal identified a high risk of bias in included studies, and funnel plots showed evidence of publication bias. Pooled sensitivity was 99.3% (95% CI 98.9% to 99.6%) for Down, 97.4% (95.8% to 98.4%) for Edwards, and 97.4% (86.1% to 99.6%) for Patau syndrome. The pooled specificity was 99.9% (99.9% to 100%) for all three trisomies. In 100,000 pregnancies in the general obstetric population we would expect 417, 89 and 40 cases of Down, Edwards and Patau syndromes to be detected by NIPT, with 94, 154 and 42 false positive results. Sensitivity was lower in twin than singleton pregnancies, reduced by 9% for Down, 28% for Edwards and 22% for Patau syndrome. Pooled sensitivity was also lower in the first trimester of pregnancy, in studies in the general obstetric population, and in cohort studies with consecutive enrolment. NIPT using cell-free fetal DNA has very high sensitivity and specificity for Down syndrome, with slightly lower sensitivity for Edwards and Patau syndrome. However, it is not 100% accurate and should not be used as a final diagnosis for positive cases. CRD42014014947. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
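The expected-case figures quoted above follow from prevalence, sensitivity and specificity by simple arithmetic. A sketch for Down syndrome, where the prevalence (about 420 per 100,000) is inferred here from the reported 417 detected cases; the reported 94 false positives imply a Down-specific specificity marginally above the pooled 99.9% point estimate, which is used below:

```python
def screening_outcomes(n, prevalence_per_100k, sens, spec):
    """Expected detected (true positive) and false positive counts for n screened pregnancies."""
    cases = n * prevalence_per_100k / 100_000
    detected = cases * sens
    false_pos = (n - cases) * (1.0 - spec)
    return detected, false_pos

# Down syndrome with the pooled estimates from the review
detected, fp = screening_outcomes(100_000, 420, 0.993, 0.999)
```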

  12. Identifying Talent in Youth Sport: A Novel Methodology Using Higher-Dimensional Analysis.

    PubMed

    Till, Kevin; Jones, Ben L; Cobley, Stephen; Morley, David; O'Hara, John; Chapman, Chris; Cooke, Carlton; Beggs, Clive B

    2016-01-01

    Prediction of adult performance from early age talent identification in sport remains difficult. Talent identification research has generally been performed using univariate analysis, which ignores multivariate relationships. To address this issue, this study used a novel higher-dimensional model to orthogonalize multivariate anthropometric and fitness data from junior rugby league players, with the aim of differentiating future career attainment. Anthropometric and fitness data from 257 Under-15 rugby league players was collected. Players were grouped retrospectively according to their future career attainment (i.e., amateur, academy, professional). Players were blindly and randomly divided into an exploratory (n = 165) and validation dataset (n = 92). The exploratory dataset was used to develop and optimize a novel higher-dimensional model, which combined singular value decomposition (SVD) with receiver operating characteristic analysis. Once optimized, the model was tested using the validation dataset. SVD analysis revealed 60 m sprint and agility 505 performance were the most influential characteristics in distinguishing future professional players from amateur and academy players. The exploratory dataset model was able to distinguish between future amateur and professional players with a high degree of accuracy (sensitivity = 85.7%, specificity = 71.1%; p<0.001), although it could not distinguish between future professional and academy players. The validation dataset model was able to distinguish future professionals from the rest with reasonable accuracy (sensitivity = 83.3%, specificity = 63.8%; p = 0.003). Through the use of SVD analysis it was possible to objectively identify criteria to distinguish future career attainment with a sensitivity over 80% using anthropometric and fitness data alone. As such, this suggests that SVD analysis may be a useful analysis tool for research and practice within talent identification.
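The SVD-plus-ROC pipeline can be sketched with synthetic data: project standardized scores onto the first right singular vector and choose a Youden-optimal threshold. This is a generic illustration, not the authors' optimized model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical standardized test battery: rows = players, cols = fitness tests;
# the 20 future professionals (label 1) score higher across the board
n_amateur, n_pro, n_tests = 60, 20, 6
X = rng.normal(size=(n_amateur + n_pro, n_tests))
X[n_amateur:] += 1.5
y = np.array([0] * n_amateur + [1] * n_pro)

# SVD of the centered data; the first right singular vector is the dominant axis
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[0]
if scores[y == 1].mean() < scores[y == 0].mean():
    scores = -scores  # resolve the sign ambiguity of singular vectors

# Pick the threshold maximizing Youden's J = sensitivity + specificity - 1
best_t, best_j = None, -1.0
for t in np.quantile(scores, np.linspace(0.05, 0.95, 19)):
    sens = float((scores[y == 1] > t).mean())
    spec = float((scores[y == 0] <= t).mean())
    if sens + spec - 1.0 > best_j:
        best_j, best_t = sens + spec - 1.0, t
```

The loading pattern in Vt[0] plays the role of the influential characteristics (e.g., sprint and agility performance) identified by the study's SVD analysis.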

  13. Identifying Talent in Youth Sport: A Novel Methodology Using Higher-Dimensional Analysis

    PubMed Central

    Till, Kevin; Jones, Ben L.; Cobley, Stephen; Morley, David; O'Hara, John; Chapman, Chris; Cooke, Carlton; Beggs, Clive B.

    2016-01-01

    Prediction of adult performance from early age talent identification in sport remains difficult. Talent identification research has generally been performed using univariate analysis, which ignores multivariate relationships. To address this issue, this study used a novel higher-dimensional model to orthogonalize multivariate anthropometric and fitness data from junior rugby league players, with the aim of differentiating future career attainment. Anthropometric and fitness data from 257 Under-15 rugby league players was collected. Players were grouped retrospectively according to their future career attainment (i.e., amateur, academy, professional). Players were blindly and randomly divided into an exploratory (n = 165) and validation dataset (n = 92). The exploratory dataset was used to develop and optimize a novel higher-dimensional model, which combined singular value decomposition (SVD) with receiver operating characteristic analysis. Once optimized, the model was tested using the validation dataset. SVD analysis revealed 60 m sprint and agility 505 performance were the most influential characteristics in distinguishing future professional players from amateur and academy players. The exploratory dataset model was able to distinguish between future amateur and professional players with a high degree of accuracy (sensitivity = 85.7%, specificity = 71.1%; p<0.001), although it could not distinguish between future professional and academy players. The validation dataset model was able to distinguish future professionals from the rest with reasonable accuracy (sensitivity = 83.3%, specificity = 63.8%; p = 0.003). Through the use of SVD analysis it was possible to objectively identify criteria to distinguish future career attainment with a sensitivity over 80% using anthropometric and fitness data alone. As such, this suggests that SVD analysis may be a useful analysis tool for research and practice within talent identification. PMID:27224653

  14. Computer-Aided Design of Low-Noise Microwave Circuits

    NASA Astrophysics Data System (ADS)

    Wedge, Scott William

    1991-02-01

    Devoid of most natural and manmade noise, microwave frequencies have detection sensitivities limited by internally generated receiver noise. Low-noise amplifiers are therefore critical components in radio astronomical antennas, communications links, radar systems, and even home satellite dishes. A general technique to accurately predict the noise performance of microwave circuits has been lacking. Current noise analysis methods have been limited to specific circuit topologies or neglect correlation, a strong effect in microwave devices. Presented here are generalized methods, developed for computer-aided design implementation, for the analysis of linear noisy microwave circuits comprised of arbitrarily interconnected components. Included are descriptions of efficient algorithms for the simultaneous analysis of noisy and deterministic circuit parameters based on a wave variable approach. The methods are therefore particularly suited to microwave and millimeter-wave circuits. Noise contributions from lossy passive components and active components with electronic noise are considered. Also presented is a new technique for the measurement of device noise characteristics that offers several advantages over current measurement methods.
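The dissertation's generalized interconnection formulas are not reproduced in the abstract, but the simplest special case of cascaded noise analysis, the textbook Friis formula, shows why the first low-noise stage dominates receiver sensitivity:

```python
import math

def friis_noise_factor(stages):
    """Total noise factor of a cascade; stages = [(noise_factor, gain), ...] in linear units."""
    total_f, gain_so_far = 1.0, 1.0
    for f, g in stages:
        total_f += (f - 1.0) / gain_so_far  # later stages are discounted by preceding gain
        gain_so_far *= g
    return total_f

def to_db(x):
    return 10.0 * math.log10(x)

# Hypothetical receiver front end: LNA (0.8 dB NF, 20 dB gain) then mixer (8 dB NF, unity gain)
lna = (10 ** 0.08, 10 ** 2.0)
mixer = (10 ** 0.8, 1.0)
total_nf_db = to_db(friis_noise_factor([lna, mixer]))
```

The 20 dB of LNA gain suppresses the noisy mixer's contribution, leaving a total noise figure only slightly above the LNA's own; the general wave-variable methods of the dissertation extend this kind of bookkeeping to arbitrary correlated-noise interconnections.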

  15. Sensitivity and Uncertainty Analysis for Streamflow Prediction Using Different Objective Functions and Optimization Algorithms: San Joaquin California

    NASA Astrophysics Data System (ADS)

    Paul, M.; Negahban-Azar, M.

    2017-12-01

    Hydrologic models usually need to be calibrated against observed streamflow at the outlet of a particular drainage area. However, a large number of parameters must be fitted in the model because field measurements of them are unavailable, making it difficult to calibrate the model over this large space of potentially uncertain parameters. This becomes even more challenging when the model covers a large watershed with multiple land uses and varied geophysical characteristics. Sensitivity analysis (SA) can be used as a tool to identify the most sensitive model parameters, i.e., those that most affect calibrated model performance. Many different calibration and uncertainty analysis algorithms are available, and each can be run with different objective functions. By incorporating sensitive parameters in streamflow simulation, the effect of a suitable algorithm on model performance can be demonstrated with the Soil and Water Assessment Tool (SWAT). In this study, SWAT was applied to the San Joaquin Watershed in California, covering 19,704 km2, to calibrate daily streamflow. Recently, severe water stress has been escalating in this watershed due to intensified climate variability, prolonged drought and groundwater depletion for agricultural irrigation. It is therefore important to perform a proper uncertainty analysis, given the uncertainties inherent in hydrologic modeling, to predict the spatial and temporal variation of the hydrologic processes and to evaluate the impacts of different hydrologic variables. The purpose of this study was to evaluate the sensitivity and uncertainty of the calibrated parameters for predicting streamflow.
To evaluate the sensitivity of the calibrated parameters, three different optimization algorithms (Sequential Uncertainty Fitting, SUFI-2; Generalized Likelihood Uncertainty Estimation, GLUE; and Parameter Solution, ParaSol) were used with four different objective functions (coefficient of determination, r2; Nash-Sutcliffe efficiency, NSE; percent bias, PBIAS; and Kling-Gupta efficiency, KGE). Preliminary results showed that using the SUFI-2 algorithm with the NSE and KGE objective functions significantly improved the calibration (e.g., R2 and NSE of 0.52 and 0.47, respectively, for daily streamflow).
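The NSE and KGE objective functions named above have standard closed forms; a minimal numpy sketch with made-up daily flows (not the San Joaquin data):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean-flow benchmark."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta efficiency (2009 form): 1 is perfect."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]       # correlation component
    alpha = sim.std() / obs.std()         # variability ratio
    beta = sim.mean() / obs.mean()        # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# Hypothetical observed vs simulated daily flows (m3/s)
obs = [10.0, 14.0, 9.0, 22.0, 17.0]
sim = [11.0, 13.5, 10.0, 20.0, 18.0]
nse_val, kge_val = nse(obs, sim), kge(obs, sim)
```

Because KGE decomposes error into correlation, variability and bias terms, calibrating on it penalizes a biased but well-correlated simulation differently than NSE does, which is one reason the choice of objective function changes the calibrated parameter set.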

  16. Performance of Polymerase Chain Reaction Analysis of the Amniotic Fluid of Pregnant Women for Diagnosis of Congenital Toxoplasmosis: A Systematic Review and Meta-Analysis.

    PubMed

    de Oliveira Azevedo, Christianne Terra; do Brasil, Pedro Emmanuel A A; Guida, Letícia; Lopes Moreira, Maria Elizabeth

    2016-01-01

    Congenital infection caused by Toxoplasma gondii can cause serious damage that can be diagnosed in utero or at birth, although most infants are asymptomatic at birth. Prenatal diagnosis of congenital toxoplasmosis considerably improves the prognosis and outcome for infected infants. For this reason, an assay for the quick, sensitive, and safe diagnosis of fetal toxoplasmosis is desirable. To systematically review the performance of polymerase chain reaction (PCR) analysis of the amniotic fluid of pregnant women with recent serological toxoplasmosis diagnoses for the diagnosis of fetal toxoplasmosis. A systematic literature review was conducted via a search of electronic databases; the literature included primary studies of the diagnostic accuracy of PCR analysis of amniotic fluid from pregnant women who seroconverted during pregnancy. The PCR test was compared to a gold standard for diagnosis. A total of 1,269 summaries were obtained from the electronic database and reviewed, and 20 studies, comprising 4,171 samples, met the established inclusion criteria and were included in the review. The following results were obtained: studies about PCR assays for fetal toxoplasmosis are generally susceptible to bias; reports of the tests' use lack critical information; the protocols varied among studies; the heterogeneity among studies was concentrated in the tests' sensitivity; there was evidence that the sensitivity of the tests increases with time, as represented by the trimester; and there was more heterogeneity among studies in which there was more time between maternal diagnosis and fetal testing. The sensitivity of the method, if performed up to five weeks after maternal diagnosis, was 87% and specificity was 99%. The global sensitivity heterogeneity of the PCR test in this review was 66.5% (I2). The tests show low evidence of heterogeneity with a sensitivity of 87% and specificity of 99% when performed up to five weeks after maternal diagnosis. 
The test has a known performance and could be recommended for use up to five weeks after maternal diagnosis, when there is suspicion of fetal toxoplasmosis.

  17. Performance of Polymerase Chain Reaction Analysis of the Amniotic Fluid of Pregnant Women for Diagnosis of Congenital Toxoplasmosis: A Systematic Review and Meta-Analysis

    PubMed Central

    2016-01-01

    Introduction: Congenital infection caused by Toxoplasma gondii can cause serious damage that can be diagnosed in utero or at birth, although most infants are asymptomatic at birth. Prenatal diagnosis of congenital toxoplasmosis considerably improves the prognosis and outcome for infected infants. For this reason, an assay for the quick, sensitive, and safe diagnosis of fetal toxoplasmosis is desirable. Goal: To systematically review the performance of polymerase chain reaction (PCR) analysis of the amniotic fluid of pregnant women with recent serological toxoplasmosis diagnoses for the diagnosis of fetal toxoplasmosis. Method: A systematic literature review was conducted via a search of electronic databases; the literature included primary studies of the diagnostic accuracy of PCR analysis of amniotic fluid from pregnant women who seroconverted during pregnancy. The PCR test was compared to a gold standard for diagnosis. Results: A total of 1,269 summaries were obtained from the electronic database and reviewed, and 20 studies, comprising 4,171 samples, met the established inclusion criteria and were included in the review. The following results were obtained: studies about PCR assays for fetal toxoplasmosis are generally susceptible to bias; reports of the tests’ use lack critical information; the protocols varied among studies; the heterogeneity among studies was concentrated in the tests’ sensitivity; there was evidence that the sensitivity of the tests increases with time, as represented by the trimester; and there was more heterogeneity among studies in which there was more time between maternal diagnosis and fetal testing. The sensitivity of the method, if performed up to five weeks after maternal diagnosis, was 87% and specificity was 99%. Conclusion: The global sensitivity heterogeneity of the PCR test in this review was 66.5% (I²). The tests show low evidence of heterogeneity with a sensitivity of 87% and specificity of 99% when performed up to five weeks after maternal diagnosis. The test has a known performance and could be recommended for use up to five weeks after maternal diagnosis, when there is suspicion of fetal toxoplasmosis. PMID:27055272
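    The pooled estimates quoted above can be reproduced from study-level quantities; a minimal sketch follows. The 2×2 counts and the Cochran's Q value below are invented for illustration, since the review reports only the pooled results (sensitivity 87%, specificity 99%, I² 66.5%):

```python
def diagnostic_accuracy(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 confusion table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

def i_squared(q, k):
    """Higgins' I^2 heterogeneity statistic from Cochran's Q over k studies."""
    return max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0

# Hypothetical counts chosen to match the pooled estimates in the abstract.
sens, spec = diagnostic_accuracy(tp=87, fn=13, tn=990, fp=10)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # 0.87, 0.99
# An illustrative Q of 56.7 over 20 studies reproduces the reported I^2.
print(f"I2={i_squared(q=56.7, k=20):.1f}%")  # 66.5%
```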

  18. Improved computer-aided detection of small polyps in CT colonography using interpolation for curvature estimation

    PubMed Central

    Liu, Jiamin; Kabadi, Suraj; Van Uitert, Robert; Petrick, Nicholas; Deriche, Rachid; Summers, Ronald M.

    2011-01-01

    Purpose: Surface curvatures are important geometric features for the computer-aided analysis and detection of polyps in CT colonography (CTC). However, the general kernel approach to curvature computation can yield erroneous results for small polyps and for polyps that lie on haustral folds. These erroneous curvatures reduce the performance of polyp detection. This paper presents an analysis of interpolation’s effect on curvature estimation for thin structures and its application to computer-aided detection of small polyps in CTC. Methods: The authors demonstrated that a simple technique, image interpolation, can improve the accuracy of curvature estimation for thin structures and thus significantly improve the sensitivity of small polyp detection in CTC. Results: Our experiments showed that the merits of interpolation included more accurate curvature values for simulated data and isolation of polyps near folds in clinical data. After testing on a large clinical data set, it was observed that linear, quadratic B-spline and cubic B-spline interpolations all significantly improved the sensitivity of small polyp detection. Conclusions: Image interpolation can improve the accuracy of curvature estimation for thin structures and thus improve the computer-aided detection of small polyps in CTC. PMID:21859029

  19. Ultrasonic monitoring of droplets' evaporation: Application to human whole blood.

    PubMed

    Laux, D; Ferrandis, J Y; Brutin, D

    2016-09-01

    During colloidal droplet evaporation, a sol-gel transition can be observed and is described by the desiccation time τD and the gelation time τG. These characteristic times, which can be linked to the viscoelastic properties of the droplet and to its composition, are classically evaluated by analysis of the droplet's mass evolution during evaporation. Even if monitoring mass evolution versus time seems straightforward, this approach is very sensitive to environmental conditions (vibrations, air flow…) as the mass has to be evaluated very accurately using ultra-sensitive weighing scales. In this study we investigated the potential of ultrasonic shear reflectometry to assess τD and τG in a simple and reliable manner. To validate this approach, our study focused on the evaporation of blood droplets, on which a great deal of work has recently been published. Desiccation and gelation times measured with shear ultrasonic reflectometry correlated very well with values obtained from mass-versus-time analysis. This ultrasonic method, which is not very sensitive to environmental perturbations, is therefore very useful for monitoring the drying of blood droplets in a simple manner and is more generally suitable for investigating the evaporation of complex fluid droplets.

  20. Testing Relativity with Electrodynamics

    NASA Astrophysics Data System (ADS)

    Bailey, Quentin; Kostelecky, Alan

    2004-04-01

    Lorentz and CPT violation is a promising candidate signal for Planck-scale physics. Low-energy effects of Lorentz and CPT violation are described by the general theoretical framework called the Standard-Model Extension (SME). This talk focuses on Lorentz-violating effects arising in the classical electrodynamics limit of the SME. Analysis of the theory shows that suitable experiments could improve by several orders of magnitude certain sensitivities achieved in modern Michelson-Morley and Kennedy-Thorndike tests.

  1. Tests of Lorentz Symmetry with Electrodynamics

    NASA Astrophysics Data System (ADS)

    Bailey, Quentin; Kostelecky, Alan

    2004-05-01

    Lorentz and CPT violation is a promising candidate signal for Planck-scale physics. Low-energy effects of Lorentz and CPT violation are described by the general theoretical framework called the Standard-Model Extension (SME). This talk focuses on Lorentz-violating effects arising in the limit of classical electrodynamics. Analysis of the theory shows that suitable experiments could improve by several orders of magnitude on the sensitivities achieved in modern Michelson-Morley and Kennedy-Thorndike tests.

  2. A robust control scheme for flexible arms with friction in the joints

    NASA Technical Reports Server (NTRS)

    Rattan, Kuldip S.; Feliu, Vicente; Brown, H. Benjamin, Jr.

    1988-01-01

    A general control scheme for flexible arms with friction in the joints is proposed in this paper. This scheme has the advantage of being robust in the sense that it minimizes the effects of the Coulomb friction existing in the motor and the effects of changes in the dynamic friction coefficient. A justification of the robustness properties of the scheme is given in terms of sensitivity analysis.

  3. Diagnosis of rheumatoid arthritis: multivariate analysis of biomarkers.

    PubMed

    Wild, Norbert; Karl, Johann; Grunert, Veit P; Schmitt, Raluca I; Garczarek, Ursula; Krause, Friedemann; Hasler, Fritz; van Riel, Piet L C M; Bayer, Peter M; Thun, Matthias; Mattey, Derek L; Sharif, Mohammed; Zolg, Werner

    2008-02-01

    To test if a combination of biomarkers can increase the classification power of autoantibodies to cyclic citrullinated peptides (anti-CCP) in the diagnosis of rheumatoid arthritis (RA) depending on the diagnostic situation. Biomarkers were subject to three inclusion/exclusion criteria (discrimination between RA patients and healthy blood donors, ability to identify anti-CCP-negative RA patients, specificity in a panel with major non-rheumatological diseases) before univariate ranking and multivariate analysis were carried out using a modelling panel (n = 906). To enable the evaluation of the classification power in different diagnostic settings, the disease controls (n = 542) were weighted according to the admission rates in rheumatology clinics, modelling a clinic panel, or according to the relative prevalences of musculoskeletal disorders in the general population seen by general practitioners, modelling a GP panel. Of 131 biomarkers considered originally, we evaluated 32 biomarkers in this study, of which only seven passed the three inclusion/exclusion criteria and were combined by multivariate analysis using four different mathematical models. In the modelled clinic panel, anti-CCP was the lead marker with a sensitivity of 75.8% and a specificity of 94.0%. Due to the lack of specificity of the markers other than anti-CCP in this diagnostic setting, any gain in sensitivity by any marker combination is offset by a corresponding loss in specificity. In the modelled GP panel, the best marker combination of anti-CCP and interleukin (IL)-6 resulted in a sensitivity gain of 7.6% (85.9% vs. 78.3%) at a minor loss in specificity of 1.6% (90.3% vs. 91.9%) compared with anti-CCP as the best single marker. Depending on the composition of the sample panel, anti-CCP alone or anti-CCP in combination with IL-6 has the highest classification power for the diagnosis of established RA.

  4. Ultra-short screening instruments for major depressive episode and generalized anxiety disorder in epilepsy: The NDDIE-2 and the GAD-SI.

    PubMed

    Micoulaud-Franchi, Jean-Arthur; Bartolomei, Fabrice; McGonigal, Aileen

    2017-03-01

    Systematic screening is recommended for major depressive episode (MDE), with the 6-item Neurological Disorders Depression Inventory for Epilepsy (NDDI-E), and for generalized anxiety disorder (GAD), with the 7-item GAD-7, in patients with epilepsy (PWE). Shorter versions of the NDDI-E and the GAD-7 could facilitate increased screening by busy clinicians and be more accessible to patients with mild cognitive and/or language impairments. The effectiveness of ultra-short versions of the NDDI-E (2 items) and the GAD-7 (the GAD-2, with 2 items, and the GAD-SI, with a single item) in comparison with the original versions was statistically tested using ROC analysis. ROC analysis of the NDDIE-2 showed an AUC of 0.926 (p<0.001), a sensitivity of 81.82% and a specificity of 89.16%, without significant difference from the NDDI-E (z=1.582, p=0.11). ROC analysis of the GAD-SI showed an AUC of 0.872 (p<0.001), a sensitivity of 83.67% and a specificity of 82.29%, without significant difference from the GAD-7 (z=1.281, p=0.2). The GAD-2 showed poorer psychometric properties. The limitations are the use of data from previously reported subjects in a single language version, the NDDIE-2's lack of detection of dysphoric symptoms in comparison with the NDDIE-6, and the GAD-SI's sensitivity, which was more than 10% lower than that of the GAD-7. This study highlights the potential utility of the NDDIE-2 and the GAD-SI as ultra-short screening tools for MDE and GAD respectively in PWE. Further studies in a larger population, including multi-lingual versions, could be a valuable next step. However, the brevity and simplicity of these tools could be an advantage in PWE who present cognitive difficulties, especially attentional or language deficits.
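    The AUC values above come from standard ROC analysis. As a sketch, the AUC can be computed directly via its rank-sum (Mann-Whitney) interpretation; the item scores below are invented for illustration, since the study's data are not public:

```python
def auc_mann_whitney(pos_scores, neg_scores):
    """AUC = P(score_pos > score_neg); ties counted as half a win."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical screening-item scores for cases and non-cases.
pos = [4, 5, 5, 6, 3]
neg = [1, 2, 2, 3, 1, 0]
print(round(auc_mann_whitney(pos, neg), 3))  # -> 0.983
```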

  5. Study on a novel panel support concept for radio telescopes with active surface

    NASA Astrophysics Data System (ADS)

    Yang, Dehua; Zhou, Guohua; Okoh, Daniel; Li, Guoping; Cheng, Jingquan

    2010-07-01

    Generally, the panels of radio telescopes are trapezoidal, and each is supported/positioned by four adjusters beneath its vertexes. Such a panel support configuration is essentially hyperstatic, and the panel is over-constrained from a kinematic point of view. When the panel is adjusted and/or actuated, it suffers stress from its adjusters and hence its shape is distorted. This situation is not desirable for high-precision panels, such as the glass-based panels used for sub-millimeter and shorter-wavelength telescopes with active optics/active panel technology. This paper begins with a general overview of the panel patterns and panel supports of existing radio telescopes. We then propose a preferable master-slave active surface concept for triangular and/or hexagonal panel patterns. In addition, we carry out a panel error sensitivity analysis for all 6 degrees of freedom (DOF) of a panel to identify which DOFs are most sensitive for an active surface. Based on this error sensitivity analysis, we suggest an innovative parallel-series hexapod concept well suited to an active panel, able to correct all 6 of its rigid-body errors. A demonstration active surface using the master-slave concept and the hexapod showed a great cost saving: only 486 precision actuators are needed for 438 panels, which is 37% of the actuators needed by classic segmented-mirror active optics. Further, we put forward a swaying-arm-based design concept for the connecting joints between panels, which ensures that all the attached panels remain free from over-constraint when they are positioned and/or actuated. The principle and performance of the swaying-arm connecting mechanism are elaborated before a practical cable-mesh-based prototype active surface is presented with comprehensive finite element analysis and simulation.

  6. SSTO vs TSTO design considerations—an assessment of the overall performance, design considerations, technologies, costs, and sensitivities of SSTO and TSTO designs using modern technologies

    NASA Astrophysics Data System (ADS)

    Penn, Jay P.

    1996-03-01

    It is generally believed by those skilled in launch system design that Single-Stage-To-Orbit (SSTO) designs are more technically challenging, more performance sensitive, and yield larger lift-off weights than do Two-Stage-To-Orbit (TSTO) designs offering similar payload delivery capability. Without additional insight into the other considerations that drive the development, recurring costs, operability, and reliability of a launch fleet, an analyst may easily conclude that the higher-performing, less sensitive TSTO designs yield a better solution for achieving low-cost payload delivery. This limited insight could justify an argument to eliminate the X-33 SSTO technology/demonstration development effort and thus proceed directly to less risky TSTO designs. Insight into real-world design considerations of launch vehicles makes the choice of SSTO vs TSTO much less clear. The presentation addresses a more comprehensive evaluation of the general class of SSTO and TSTO concepts. These include pure SSTO, augmented SSTO, Siamese Twin, and pure TSTO designs. The assessment considers vehicle performance and scaling relationships that characterize real vehicle designs. The assessment also addresses technology requirements, operations and supportability, cost implications, and sensitivities. Results of the assessment indicate that the trade space between the various SSTO and TSTO design approaches is complex and not yet fully understood. The results of the X-33 technology demonstrators, as well as additional parametric analysis, are required to better define the relative performance and costs of the various design approaches. The results also indicate that, with modern technologies and today's better understanding of vehicle design considerations, the perception that SSTOs are dramatically heavier and more sensitive than TSTO designs is more myth than reality.

  7. Capillary moving-boundary isotachophoresis with electrospray ionization mass-spectrometric detection and hydrogen ion used as essential terminator: Methodology for sensitive analysis of hydroxyderivatives of s-triazine herbicides in waters.

    PubMed

    Malá, Zdena; Gebauer, Petr

    2017-10-06

    Capillary isotachophoresis (ITP) is an electrophoretic technique offering high sensitivity due to permanent stacking of the migrating analytes. Its combination with electrospray-ionization mass-spectrometric (ESI-MS) detection is limited by the narrow spectrum of ESI-compatible components, but this can be compensated for by careful system architecture. This work describes a methodology for sensitive analysis of hydroxyderivatives of s-triazine herbicides, based on implementing the concepts of moving-boundary isotachophoresis and of H+ as an essential terminating component in cationic ITP with ESI-MS detection. A theoretical description of this kind of system is given and equations for zone-related boundary mobilities are derived, resulting in a much more general definition of the effective mobility of the terminating H+ zone than used so far. Explicit equations allowing direct calculation for selected simple systems are derived. The presented theory allows prediction of the stacking properties of particular systems and easy selection of suitable electrolyte setups. A simple ESI-compatible system composed of acetic acid and ammonium, with H+ and ammonium as a mixed terminator, was selected for the analysis of 2-hydroxyatrazine and 2-hydroxyterbutylazine, degradation products of s-triazine herbicides. The proposed method was tested with direct injection without any sample pretreatment and provided excellent linearity and high sensitivity, with limits of detection below 100 ng/L (0.5 nM). Example analyses of unspiked and spiked drinking and river water are shown.

  8. Electrochemical Quartz Crystal Nanobalance (EQCN) Based Biosensor for Sensitive Detection of Antibiotic Residues in Milk.

    PubMed

    Bhand, Sunil; Mishra, Geetesh K

    2017-01-01

    An electrochemical quartz crystal nanobalance (EQCN), which provides real-time analysis of dynamic surface events, is a valuable tool for analyzing biomolecular interactions. EQCN biosensors are based on mass-sensitive measurements that can detect small mass changes caused by chemical binding to small piezoelectric crystals. Among the various biosensors, the piezoelectric biosensor is considered one of the most sensitive analytical techniques, capable of detecting antigens at picogram levels. EQCN is an effective monitoring technique for regulation of the antibiotics below the maximum residual limit (MRL). The analysis of antibiotic residues requires high sensitivity, rapidity, reliability and cost effectiveness. For analytical purposes the general approach is to take advantage of the piezoelectric effect by immobilizing a biosensing layer on top of the piezoelectric crystal. The sensing layer usually comprises a biological material such as an antibody, enzymes, or aptamers having high specificity and selectivity for the target molecule to be detected. The biosensing layer is usually functionalized using surface chemistry modifications. When these bio-functionalized quartz crystals are exposed to a particular substance of interest (e.g., a substrate, inhibitor, antigen or protein), binding interaction occurs. This causes a frequency or mass change that can be used to determine the amount of material interacted or bound. EQCN biosensors can easily be automated by using a flow injection analysis (FIA) setup coupled through automated pumps and injection valves. Such FIA-EQCN biosensors have great potential for the detection of different analytes such as antibiotic residues in various matrices such as water, waste water, and milk.
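    The frequency-to-mass conversion underlying QCM/EQCN sensing is conventionally the Sauerbrey equation. A minimal sketch using standard AT-cut quartz constants follows; the relation and constants are textbook values, not parameters taken from this chapter:

```python
import math

RHO_Q = 2.648    # quartz density, g/cm^3
MU_Q = 2.947e11  # AT-cut quartz shear modulus, g/(cm*s^2)

def sauerbrey_mass(delta_f_hz, f0_hz, area_cm2):
    """Mass change (g) from a frequency shift; valid for thin rigid films."""
    return -delta_f_hz * area_cm2 * math.sqrt(RHO_Q * MU_Q) / (2 * f0_hz ** 2)

# A 1 Hz drop on a 5 MHz, 1 cm^2 crystal corresponds to roughly 17.7 ng,
# which illustrates why nanobalance-level mass resolution is achievable.
dm = sauerbrey_mass(delta_f_hz=-1.0, f0_hz=5e6, area_cm2=1.0)
print(f"{dm * 1e9:.1f} ng")  # -> 17.7 ng
```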

  9. Parameter sensitivity analysis of a 1-D cold region lake model for land-surface schemes

    NASA Astrophysics Data System (ADS)

    Guerrero, José-Luis; Pernica, Patricia; Wheater, Howard; Mackay, Murray; Spence, Chris

    2017-12-01

    Lakes might be sentinels of climate change, but the uncertainty in their main feedback to the atmosphere - heat-exchange fluxes - is often not considered within climate models. Additionally, these fluxes are seldom measured, hindering critical evaluation of model output. Analysis of the Canadian Small Lake Model (CSLM), a one-dimensional integral lake model, was performed to assess its ability to reproduce diurnal and seasonal variations in heat fluxes and the sensitivity of simulated fluxes to changes in model parameters, i.e., turbulent transport parameters and the light extinction coefficient (Kd). A C++ open-source software package, Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), was used to perform sensitivity analysis (SA) and identify the parameters that dominate model behavior. The generalized likelihood uncertainty estimation (GLUE) was applied to quantify the fluxes' uncertainty, comparing daily-averaged eddy-covariance observations to the output of CSLM. Seven qualitative and two quantitative SA methods were tested, and the posterior likelihoods of the modeled parameters, obtained from the GLUE analysis, were used to determine the dominant parameters and the uncertainty in the modeled fluxes. Despite the ubiquity of the equifinality issue - different parameter-value combinations yielding equivalent results - the answer to the question was unequivocal: Kd, a measure of how much light penetrates the lake, dominates sensible and latent heat fluxes, and the uncertainty in their estimates is strongly related to the accuracy with which Kd is determined. This is important since accurate and continuous measurements of Kd could reduce modeling uncertainty.
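    As background, the GLUE procedure mentioned above can be sketched in a few lines: sample parameter values, score each run with a likelihood measure, keep the "behavioural" runs above a threshold, and weight estimates by likelihood. The toy model, the NSE-based likelihood, and the threshold below are illustrative assumptions; the paper itself used PSUADE with the CSLM, neither of which is reproduced here:

```python
import random

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, a common GLUE likelihood measure."""
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - num / den

def toy_model(kd, forcing):
    # Stand-in for the lake model: flux attenuated by light extinction Kd.
    return [f * (1 - kd) for f in forcing]

random.seed(0)
forcing = [10, 12, 15, 9, 11]
obs = toy_model(0.3, forcing)  # synthetic "observations" with known Kd = 0.3

# Monte Carlo sampling of Kd; keep "behavioural" runs above the threshold.
behavioural = []
for _ in range(1000):
    kd = random.uniform(0.0, 1.0)
    score = nse(toy_model(kd, forcing), obs)
    if score > 0.5:
        behavioural.append((score, kd))

total = sum(s for s, _ in behavioural)
kd_est = sum(s * kd for s, kd in behavioural) / total
print(f"likelihood-weighted Kd estimate: {kd_est:.2f}")
```

The likelihood-weighted estimate recovers the Kd used to generate the synthetic observations, mirroring how GLUE concentrates posterior weight on the dominant parameter.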

  10. A novel bi-level meta-analysis approach: applied to biological pathway analysis.

    PubMed

    Nguyen, Tin; Tagett, Rebecca; Donato, Michele; Mitrea, Cristina; Draghici, Sorin

    2016-02-01

    The accumulation of high-throughput data in public repositories creates a pressing need for integrative analysis of multiple datasets from independent experiments. However, study heterogeneity, study bias, outliers and the lack of power of available methods present real challenges in integrating genomic data. One practical drawback of many P-value-based meta-analysis methods, including Fisher's, Stouffer's, minP and maxP, is that they are sensitive to outliers. Another drawback is that, because they perform just one statistical test for each individual experiment, they may not fully exploit the potentially large number of samples within each study. We propose a novel bi-level meta-analysis approach that employs the additive method and the Central Limit Theorem within each individual experiment and also across multiple experiments. We prove that the bi-level framework is robust against bias, less sensitive to outliers than other methods, and more sensitive to small changes in signal. For comparative analysis, we demonstrate that the intra-experiment analysis has more power than the equivalent statistical test performed on a single large experiment. For pathway analysis, we compare the proposed framework versus classical meta-analysis approaches (Fisher's, Stouffer's and the additive method) as well as against a dedicated pathway meta-analysis package (MetaPath), using 1252 samples from 21 datasets related to three human diseases: acute myeloid leukemia (9 datasets), type II diabetes (5 datasets) and Alzheimer's disease (7 datasets). Our framework outperforms its competitors in correctly identifying pathways relevant to the phenotypes, and is sufficiently general to be applied to any type of statistical meta-analysis. The R scripts are available on demand from the authors (sorin@wayne.edu). Supplementary data are available at Bioinformatics online.
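    For context, the classical P-value combination methods named above (Fisher's and Stouffer's) are short to state in code. This sketch implements only those two standard methods, not the authors' bi-level additive framework, and uses only the standard library:

```python
import math
from statistics import NormalDist

def fisher_combine(pvals):
    """Fisher: -2*sum(log p) ~ chi-square with 2k degrees of freedom."""
    stat = -2 * sum(math.log(p) for p in pvals)
    k = len(pvals)
    # Survival function of chi2(2k): for even dof it is a finite Poisson sum.
    term, total = math.exp(-stat / 2), 0.0
    for i in range(k):
        total += term
        term *= (stat / 2) / (i + 1)
    return total

def stouffer_combine(pvals):
    """Stouffer: average the normal quantiles of the p-values."""
    nd = NormalDist()
    z = sum(nd.inv_cdf(1 - p) for p in pvals) / math.sqrt(len(pvals))
    return 1 - nd.cdf(z)

pvals = [0.01, 0.02, 0.03]  # illustrative per-study p-values
print(f"Fisher:   {fisher_combine(pvals):.2e}")
print(f"Stouffer: {stouffer_combine(pvals):.2e}")
```

A single extreme p-value dominates Fisher's product of logs, which is the outlier sensitivity the abstract refers to; Stouffer's averaging of z-scores dampens, but does not remove, the same effect.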

  11. Statistical analysis and handling of missing data in cluster randomized trials: a systematic review.

    PubMed

    Fiero, Mallorie H; Huang, Shuang; Oren, Eyal; Bell, Melanie L

    2016-02-09

    Cluster randomized trials (CRTs) randomize participants in groups rather than as individuals and are key tools used to assess interventions in health research where treatment contamination is likely or where individual randomization is not feasible. Two major potential pitfalls exist regarding CRTs, namely handling missing data and not accounting for clustering in the primary analysis. The aim of this review was to evaluate approaches for handling missing data and statistical analysis with respect to the primary outcome in CRTs. We systematically searched for CRTs published between August 2013 and July 2014 using PubMed, Web of Science, and PsycINFO. For each trial, two independent reviewers assessed the extent of the missing data and the method(s) used for handling missing data in the primary and sensitivity analyses. We evaluated the primary analysis and determined whether it was at the cluster or individual level. Of the 86 included CRTs, 80 (93%) reported some missing outcome data. Of those reporting missing data, the median percentage of individuals with a missing outcome was 19% (range 0.5 to 90%). The most common way to handle missing data in the primary analysis was complete case analysis (44, 55%), whereas 18 (22%) used mixed models, six (8%) used single imputation, four (5%) used unweighted generalized estimating equations, and two (2%) used multiple imputation. Fourteen (16%) trials reported a sensitivity analysis for missing data, but most assumed the same missing data mechanism as in the primary analysis. Overall, 67 (78%) trials accounted for clustering in the primary analysis. High rates of missing outcome data are present in the majority of CRTs, yet handling of missing data in practice remains suboptimal. Researchers and applied statisticians should apply missing-data methods that are valid under plausible assumptions in order to increase statistical power in trials and reduce the possibility of bias. Sensitivity analyses should be performed with weakened assumptions regarding the missing data mechanism, to explore the robustness of the results reported in the primary analysis.

  12. Hypnotic hypersensitivity to volatile anesthetics and dexmedetomidine in dopamine β-hydroxylase knockout mice.

    PubMed

    Hu, Frances Y; Hanna, George M; Han, Wei; Mardini, Feras; Thomas, Steven A; Wyner, Abraham J; Kelz, Max B

    2012-11-01

    Multiple lines of evidence suggest that the adrenergic system can modulate sensitivity to anesthetic-induced immobility and anesthetic-induced hypnosis as well. However, several considerations prevent the conclusion that the endogenous adrenergic ligands norepinephrine and epinephrine alter anesthetic sensitivity. Using dopamine β-hydroxylase knockout (Dbh) mice genetically engineered to lack the adrenergic ligands and their siblings with normal adrenergic levels, we test the contribution of the adrenergic ligands upon volatile anesthetic induction and emergence. Moreover, we investigate the effects of intravenous dexmedetomidine in adrenergic-deficient mice and their siblings using both righting reflex and processed electroencephalographic measures of anesthetic hypnosis. We demonstrate that the loss of norepinephrine and epinephrine and not other neuromodulators co-packaged in adrenergic neurons is sufficient to cause hypersensitivity to induction of volatile anesthesia. However, the most profound effect of adrenergic deficiency is retarding emergence from anesthesia, which takes two to three times as long in Dbh mice for sevoflurane, isoflurane, and halothane. Having shown that Dbh mice are hypersensitive to volatile anesthetics, we further demonstrate that their hypnotic hypersensitivity persists at multiple doses of dexmedetomidine. Dbh mice exhibit up to 67% shorter latencies to loss of righting reflex and up to 545% longer durations of dexmedetomidine-induced general anesthesia. Central rescue of adrenergic signaling restores control-like dexmedetomidine sensitivity. A novel continuous electroencephalographic analysis illustrates that the longer duration of dexmedetomidine-induced hypnosis is not due to a motor confound, but occurs because of impaired anesthetic emergence. Adrenergic signaling is essential for normal emergence from general anesthesia. Dexmedetomidine-induced general anesthesia does not depend on inhibition of adrenergic neurotransmission.

  13. Anesthesia Technique and Outcomes of Mechanical Thrombectomy in Patients With Acute Ischemic Stroke.

    PubMed

    Bekelis, Kimon; Missios, Symeon; MacKenzie, Todd A; Tjoumakaris, Stavropoula; Jabbour, Pascal

    2017-02-01

    The impact of anesthesia technique on the outcomes of mechanical thrombectomy for acute ischemic stroke remains an issue of debate. We investigated the association of general anesthesia with outcomes in patients undergoing mechanical thrombectomy for ischemic stroke. We performed a cohort study involving patients undergoing mechanical thrombectomy for ischemic stroke from 2009 to 2013, who were registered in the New York Statewide Planning and Research Cooperative System database. An instrumental variable (hospital rate of general anesthesia) analysis was used to simulate the effects of randomization and investigate the association of anesthesia technique with case-fatality and length of stay. Among 1174 patients, 441 (37.6%) underwent general anesthesia and 733 (62.4%) underwent conscious sedation. Using an instrumental variable analysis, we identified that general anesthesia was associated with a 6.4% increased case-fatality (95% confidence interval, 1.9%-11.0%) and 8.4 days longer length of stay (95% confidence interval, 2.9-14.0) in comparison to conscious sedation. This corresponded to 15 patients needing to be treated with conscious sedation to prevent 1 death. Our results were robust in sensitivity analysis with mixed effects regression and propensity score-adjusted regression models. Using a comprehensive all-payer cohort of acute ischemic stroke patients undergoing mechanical thrombectomy in New York State, we identified an association of general anesthesia with increased case-fatality and length of stay. These considerations should be taken into account when standardizing acute stroke care.
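    The "15 patients needed to be treated" figure above follows from the standard number-needed-to-treat identity, NNT = 1 / |absolute risk difference|, applied to the reported 6.4% difference in case-fatality:

```python
# Number needed to treat (here, to harm) from an absolute risk difference.
# 0.064 is the 6.4% case-fatality increase reported in the abstract.
risk_difference = 0.064
nnt = 1 / abs(risk_difference)
print(f"NNT = {nnt:.1f}")  # -> 15.6; the abstract rounds this to 15
```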

  14. The Physiology and Proteomics of Drought Tolerance in Maize: Early Stomatal Closure as a Cause of Lower Tolerance to Short-Term Dehydration?

    PubMed Central

    Benešová, Monika; Holá, Dana; Fischer, Lukáš; Jedelský, Petr L.; Hnilička, František; Wilhelmová, Naďa; Rothová, Olga; Kočová, Marie; Procházková, Dagmar; Honnerová, Jana; Fridrichová, Lenka; Hniličková, Helena

    2012-01-01

    Understanding the response of a crop to drought is the first step in the breeding of tolerant genotypes. In our study, two maize (Zea mays L.) genotypes with contrasting sensitivity to dehydration were subjected to moderate drought conditions. The subsequent analysis of their physiological parameters revealed a decreased stomatal conductance accompanied by a slighter decrease in the relative water content in the sensitive genotype. In contrast, the tolerant genotype maintained open stomata and active photosynthesis, even under dehydration conditions. Drought-induced changes in the leaf proteome were analyzed by two independent approaches, 2D gel electrophoresis and iTRAQ analysis, which provided compatible but only partially overlapping results. Drought caused the up-regulation of protective and stress-related proteins (mainly chaperones and dehydrins) in both genotypes. The differences in the levels of various detoxification proteins corresponded well with the observed changes in the activities of antioxidant enzymes. The number and levels of up-regulated protective proteins were generally lower in the sensitive genotype, implying a reduced level of proteosynthesis, which was also indicated by specific changes in the components of the translation machinery. Based on these results, we propose that the hypersensitive early stomatal closure in the sensitive genotype leads to the inhibition of photosynthesis and, subsequently, to a less efficient synthesis of the protective/detoxification proteins that are associated with drought tolerance. PMID:22719860

  15. The physiology and proteomics of drought tolerance in maize: early stomatal closure as a cause of lower tolerance to short-term dehydration?

    PubMed

    Benešová, Monika; Holá, Dana; Fischer, Lukáš; Jedelský, Petr L; Hnilička, František; Wilhelmová, Naďa; Rothová, Olga; Kočová, Marie; Procházková, Dagmar; Honnerová, Jana; Fridrichová, Lenka; Hniličková, Helena

    2012-01-01

    Understanding the response of a crop to drought is the first step in the breeding of tolerant genotypes. In our study, two maize (Zea mays L.) genotypes with contrasting sensitivity to dehydration were subjected to moderate drought conditions. The subsequent analysis of their physiological parameters revealed a decreased stomatal conductance accompanied by a slighter decrease in the relative water content in the sensitive genotype. In contrast, the tolerant genotype maintained open stomata and active photosynthesis, even under dehydration conditions. Drought-induced changes in the leaf proteome were analyzed by two independent approaches, 2D gel electrophoresis and iTRAQ analysis, which provided compatible but only partially overlapping results. Drought caused the up-regulation of protective and stress-related proteins (mainly chaperones and dehydrins) in both genotypes. The differences in the levels of various detoxification proteins corresponded well with the observed changes in the activities of antioxidant enzymes. The number and levels of up-regulated protective proteins were generally lower in the sensitive genotype, implying a reduced level of proteosynthesis, which was also indicated by specific changes in the components of the translation machinery. Based on these results, we propose that the hypersensitive early stomatal closure in the sensitive genotype leads to the inhibition of photosynthesis and, subsequently, to a less efficient synthesis of the protective/detoxification proteins that are associated with drought tolerance.

  16. A Case Study for Probabilistic Methods Validation (MSFC Center Director's Discretionary Fund, Project No. 94-26)

    NASA Technical Reports Server (NTRS)

    Price, J. M.; Ortega, R.

    1998-01-01

    Probabilistic methods are not a universally accepted approach for the design and analysis of aerospace structures. The validity of this approach must be demonstrated to encourage its acceptance as a viable design and analysis tool for estimating structural reliability. The objective of this study is to develop a well-characterized finite population of similar aerospace structures that can be used to (1) validate probabilistic codes, (2) demonstrate the basic principles behind probabilistic methods, (3) formulate general guidelines for characterization of material drivers (such as elastic modulus) when limited data are available, and (4) investigate how the drivers affect the results of sensitivity analysis at the component/failure-mode level.

  17. Learning to Detect Triggers of Airway Symptoms: The Role of Illness Beliefs, Conceptual Categories and Actual Experience with Allergic Symptoms

    PubMed Central

    Janssens, Thomas; Caris, Eva; Van Diest, Ilse; Van den Bergh, Omer

    2017-01-01

    Background: In asthma and allergic rhinitis, beliefs about what triggers allergic reactions often do not match objective allergy tests. This may be due to insensitivity for expectancy violations as a result of holding trigger beliefs based on conceptual relationships among triggers. In this laboratory experiment, we aimed to investigate how pre-existing beliefs and conceptual relationships among triggers interact with actual experience when learning differential symptom expectations. Methods: Healthy participants (N = 48) received information that allergic reactions were a result of specific sensitivities versus general allergic vulnerability. Next, they performed a trigger learning task using a differential conditioning paradigm: brief inhalation of CO2 enriched air was used to induce symptoms, while participants were led to believe that the symptoms came about as a result of inhaled allergens (conditioned stimuli, CS’s; CS+ followed by symptoms, CS- not followed by symptoms). CS+ and CS- stimuli either shared (e.g., birds-mammals) or did not share (e.g. birds-fungi) category membership. During Acquisition, participants reported symptom expectancy and symptom intensity for all triggers. During a Test 1 day later, participants rated symptom expectancies for old CS+/CS- triggers, for novel triggers within categories, and for exemplars of novel trigger categories. Data were analyzed using multilevel models. Findings: Only a subgroup of participants (n = 22) showed differences between CO2 and room air symptoms. In this group of responders, analysis of symptom expectancies during acquisition did not result in significant differential symptom CS+/CS- acquisition. A retention test 1 day later showed differential CS+/CS- symptom expectancies: When CS categories did not share category membership, specific sensitivity beliefs improved retention of CS+/CS- differentiation. 
However, when CS categories shared category membership, general vulnerability beliefs improved retention of CS+/CS- differentiation. Furthermore, participants showed some selectivity in generalization of symptom expectancies to novel categories, as symptom expectancies did not generalize to novel categories that were unrelated to CS+ or CS- categories. Generalization to novel categories was not affected by information about general vulnerability or specific sensitivities. Discussion: Pre-existing vulnerability beliefs and conceptual relationships between trigger categories influence differential symptom expectancies to allergic triggers. PMID:28638358

  18. Elucidating the relationship between noise sensitivity and personality.

    PubMed

    Shepherd, Daniel; Heinonen-Guzejev, Marja; Hautus, Michael J; Heikkilä, Kauko

    2015-01-01

    Sensitivity to unwanted sounds is common in general and clinical populations. Noise sensitivity refers to physiological and psychological internal states of an individual that increase the degree of reactivity to noise in general. The current study investigated the relationship between the Big Five personality dimensions and noise sensitivity using the 240-item NEO Personality Inventory (NEO-PI) and the 35-item Noise-Sensitivity-Questionnaire (NoiSeQ), respectively. Overall, the Big Five accounted for 33% of the variance in noise sensitivity, with the Introversion-Extroversion dimension explaining the most variability. Furthermore, the Big Five personality dimensions (neuroticism, extroversion, openness, agreeableness, and conscientiousness) each had an independent, linear effect on noise sensitivity. However, additional analyses indicated that the influence of gender and age must be considered when examining the relationship between personality and noise sensitivity. The findings caution against pooling data across genders, not controlling for age, and using personality dimensions in isolation.

  19. Elucidating the relationship between noise sensitivity and personality

    PubMed Central

    Shepherd, Daniel; Heinonen-Guzejev, Marja; Hautus, Michael J.; Heikkilä, Kauko

    2015-01-01

    Sensitivity to unwanted sounds is common in general and clinical populations. Noise sensitivity refers to physiological and psychological internal states of an individual that increase the degree of reactivity to noise in general. The current study investigated the relationship between the Big Five personality dimensions and noise sensitivity using the 240-item NEO Personality Inventory (NEO-PI) and the 35-item Noise-Sensitivity-Questionnaire (NoiSeQ), respectively. Overall, the Big Five accounted for 33% of the variance in noise sensitivity, with the Introversion-Extroversion dimension explaining the most variability. Furthermore, the Big Five personality dimensions (neuroticism, extroversion, openness, agreeableness, and conscientiousness) each had an independent, linear effect on noise sensitivity. However, additional analyses indicated that the influence of gender and age must be considered when examining the relationship between personality and noise sensitivity. The findings caution against pooling data across genders, not controlling for age, and using personality dimensions in isolation. PMID:25913556

  20. Prevalence of food sensitization and probable food allergy among adults in India: the EuroPrevall INCO study.

    PubMed

    Mahesh, P A; Wong, Gary W K; Ogorodova, L; Potts, J; Leung, T F; Fedorova, O; Holla, Amrutha D; Fernandez-Rivas, M; Clare Mills, E N; Kummeling, I; Versteeg, S A; van Ree, R; Yazdanbakhsh, M; Burney, P

    2016-07-01

    Data are lacking regarding the prevalence of food sensitization and probable food allergy among the general population in India. We report the prevalence of sensitization and probable food allergy to 24 common foods among adults from the general population in Karnataka, South India. The study was conducted in two stages: a screening study and a case-control study. A total of 11 791 adults aged 20-54 years were randomly sampled from the general population in South India and answered a screening questionnaire. A total of 588 subjects (236 cases and 352 controls) participated in the case-control study, involving a detailed questionnaire and specific IgE estimation for 24 common foods. A high level of sensitization (26.5%) was observed for most of the foods in the general population, higher than that observed among adults in Europe, except for those foods that cross-react with birch pollen. Most of the sensitization was observed in subjects who had total IgE above the median IgE level. A high level of cross-reactivity was observed among different pollens and foods and among foods. The prevalence of probable food allergy (self-reports of adverse symptoms after the consumption of food and specific IgE to the same food) was 1.2%, mainly accounted for by cow's milk (0.5%) and apple (0.5%). Very high levels of sensitization were observed for most foods, including those not commonly consumed, in the general population. Relative to the levels of sensitization, the prevalence of probable food allergy was low. This dissociation needs to be further explored in future studies. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  1. Interpersonal sensitivity mediates the effects of child abuse and affective temperaments on depressive symptoms in the general adult population.

    PubMed

    Otsuka, Ayano; Takaesu, Yoshikazu; Sato, Mitsuhiko; Masuya, Jiro; Ichiki, Masahiko; Kusumi, Ichiro; Inoue, Takeshi

    2017-01-01

    Recent studies have suggested that multiple factors interact with the onset and prognosis of major depressive disorders. In this study, we investigated how child abuse, affective temperaments, and interpersonal sensitivity are interrelated, and how they affect depressive symptoms in the general adult population. A total of 415 volunteers from the general adult population completed the Patient Health Questionnaire-9, the Temperament Evaluation of Memphis, Pisa, Paris, and San Diego-Autoquestionnaire version, the Child Abuse and Trauma Scale, and the Interpersonal Sensitivity Measure, which are all self-administered questionnaires. Data were subjected to structural equation modeling (Mplus), and single and multiple regression analyses. The effect of child abuse on depressive symptoms was mediated by interpersonal sensitivity and 4 affective temperaments, including depressive, cyclothymic, anxious, and irritable temperaments. In addition, the effect of these temperaments on depressive symptoms was mediated by interpersonal sensitivity, indicating the indirect enhancement of depressive symptoms. In contrast to these 4 temperaments, the hyperthymic temperament did not mediate the effect of child abuse on depressive symptoms; its effect was not mediated by interpersonal sensitivity. However, a greater hyperthymic temperament predicted decreased depressive symptoms and interpersonal sensitivity, independent of any mediation effect. Because this is a cross-sectional study, long-term prospective studies are necessary to confirm its findings. Therefore, recall bias should be considered when interpreting the results. As the subjects were adults from the general population, the results may not be generalizable towards all patients with major depression. This study suggests that child abuse and affective temperaments affect depressive symptoms partly through interpersonal sensitivity. 
Interpersonal sensitivity may have a major role in forming the link between abuse, affective temperament, and depression.

  2. The Robustness of Plant-Pollinator Assemblages: Linking Plant Interaction Patterns and Sensitivity to Pollinator Loss

    PubMed Central

    Astegiano, Julia; Massol, François; Vidal, Mariana Morais; Cheptou, Pierre-Olivier; Guimarães, Paulo R.

    2015-01-01

    Most flowering plants depend on pollinators to reproduce. Thus, evaluating the robustness of plant-pollinator assemblages to species loss is a major concern. How species interaction patterns are related to species sensitivity to partner loss may influence the robustness of plant-pollinator assemblages. In plants, both reproductive dependence on pollinators (breeding system) and dispersal ability may modulate plant sensitivity to pollinator loss. For instance, species with strong dependence (e.g. dioecious species) and low dispersal (e.g. seeds dispersed by gravity) may be the most sensitive to pollinator loss. We compared the interaction patterns of plants differing in dependence on pollinators and dispersal ability in a meta-dataset comprising 192 plant species from 13 plant-pollinator networks. In addition, network robustness was compared under different scenarios representing sequences of plant extinctions associated with plant sensitivity to pollinator loss. Species with different dependence on pollinators and dispersal ability showed similar levels of generalization. Although plants with low dispersal ability interacted with more generalized pollinators, low-dispersal plants with strong dependence on pollinators (i.e. the most sensitive to pollinator loss) interacted with more particular sets of pollinators (i.e. shared a low proportion of pollinators with other plants). Only two assemblages showed lower robustness under the scenario considering plant generalization, dependence on pollinators and dispersal ability than under the scenario where extinction sequences only depended on plant generalization (i.e. where higher generalization level was associated with lower probability of extinction). Overall, our results support the idea that species generalization and network topology may be good predictors of assemblage robustness to species loss, independently of plant dispersal ability and breeding system. 
In contrast, since ecological specialization among partners may increase the probability of disruption of interactions, the fact that the plants most sensitive to pollinator loss interacted with more particular pollinator assemblages suggests that the persistence of these plants and their pollinators might be highly compromised. PMID:25646762
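
The robustness comparison described above can be illustrated with a toy computation. The sketch below, under the common convention that a plant persists while at least one of its pollinators remains, scores robustness as the normalized area under the attack-tolerance curve; the network and removal order are invented for illustration:

```python
import numpy as np

def robustness(adj, removal_order):
    """Area under the attack-tolerance curve of a bipartite network.
    adj[i, j] = True if plant i interacts with pollinator j; pollinators
    are removed in removal_order, and a plant persists while at least
    one of its pollinators remains."""
    adj = adj.astype(bool).copy()
    n_plants, n_poll = adj.shape
    frac = [1.0]  # fraction of plants surviving after each removal
    for j in removal_order:
        adj[:, j] = False
        frac.append(adj.any(axis=1).sum() / n_plants)
    frac = np.array(frac)
    # Trapezoidal area, normalized so a maximally robust network scores ~1.
    return ((frac[:-1] + frac[1:]) / 2).sum() / n_poll

# Toy 3-plant x 3-pollinator network, removing the most-connected first.
adj = np.array([[1, 1, 0],
                [0, 1, 1],
                [0, 0, 1]])
print(robustness(adj, removal_order=[1, 2, 0]))
```

Different removal orders (e.g. most-generalized pollinators first vs. last) yield different robustness scores, which is how extinction scenarios are compared.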

  3. Monochromatic Measurements of the JPSS-1 VIIRS Polarization Sensitivity

    NASA Technical Reports Server (NTRS)

    McIntire, Jeff; Moyer, David; Brown, Steven W.; Lykke, Keith R.; Waluschka, Eugene; Oudrari, Hassan; Xiong, Xiaoxiong

    2016-01-01

    Polarization sensitivity is a critical property that must be characterized for spaceborne remote sensing instruments designed to measure reflected solar radiation. Broadband testing of the first Joint Polar-orbiting Satellite System (JPSS-1) Visible Infrared Imaging Radiometer Suite (VIIRS) showed unexpectedly large polarization sensitivities for the bluest bands on VIIRS (centered between 400 and 600 nm). Subsequent ray-trace modeling indicated that large diattenuation on the edges of the bandpass for these spectral bands was the driver behind these large sensitivities. Additional testing using the National Institute of Standards and Technology's Traveling Spectral Irradiance and Radiance Responsivity Calibrations Using Uniform Sources was added to the test program to verify and enhance the model. The testing was limited in scope to two spectral bands at two scan angles; nonetheless, this additional testing provided valuable insight into the polarization sensitivity. Analysis has shown that the derived diattenuation agreed with the broadband measurements to within an absolute difference of about 0.4 and that the ray-trace model reproduced the general features of the measured data. Additionally, by deriving the spectral responsivity, the linear diattenuation is shown to be explicitly dependent on the changes in bandwidth with polarization state.
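
A generic illustration of how linear diattenuation is extracted in such tests (not the VIIRS analysis pipeline): rotating a linear polarizer modulates the detector response at twice the polarizer angle, and the second-harmonic amplitude relative to the mean response gives the diattenuation.

```python
import numpy as np

def diattenuation(theta, response):
    """Fit R(theta) = a0 + a2*cos(2*theta) + b2*sin(2*theta) by least
    squares; linear diattenuation is the 2nd-harmonic amplitude / mean."""
    X = np.column_stack([np.ones_like(theta),
                         np.cos(2 * theta),
                         np.sin(2 * theta)])
    a0, a2, b2 = np.linalg.lstsq(X, response, rcond=None)[0]
    return np.hypot(a2, b2) / a0

# Simulated sweep of polarizer angle with a 3% polarization sensitivity.
theta = np.linspace(0.0, np.pi, 19)
simulated = 1.0 + 0.03 * np.cos(2 * theta - 0.4)
print(diattenuation(theta, simulated))
```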

  4. Behavioral profiles of feline breeds in Japan.

    PubMed

    Takeuchi, Yukari; Mori, Yuji

    2009-08-01

    To clarify the behavioral profiles of 9 feline purebreds, 2 Persian subbreeds and the Japanese domestic cat, a questionnaire survey was distributed to 67 small-animal veterinarians. We found significant differences among breeds in all behavioral traits examined except for "inappropriate elimination". In addition, sexual differences were observed in certain behaviors, including "aggression toward cats", "general activity", "novelty-seeking", and "excitability". These behaviors were more common in males than females, whereas "nervousness" and "inappropriate elimination" were rated higher in females. When all breeds were categorized into four groups on the basis of a cluster analysis using the scores of two behavioral trait factors called "aggressiveness/sensitivity" and "vivaciousness", the group including Abyssinian, Russian Blue, Somali, Siamese, and Chinchilla breeds showed high aggressiveness/sensitivity and low vivaciousness. In contrast, the group including the American Shorthair and Japanese domestic cat displayed low aggressiveness/sensitivity and high vivaciousness, and the Himalayan and Persian group showed mild aggressiveness/sensitivity and very low vivaciousness. Finally, the group containing Maine Coon, Ragdoll, and Scottish Fold breeds displayed very low aggressiveness/sensitivity and low vivaciousness. The present results demonstrate that some feline behavioral traits vary by breed and/or sex.

  5. Noise sensitivity: Symptoms, health status, illness behavior and co-occurring environmental sensitivities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baliatsas, Christos, E-mail: c.baliatsas@nivel.nl

    Epidemiological evidence on the symptomatic profile, health status and illness behavior of people with subjective sensitivity to noise is still scarce. It is also unknown to what extent noise sensitivity co-occurs with other environmental sensitivities such as multi-chemical sensitivity and sensitivity to electromagnetic fields (EMF). A cross-sectional study performed in the Netherlands, combining self-administered questionnaires and electronic medical records of non-specific symptoms (NSS) registered by general practitioners (GP), allowed us to explore this further. The study sample consisted of 5806 participants, drawn from 21 general practices. Among participants, 722 (12.5%) responded “absolutely agree” to the statement “I am sensitive to noise”, comprising the high noise-sensitive (HNS) group. Compared to the rest of the sample, people in the HNS group reported significantly higher scores on number and duration of self-reported NSS, increased psychological distress, decreased sleep quality and general health, more negative symptom perceptions and higher prevalence of healthcare contacts, GP-registered NSS and prescriptions for antidepressants and benzodiazepines. These results remained robust after adjustment for demographic, residential and lifestyle characteristics, objectively measured nocturnal noise exposure from road traffic and GP-registered morbidity. Co-occurrence rates with other environmental sensitivities varied between 9% and 50%. Individuals with self-declared sensitivity to noise are characterized by a high prevalence of multiple NSS, poorer health status and increased illness behavior independently of noise exposure levels. The findings support the notion that different types of environmental sensitivities partly overlap. Highlights: • People with self-reported noise sensitivity experience multiple non-specific symptoms. • They also report comparatively poorer health and increased illness behavior. • Co-occurrence with other environmental sensitivities is moderate to high. • Road-traffic noise and GP-registered morbidity did not account for these results.

  6. Online and offline tools for head movement compensation in MEG.

    PubMed

    Stolk, Arjen; Todorovic, Ana; Schoffelen, Jan-Mathijs; Oostenveld, Robert

    2013-03-01

    Magnetoencephalography (MEG) is measured above the head, which makes it sensitive to variations of the head position with respect to the sensors. Head movements blur the topography of the neuronal sources of the MEG signal, increase localization errors, and reduce statistical sensitivity. Here we describe two novel and readily applicable methods that compensate for the detrimental effects of head motion on the statistical sensitivity of MEG experiments. First, we introduce an online procedure that continuously monitors head position. Second, we describe an offline analysis method that takes into account the head position time-series. We quantify the performance of these methods in the context of three different experimental settings, involving somatosensory, visual and auditory stimuli, assessing both individual and group-level statistics. The online head localization procedure allowed for optimal repositioning of the subjects over multiple sessions, resulting in a 28% reduction of the variance in dipole position and an improvement of up to 15% in statistical sensitivity. Offline incorporation of the head position time-series into the general linear model resulted in improvements of group-level statistical sensitivity between 15% and 29%. These tools can substantially reduce the influence of head movement within and between sessions, increasing the sensitivity of many cognitive neuroscience experiments. Copyright © 2012 Elsevier Inc. All rights reserved.
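
The offline approach of entering the head-position time-series into a general linear model can be illustrated by the simpler operation of regressing position-correlated variance out of the data; the sketch below is a generic OLS residualization, not the authors' implementation:

```python
import numpy as np

def regress_out(data, head_pos):
    """Return OLS residuals of each data channel after removing an
    intercept plus head-position regressors.
    data: (n_times, n_channels); head_pos: (n_times, n_regressors)."""
    X = np.column_stack([np.ones(len(head_pos)), head_pos])
    beta, *_ = np.linalg.lstsq(X, data, rcond=None)
    return data - X @ beta

# Simulate two channels contaminated by three position traces (x, y, z).
rng = np.random.default_rng(0)
signal = rng.standard_normal((1000, 2))
head = rng.standard_normal((1000, 3))
leak = np.array([[0.5, 0.0],
                 [0.2, 0.3],
                 [0.0, 0.4]])
cleaned = regress_out(signal + head @ leak, head)
```

By construction the residuals are orthogonal to the position regressors, which is what removes movement-correlated variance before (or within) the statistical model.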

  7. Vantage Sensitivity: Environmental Sensitivity to Positive Experiences as a Function of Genetic Differences.

    PubMed

    Pluess, Michael

    2017-02-01

    A large number of gene-environment interaction studies provide evidence that some people are more likely to be negatively affected by adverse experiences as a function of specific genetic variants. However, such "risk" variants are surprisingly frequent in the population. Evolutionary analysis suggests that genetic variants associated with increased risk for maladaptive development under adverse environmental conditions are maintained in the population because they are also associated with advantages in response to different contextual conditions. These advantages may include (a) coexisting genetic resilience pertaining to other adverse influences, (b) a general genetic susceptibility to both low and high environmental quality, and (c) a coexisting propensity to benefit disproportionately from positive and supportive exposures, as reflected in the recent framework of vantage sensitivity. After introducing the basic properties of vantage sensitivity and highlighting conceptual similarities and differences with diathesis-stress and differential susceptibility patterns of gene-environment interaction, selected and recent empirical evidence for the notion of vantage sensitivity as a function of genetic differences is reviewed. The unique contribution that the new perspective of vantage sensitivity may make to our understanding of social inequality will be discussed after suggesting neurocognitive and molecular mechanisms hypothesized to underlie the propensity to benefit disproportionately from benevolent experiences. © 2015 Wiley Periodicals, Inc.

  8. Fracture experience among participants from the FROCAT study: what thresholding is appropriate using the FRAX tool?

    PubMed

    Azagra, R; Zwart, M; Aguyé, A; Martín-Sánchez, J C; Casado, E; Díaz-Herrera, M A; Moriña, D; Cooper, C; Díez-Pérez, A; Dennison, E M

    2016-01-01

    To perform an external validation of FRAX algorithm thresholds for reporting level of risk of fracture in Spanish women (low < 5%; intermediate ≥ 5% and < 7.5%; high ≥ 7.5%) taken from the prospective cohort "FRIDEX". A retrospective study of 1090 women aged ≥ 40 and ≤ 90 years obtained from the general population (FROCAT cohort). FRAX was calculated with data registered in 2002. All fractures were validated in 2012. Sensitivity analysis was performed. When analyzing the cohort of 884 women excluding current or past anti-osteoporotic medication (AOM), using our nominated thresholds, among the 621 (70.2%) women at low risk of fracture, 5.2% [CI95%: 3.4-7.6] sustained a fragility fracture; among the 99 at intermediate risk, 12.1% [6.4-20.2]; and among the 164 defined as high risk, 15.9% [10.6-24.2]. Sensitivity analysis against the FRIDEX risk-stratification model of FRAX Spain showed no significant difference. Including the 206 women with AOM, the sensitivity analysis showed no difference in the intermediate- and high-risk groups and minimal differences in the low-risk group. Our findings support and validate the use of the FRIDEX thresholds of FRAX when discussing the risk of fracture and the initiation of therapy with patients. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
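
The risk bands used in this validation (low < 5%, intermediate ≥ 5% and < 7.5%, high ≥ 7.5%) are straightforward to encode; the helper name below is hypothetical:

```python
def frax_risk_band(major_fracture_risk_pct: float) -> str:
    """FRIDEX-derived bands used in the study: low < 5%,
    intermediate >= 5% and < 7.5%, high >= 7.5%."""
    if major_fracture_risk_pct < 5.0:
        return "low"
    if major_fracture_risk_pct < 7.5:
        return "intermediate"
    return "high"

print([frax_risk_band(r) for r in (3.2, 6.0, 9.1)])
# ['low', 'intermediate', 'high']
```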

  9. Methodology for Sensitivity Analysis, Approximate Analysis, and Design Optimization in CFD for Multidisciplinary Applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1996-01-01

    An incremental iterative formulation, together with the well-known spatially split approximate-factorization algorithm, is presented for solving the large, sparse systems of linear equations that are associated with aerodynamic sensitivity analysis. This formulation is also known as the 'delta' or 'correction' form. For smaller two-dimensional problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. However, iterative methods are needed for larger two-dimensional and three-dimensional applications because direct methods require more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioned coefficient matrix; this problem is overcome when these equations are cast in the incremental form. The methodology is successfully implemented and tested using an upwind cell-centered finite-volume formulation applied in two dimensions to the thin-layer Navier-Stokes equations for external flow over an airfoil. In three dimensions this methodology is demonstrated with a marching-solution algorithm for the Euler equations to calculate supersonic flow over the High-Speed Civil Transport configuration (HSCT 24E). The sensitivity derivatives obtained with the incremental iterative method from a marching Euler code are used in a design-improvement study of the HSCT configuration that involves thickness, camber, and planform design variables.
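
The 'delta' or correction form can be sketched generically: each pass solves an approximate operator M for a correction driven by the true residual, so the converged answer satisfies the exact equations even though M is only approximate. The tridiagonal M below stands in for an approximate factorization and is purely illustrative:

```python
import numpy as np

def delta_form_solve(A, b, M, tol=1e-10, max_iter=500):
    """Incremental ('delta') iteration: solve M dx = r with an
    approximate operator M, where r = b - A x is the exact residual,
    then update x = x + dx. At convergence, A x = b exactly."""
    x = np.zeros_like(b)
    for _ in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            break
        x = x + np.linalg.solve(M, r)
    return x

# Diagonally dominant test system; M keeps only the tridiagonal band of A.
n = 20
rng = np.random.default_rng(1)
A = (np.eye(n) * 4 + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
     + 0.01 * rng.standard_normal((n, n)))
M = np.triu(np.tril(A, 1), -1)  # tridiagonal approximation of A
b = rng.standard_normal(n)
x = delta_form_solve(A, b, M)
print(np.linalg.norm(A @ x - b))
```

The key property, as the abstract notes, is that the approximate operator only affects the convergence rate, not the converged solution.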

  10. Economic and clinical comparison of atypical depot antipsychotic drugs for treatment of chronic schizophrenia in the Czech Republic.

    PubMed

    Einarson, Thomas R; Zilbershtein, Roman; Skoupá, Jana; Veselá, Sárka; Garg, Madhur; Hemels, Michiel E H

    2013-09-01

    The Czech Republic is faced with making choices between pharmaceutical products, including depot injectable antipsychotics. A pharmacoeconomic analysis was conducted to determine the cost-effectiveness of atypical depots. An existing 1-year decision-analytic framework was adapted to model drug use in this healthcare system. The average direct costs to the General Insurance Company of the Czech Republic of using paliperidone palmitate long-acting injection (PP-LAI; Xeplion®), risperidone (RIS-LAI; Risperdal Consta®), and olanzapine pamoate (OLZ-LAI; Zypadhera®) were determined. Literature-derived clinical rates populated the model, with costs adjusted to 2012 Euros using the consumer price index. Outcomes included quality-adjusted life-years (QALYs), days in remission, and proportions hospitalized or visiting emergency rooms. One-way sensitivity analyses were calculated for all important inputs. A multivariate probability analysis was used to examine the stability of results using 10,000 iterations of simulated input over reasonable ranges of all included variables. Expected average costs per patient treated were €5377 for PP-LAI, €6118 for RIS-LAI, and €6537 for OLZ-LAI. Respective QALYs were 0.817, 0.809, and 0.811; ER visits were 0.127, 0.134, and 0.141; hospitalizations were 0.252, 0.298, and 0.289. Results were generally robust in sensitivity analyses. PP-LAI dominated RIS-LAI and OLZ-LAI in 90.2% and 92.1% of simulations, respectively. Results were insensitive to drug prices but sensitive to adherence and hospitalization rates. PP-LAI dominated the other two drugs, as it had a lower overall cost and superior clinical outcomes, making it the preferred choice. Using PP-LAI in place of RIS-LAI for chronic relapsing schizophrenia would reduce the overall costs of care for the healthcare system.
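
The multivariate probability analysis mentioned above is a standard probabilistic sensitivity analysis; a hedged sketch of the dominance count follows (the distributions below are invented for illustration, not the study's calibrated inputs):

```python
import numpy as np

# Sample cost and QALY inputs for two treatments across 10,000 simulated
# scenarios and count how often treatment A strictly dominates B
# (lower cost AND more QALYs). Means echo the reported point estimates;
# the spreads are illustrative assumptions.
rng = np.random.default_rng(42)
n = 10_000

cost_a = rng.normal(5377, 400, n); qaly_a = rng.normal(0.817, 0.01, n)
cost_b = rng.normal(6118, 400, n); qaly_b = rng.normal(0.809, 0.01, n)

dominates = (cost_a < cost_b) & (qaly_a > qaly_b)
print(f"A dominates B in {dominates.mean():.1%} of simulations")
```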

  11. Sensitive and Specific Fluorescent Probes for Functional Analysis of the Three Major Types of Mammalian ABC Transporters

    PubMed Central

    Lebedeva, Irina V.; Pande, Praveen; Patton, Wayne F.

    2011-01-01

    An underlying mechanism for multidrug resistance (MDR) is up-regulation of the transmembrane ATP-binding cassette (ABC) transporter proteins. ABC transporters also determine the general fate and effect of pharmaceutical agents in the body. The three major types of ABC transporters are MDR1 (P-gp, P-glycoprotein, ABCB1), MRP1/2 (ABCC1/2) and BCRP/MXR (ABCG2) proteins. Flow cytometry (FCM) allows determination of the functional expression levels of ABC transporters in live cells, but most dyes used as indicators (rhodamine 123, DiOC2(3), calcein-AM) have limited applicability as they do not detect all three major types of ABC transporters. Dyes with broad coverage (such as doxorubicin, daunorubicin and mitoxantrone) lack sensitivity due to overall dimness and thus may yield a significant percentage of false negative results. We describe two novel fluorescent probes that are substrates for all three common types of ABC transporters and can serve as indicators of MDR in flow cytometry assays using live cells. The probes exhibit fast internalization, favorable uptake/efflux kinetics and high sensitivity of MDR detection, as established by multidrug resistance activity factor (MAF) values and Kolmogorov-Smirnov statistical analysis. Used in combination with general or specific inhibitors of ABC transporters, both dyes readily identify functional efflux and are capable of detecting small levels of efflux as well as defining the type of multidrug resistance. The assay can be applied to the screening of putative modulators of ABC transporters, facilitating rapid, reproducible, specific and relatively simple functional detection of ABC transporter activity, and ready implementation on widely available instruments. PMID:21799851
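
The MAF metric cited above has a commonly used flow-cytometry definition based on fluorescence with and without a transporter inhibitor; assuming that convention (the numbers are illustrative):

```python
def maf(mfi_with_inhibitor: float, mfi_without_inhibitor: float) -> float:
    """Multidrug resistance activity factor, under the common convention
    MAF = 100 * (F_inhibited - F_baseline) / F_inhibited, where F is the
    mean/median fluorescence intensity of dye-loaded cells. Efflux-active
    cells retain little dye unless the transporter is blocked."""
    return 100.0 * (mfi_with_inhibitor - mfi_without_inhibitor) / mfi_with_inhibitor

print(maf(850.0, 300.0))  # strong efflux: inhibition greatly boosts retention
```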

  12. Sensitive Multi-Species Emissions Monitoring: Infrared Laser-Based Detection of Trace-Level Contaminants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steill, Jeffrey D.; Huang, Haifeng; Hoops, Alexandra A.

    This report summarizes our development of spectroscopic chemical analysis techniques and spectral modeling for trace-gas measurements of highly regulated, low-concentration species present in flue gas emissions from utility coal boilers, such as HCl under conditions of high humidity. Detailed spectral modeling of HCl and other important combustion and atmospheric species such as H2O, CO2, N2O, NO2, SO2, and CH4 demonstrates that IR-laser spectroscopy is a sensitive multi-component analysis strategy. Experimental measurements from techniques based on IR laser spectroscopy are presented that demonstrate sub-ppm sensitivity to these species. Photoacoustic infrared spectroscopy is used to detect and quantify HCl at ppm levels with extremely high signal-to-noise, even under conditions of high relative humidity. Additionally, cavity ring-down IR spectroscopy is used to achieve extremely high sensitivity to combustion trace gases in this spectral region; ppm-level CH4 detection is one demonstrated example. The importance of spectral resolution in the sensitivity of a trace-gas measurement is examined by spectral modeling in the mid- and near-IR, and efforts to improve measurement resolution through novel instrument development are described. While previous project reports focused on the benefits and complexities of the dual-etalon cavity ring-down infrared spectrometer, here we describe the steps taken to implement this unique and potentially revolutionary instrument. This report also illustrates and critiques the general strategy of IR-laser photodetection of trace gases, leading to the conclusion that mid-IR laser spectroscopy techniques provide a promising basis for further instrument development and implementation that will enable cost-effective, sensitive detection of multiple key contaminant species simultaneously.
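
The cavity ring-down measurement rests on a standard relation: absorption shortens the cavity decay time, and the absorption coefficient follows from the empty-cavity and sample decay times. A minimal sketch with illustrative numbers:

```python
# Standard cavity ring-down relation: alpha = (1/c) * (1/tau - 1/tau0),
# where tau0 is the empty-cavity decay time and tau the decay time with
# the absorber present. The decay times below are illustrative.
C_LIGHT_CM_S = 2.998e10  # speed of light, cm/s

def absorption_coefficient(tau_s: float, tau0_s: float) -> float:
    """Per-centimeter absorption coefficient from ring-down times (s)."""
    return (1.0 / C_LIGHT_CM_S) * (1.0 / tau_s - 1.0 / tau0_s)

# A 10 us empty-cavity decay shortened to 9.5 us by the sample:
alpha = absorption_coefficient(9.5e-6, 10e-6)
print(f"{alpha:.2e} cm^-1")
```

Because the kilometer-scale effective path length of a high-finesse cavity makes tiny changes in tau measurable, very small alpha values (hence sub-ppm concentrations) become detectable.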

  13. Stepwise sensitivity analysis from qualitative to quantitative: Application to the terrestrial hydrological modeling of a Conjunctive Surface-Subsurface Process (CSSP) land surface model

    NASA Astrophysics Data System (ADS)

    Gan, Yanjun; Liang, Xin-Zhong; Duan, Qingyun; Choi, Hyun Il; Dai, Yongjiu; Wu, Huan

    2015-06-01

    An uncertainty quantification framework was employed to examine the sensitivities of 24 model parameters from a newly developed Conjunctive Surface-Subsurface Process (CSSP) land surface model (LSM). The sensitivity analysis (SA) was performed over 18 representative watersheds in the contiguous United States to examine the influence of model parameters in the simulation of terrestrial hydrological processes. Two normalized metrics, relative bias (RB) and Nash-Sutcliffe efficiency (NSE), were adopted to assess the fit between simulated and observed streamflow discharge (SD) and evapotranspiration (ET) for a 14-year period. SA was conducted using a multiobjective two-stage approach, in which the first stage was a qualitative SA using the Latin Hypercube-based One-At-a-Time (LH-OAT) screening, and the second stage was a quantitative SA using the Multivariate Adaptive Regression Splines (MARS)-based Sobol' sensitivity indices. This approach combines the merits of qualitative and quantitative global SA methods, and is effective and efficient for understanding and simplifying large, complex system models. Ten of the 24 parameters were identified as important across different watersheds. The contribution of each parameter to the total response variance was then quantified by Sobol' sensitivity indices. Generally, parameter interactions contribute the most to the response variance of the CSSP, and only 5 out of 24 parameters dominate model behavior. Four photosynthetic and respiratory parameters are shown to be influential to ET, whereas the reference depth for saturated hydraulic conductivity is the most influential parameter for SD in most watersheds. Parameter sensitivity patterns mainly depend on hydroclimatic regime, as well as vegetation type and soil texture. This article was corrected on 26 JUN 2015. See the end of the full text for details.
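
The variance-based stage of such an analysis can be illustrated with a toy Monte Carlo estimate of first-order Sobol' indices. The sketch below uses a Saltelli-style pick-freeze estimator on a hypothetical three-parameter model; the model, sample sizes, and variable names are illustrative assumptions, not the CSSP setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy stand-in for an expensive simulation: the output is driven
    # strongly by x0, weakly by x1, and not at all by x2.
    return 4.0 * x[:, 0] + 0.4 * x[:, 1] ** 2

n, d = 200_000, 3
A = rng.uniform(0.0, 1.0, (n, d))   # two independent sample matrices
B = rng.uniform(0.0, 1.0, (n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

first_order = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]             # resample only input i ("pick-freeze")
    # First-order estimator: S_i = E[f(B) * (f(AB_i) - f(A))] / Var(f)
    first_order.append(np.mean(fB * (model(ABi) - fA)) / var)
```

For this toy model the first index comes out near 1 and the others near 0, i.e. the screening stage could safely discard x1 and x2 before any expensive quantitative analysis.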

  14. Combining 'Bottom-Up' and 'Top-Down' Methods to Assess Ethnic Difference in Clearance: Bitopertin as an Example.

    PubMed

    Feng, Sheng; Shi, Jun; Parrott, Neil; Hu, Pei; Weber, Cornelia; Martin-Facklam, Meret; Saito, Tomohisa; Peck, Richard

    2016-07-01

    We propose a strategy for studying ethnopharmacology by conducting sequential physiologically based pharmacokinetic (PBPK) prediction (a 'bottom-up' approach) and population pharmacokinetic (popPK) confirmation (a 'top-down' approach), or in reverse order, depending on whether the purpose is ethnic effect assessment for a new molecular entity under development or a tool for ethnic sensitivity prediction for a given pathway. The strategy is exemplified with bitopertin. A PBPK model was built using Simcyp(®) to simulate the pharmacokinetics of bitopertin and to predict the ethnic sensitivity in clearance, given pharmacokinetic data in just one ethnicity. Subsequently, a popPK model was built using NONMEM(®) to assess the effect of ethnicity on clearance, using human data from multiple ethnic groups. A comparison was made to confirm the PBPK-based ethnic sensitivity prediction, using the results of the popPK analysis. PBPK modelling predicted that the bitopertin geometric mean clearance values after 20 mg oral administration in Caucasians would be 1.32-fold and 1.27-fold higher than the values in Chinese and Japanese, respectively. The ratios of typical clearance in Caucasians to the values in Chinese and Japanese estimated by popPK analysis were 1.20 and 1.17, respectively. The popPK analysis results were similar to the PBPK modelling results. As a general framework, we propose that PBPK modelling should be considered to predict ethnic sensitivity of pharmacokinetics prior to any human data and/or with data in only one ethnicity. In some cases, this will be sufficient to guide initial dose selection in different ethnicities. After clinical trials in different ethnicities, popPK analysis can be used to confirm ethnic differences and to support dose justification and labelling. PBPK modelling prediction and popPK analysis confirmation can complement each other to assess ethnic differences in pharmacokinetics at different drug development stages.
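
The fold differences quoted above are ratios of geometric means, the usual summary for log-normally distributed clearance. A minimal sketch, using hypothetical clearance values rather than the bitopertin data:

```python
import math

def geometric_mean(values):
    # Clearance is roughly log-normal, so groups are compared on the
    # geometric rather than the arithmetic mean.
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical individual clearance values (L/h) -- illustrative only,
# not the study data.
cl_group_a = [8.1, 9.5, 7.7, 10.2]   # e.g. one ethnic group
cl_group_b = [6.9, 7.4, 8.0, 6.5]    # e.g. another ethnic group

fold_difference = geometric_mean(cl_group_a) / geometric_mean(cl_group_b)
```

A popPK "confirmation" then amounts to checking that the model-estimated typical-clearance ratio lands close to the PBPK-predicted fold difference.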

  15. Analysis of electrical tomography sensitive field based on multi-terminal network and electric field

    NASA Astrophysics Data System (ADS)

    He, Yongbo; Su, Xingguo; Xu, Meng; Wang, Huaxiang

    2010-08-01

    Electrical tomography (ET) non-intrusively reconstructs the conductivity/permittivity distribution of the field of interest from boundary voltage/current measurements. The sensor is usually modeled as an electric field, and the finite element method (FEM) is commonly used to calculate the sensitivity matrix and to optimize the sensor architecture. However, since only lumped circuit parameters can be measured by the data-acquisition electronics, it is useful to treat the sensor as a multi-terminal network. Two types of multi-terminal network, with common-node and common-loop topologies, are introduced. Obtaining more independent measurements and making the current distribution more uniform are the two main ways to mitigate the inherent ill-posedness of the problem. By exploring the relationships of the network matrices, a general formula is proposed for the first time to calculate the number of independent measurements. Additionally, the sensitivity distribution is analyzed with FEM. As a result, the quasi-opposite mode, an optimal single-source excitation mode with the advantages of a more uniform sensitivity distribution and more independent measurements, is proposed.

  16. Fusion-neutron-yield, activation measurements at the Z accelerator: design, analysis, and sensitivity.

    PubMed

    Hahn, K D; Cooper, G W; Ruiz, C L; Fehl, D L; Chandler, G A; Knapp, P F; Leeper, R J; Nelson, A J; Smelser, R M; Torres, J A

    2014-04-01

    We present a general methodology to determine the diagnostic sensitivity that is directly applicable to neutron-activation diagnostics fielded on a wide variety of neutron-producing experiments, which include inertial-confinement fusion (ICF), dense plasma focus, and ion beam-driven concepts. This approach includes a combination of several effects: (1) non-isotropic neutron emission; (2) the 1/r² decrease in neutron fluence in the activation material; (3) the spatially distributed neutron scattering, attenuation, and energy losses due to the fielding environment and activation material itself; and (4) temporally varying neutron emission. As an example, we describe the copper-activation diagnostic used to measure secondary deuterium-tritium fusion-neutron yields on ICF experiments conducted on the pulsed-power Z Accelerator at Sandia National Laboratories. Using this methodology along with results from absolute calibrations and Monte Carlo simulations, we find that for the diagnostic configuration on Z, the diagnostic sensitivity is 0.037% ± 17% counts/neutron per cm² and is ∼40% less sensitive than it would be in an ideal geometry due to neutron attenuation, scattering, and energy-loss effects.
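
Effect (2) above is the simple inverse-square geometry term: for an isotropic point source the fluence at the activation sample scales as Y/(4πr²), and the expected counts are that fluence times the calibrated sensitivity. A minimal sketch with illustrative numbers (the yield, distance, and sensitivity below are assumptions, not the Z configuration):

```python
import math

def fluence(yield_n, r_cm):
    # Isotropic point source: neutron fluence (n/cm^2) falls off as 1/r^2.
    return yield_n / (4.0 * math.pi * r_cm ** 2)

# Illustrative numbers only -- not the Z diagnostic's calibration values.
Y = 1.0e12     # total fusion-neutron yield
r = 30.0       # source-to-activation-sample distance (cm)
s = 3.7e-4     # diagnostic sensitivity, counts per unit fluence

expected_counts = s * fluence(Y, r)
```

The anisotropy, scattering, and attenuation effects enumerated in the abstract then enter as correction factors on this idealized estimate.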

  17. Post-seismic relaxation following the 2009 April 6, L'Aquila (Italy), earthquake revealed by the mass position of a broad-band seismometer

    NASA Astrophysics Data System (ADS)

    Pino, Nicola Alessandro

    2012-06-01

    Post-seismic relaxation is known to occur after large or moderate earthquakes, on time scales ranging from days to years or even decades. In general, long-term deformation following seismic events has been detected by means of standard geodetic measurements, whereas seismic instruments are used only to estimate short-timescale transient processes. Although inertial seismic sensors are also sensitive to rotation around their sensitive axes, recording the very slow inclination of the ground surface at their standard output channels is practically impossible, because of their design characteristics. However, modern force-balance, broad-band seismometers provide the possibility to detect and measure slow surface inclination through analysis of the mass position signal. This output channel represents the integral of the broad-band velocity and is generally considered only for state-of-health diagnostics. In fact, the analysis of mass position data recorded at the time of the 2009 April 6, L'Aquila (MW = 6.3) earthquake by a closely located STS-2 seismometer revealed a very low frequency signal starting right at the time of the seismic event. This waveform is visible only on the horizontal components and is not related to the usual drift coupled with temperature changes. The analysis suggests that the observed signal is to be ascribed to slowly developing ground inclination at the station site, caused by post-seismic relaxation following the main shock. The observed tilt reached 1.7 × 10⁻⁵ rad in about 2 months. This estimate is in very good agreement with the geodetic observations, which give comparable tilt magnitude and direction at the same site. This study represents the first seismic analysis of the mass position signal, suggesting useful applications for these usually neglected data.

  18. Detection of Celiac Disease and Lymphocytic Enteropathy by Parallel Serology and Histopathology in a Population-Based Study

    PubMed Central

    Walker, Marjorie M.; Murray, Joseph A.; Ronkainen, Jukka; Aro, Pertti; Storskrubb, Tom; D’Amato, Mauro; Lahr, Brian; Talley, Nicholas J.; Agreus, Lars

    2010-01-01

    Background & Aims Although serological analysis is used in diagnosis of celiac disease, histopathology is considered most reliable. We performed a prospective study to determine the clinical, pathological and serological spectrum of celiac disease in a general population (Kalixanda study). Methods A random sample of an adult general population (n=1000) was analyzed by upper endoscopy, duodenal biopsy, and serological analysis of tissue transglutaminase (tTg) levels; endomysial antibody (EMA) levels were analyzed in samples that were tTg+. The cutoff values for diagnosis of celiac disease were villous atrophy with 40 intraepithelial lymphocytes (IELs)/100 enterocytes (ECs). Results Samples from 33 subjects were tTg+ and 16 were EMA+. Histological analysis identified 7/1000 subjects (0.7%) with celiac disease; all were tTg+ and 6/7 were EMA+. Another 26 subjects were tTg+ (7/26 EMA+). This was addressed by a second quantitative pathology study (nested case-control design), using a threshold of 25 IELs/100 ECs. In this analysis, all 13 samples that were tTg+ and EMA+ had ≥25 IELs/100 ECs. In total, 16 subjects (1.6%) had serological and histological evidence of gluten-sensitive enteropathy. IELs were quantified in duodenal biopsy samples from seronegative individuals (n=500); 19 (3.8%) had >25 IELs and lymphocytic duodenosis (LD). Conclusions Measurement of ≥25 IELs/100 ECs correlated with serological indicators of celiac disease; a higher IEL threshold could miss 50% of cases. Quantification of tTg is a sensitive test for celiac disease; diagnosis can be confirmed by observation of ≥25 IELs/100 ECs in duodenal biopsies. Lymphocytic enteropathy (celiac disease and LD) is common in the population (5.4%). PMID:20398668

  19. Sources of sensitization, cross-reactions, and occupational sensitization to topical anaesthetics among general dermatology patients.

    PubMed

    Jussi, Liippo; Lammintausta, Kaija

    2009-03-01

    Contact sensitization to local anaesthetics is often from topical medicaments. Occupational sensitization to topical anaesthetics may occur in certain occupations. The aim of the study was to analyse the occurrence of contact sensitization to topical anaesthetics in general dermatology patients. Patch testing with topical anaesthetics was carried out in 620 patients. Possible sources of sensitization and the clinical histories of the patients are analysed. Positive patch test reactions to one or more topical anaesthetics were seen in 25/620 patients. Dibucaine reactions were most common (20/25), and lidocaine sensitization was seen in two patients. Six patients had reactions to ester-type and/or amide-type anaesthetics concurrently. Local preparations for perianal conditions were the most common sensitizers. One patient had developed occupational sensitization to procaine with multiple cross-reactions and with concurrent penicillin sensitization from procaine penicillin. Dibucaine-containing perianal medicaments are the major source of contact sensitization to topical anaesthetics. Although sensitization to multiple anaesthetics can be seen, cross-reactions are possible. Contact sensitization to lidocaine is not common, and possible cross-reactions should be determined when reactions to lidocaine are seen. Occupational procaine sensitization from veterinary medicaments is a risk among animal workers.

  20. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis version 6.0 theory manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.

  1. Optimizing Spectral Wave Estimates with Adjoint-Based Sensitivity Maps

    DTIC Science & Technology

    2014-02-18

    J, Orzech MD, Ngodock HE (2013) Validation of a wave data assimilation system based on SWAN. Geophys Res Abst, (15), EGU2013-5951-1, EGU General ...surface wave spectra. Sensitivity maps are generally constructed for a selected system indicator (e.g., vorticity) by computing the differential of...spectral action balance Eq. 2, generally initialized at the off- shore boundary with spectral wave and other outputs from regional models such as

  2. Clinical usefulness of the clock drawing test applying rasch analysis in predicting of cognitive impairment.

    PubMed

    Yoo, Doo Han; Lee, Jae Shin

    2016-07-01

    [Purpose] This study examined the clinical usefulness of the clock drawing test applying Rasch analysis for predicting the level of cognitive impairment. [Subjects and Methods] A total of 187 stroke patients with cognitive impairment were enrolled in this study. The 187 patients were evaluated with the clock drawing test developed through Rasch analysis along with the mini-mental state examination as the cognitive evaluation tool. An analysis of variance was performed to examine the significance of the mini-mental state examination and the clock drawing test according to the general characteristics of the subjects. Receiver operating characteristic analysis was performed to determine the cutoff point for cognitive impairment and to calculate the sensitivity and specificity values. [Results] Comparison of the clock drawing test with the mini-mental state examination showed significant differences according to gender, age, education, and affected side. A total CDT score of 10.5, which was selected as the cutoff point to identify cognitive impairment, showed sensitivity, specificity, Youden index, positive predictive value, and negative predictive value of 86.4%, 91.5%, 0.8, 95%, and 88.2%, respectively. [Conclusion] The clock drawing test is believed to be useful in assessments and interventions based on its excellent ability to identify cognitive disorders.
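
Cutoff selection of this kind typically maximizes the Youden index J = sensitivity + specificity − 1 over candidate thresholds on the ROC curve. A minimal sketch with hypothetical CDT-like scores (not the study's data), where lower scores indicate worse cognition and patients at or below the cutoff are flagged as impaired:

```python
# Hypothetical scores -- illustrative only, not the study data.
impaired   = [4, 6, 7, 8, 9, 10, 10, 10]
unimpaired = [10, 11, 12, 12, 13, 13, 14, 15]

def sens_spec(cutoff):
    tp = sum(s <= cutoff for s in impaired)    # impaired correctly flagged
    tn = sum(s > cutoff for s in unimpaired)   # unimpaired correctly cleared
    return tp / len(impaired), tn / len(unimpaired)

# Youden index J = sensitivity + specificity - 1; the ROC-optimal cutoff
# is the candidate threshold that maximizes J.
best_cutoff = max(range(16), key=lambda c: sum(sens_spec(c)) - 1)
```

With real scores the same scan over thresholds yields the reported operating point (here, cutoff 10.5 with J = 0.8).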

  3. Outlier analysis of functional genomic profiles enriches for oncology targets and enables precision medicine.

    PubMed

    Zhu, Zhou; Ihle, Nathan T; Rejto, Paul A; Zarrinkar, Patrick P

    2016-06-13

    Genome-scale functional genomic screens across large cell line panels provide a rich resource for discovering tumor vulnerabilities that can lead to the next generation of targeted therapies. Their data analysis typically has focused on identifying genes whose knockdown enhances response in various pre-defined genetic contexts, which are limited by biological complexities as well as the incompleteness of our knowledge. We thus introduce a complementary data mining strategy to identify genes with exceptional sensitivity in subsets, or outlier groups, of cell lines, allowing an unbiased analysis without any a priori assumption about the underlying biology of dependency. Genes with outlier features are strongly and specifically enriched with those known to be associated with cancer and relevant biological processes, despite no a priori knowledge being used to drive the analysis. Identification of exceptional responders (outliers) may lead not only to new candidates for therapeutic intervention, but also to tumor indications and response biomarkers for companion precision medicine strategies. Several tumor suppressors have an outlier sensitivity pattern, supporting and generalizing the notion that tumor suppressors can play context-dependent oncogenic roles. The novel application of outlier analysis described here demonstrates a systematic and data-driven analytical strategy to decipher large-scale functional genomic data for oncology target and precision medicine discoveries.

  4. Microgravity isolation system design: A modern control synthesis framework

    NASA Technical Reports Server (NTRS)

    Hampton, R. D.; Knospe, C. R.; Allaire, P. E.; Grodsinsky, C. M.

    1994-01-01

    Manned orbiters will require active vibration isolation for acceleration-sensitive microgravity science experiments. Since umbilicals are highly desirable or even indispensable for many experiments, and since their presence greatly affects the complexity of the isolation problem, they should be considered in control synthesis. In this paper a general framework is presented for applying extended H2 synthesis methods to the three-dimensional microgravity isolation problem. The methodology integrates control and state frequency weighting and input and output disturbance accommodation techniques into the basic H2 synthesis approach. The various system models needed for design and analysis are also presented. The paper concludes with a discussion of a general design philosophy for the microgravity vibration isolation problem.

  6. Evaluation of Uncertainty and Sensitivity in Environmental Modeling at a Radioactive Waste Management Site

    NASA Astrophysics Data System (ADS)

    Stockton, T. B.; Black, P. K.; Catlett, K. M.; Tauxe, J. D.

    2002-05-01

    Environmental modeling is an essential component in the evaluation of regulatory compliance of radioactive waste management sites (RWMSs) at the Nevada Test Site in southern Nevada, USA. For those sites that are currently operating, further goals are to support integrated decision analysis for the development of acceptance criteria for future wastes, as well as site maintenance, closure, and monitoring. At these RWMSs, the principal pathways for release of contamination to the environment are upward towards the ground surface rather than downwards towards the deep water table. Biotic processes, such as burrow excavation and plant uptake and turnover, dominate this upward transport. A combined multi-pathway contaminant transport and risk assessment model was constructed using the GoldSim modeling platform. This platform facilitates probabilistic analysis of environmental systems, and is especially well suited for assessments involving radionuclide decay chains. The model employs probabilistic definitions of key parameters governing contaminant transport, with the goals of quantifying cumulative uncertainty in the estimation of performance measures and providing information necessary to perform sensitivity analyses. This modeling differs from previous radiological performance assessments (PAs) in that the modeling parameters are intended to be representative of the current knowledge, and the uncertainty in that knowledge, of parameter values rather than reflective of a conservative assessment approach. While a conservative PA may be sufficient to demonstrate regulatory compliance, a parametrically honest PA can also be used for more general site decision-making. In particular, a parametrically honest probabilistic modeling approach allows both uncertainty and sensitivity analyses to be explicitly coupled to the decision framework using a single set of model realizations. 
For example, sensitivity analysis provides a guide for analyzing the value of collecting more information by quantifying the relative importance of each input parameter in predicting the model response. However, in these complex, high dimensional eco-system models, represented by the RWMS model, the dynamics of the systems can act in a non-linear manner. Quantitatively assessing the importance of input variables becomes more difficult as the dimensionality, the non-linearities, and the non-monotonicities of the model increase. Methods from data mining such as Multivariate Adaptive Regression Splines (MARS) and the Fourier Amplitude Sensitivity Test (FAST) provide tools that can be used in global sensitivity analysis in these high dimensional, non-linear situations. The enhanced interpretability of model output provided by the quantitative measures estimated by these global sensitivity analysis tools will be demonstrated using the RWMS model.

  7. A Generalized Perturbation Theory Solver In Rattlesnake Based On PETSc With Application To TREAT Steady State Uncertainty Quantification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schunert, Sebastian; Wang, Congjian; Wang, Yaqi

    Rattlesnake and MAMMOTH are the designated TREAT analysis tools currently being developed at the Idaho National Laboratory. Concurrent with development of the multi-physics, multi-scale capabilities, sensitivity analysis and uncertainty quantification (SA/UQ) capabilities are required for predictive modeling of the TREAT reactor. For steady-state SA/UQ, which is essential for setting initial conditions for the transients, generalized perturbation theory (GPT) will be used. This work describes the implementation of a PETSc-based solver for the generalized adjoint equations, which constitute an inhomogeneous, rank-deficient problem. The standard approach is to use an outer iteration strategy with repeated removal of the fundamental mode contamination. The described GPT algorithm directly solves the GPT equations without the need for an outer iteration procedure by using Krylov subspaces that are orthogonal to the operator's nullspace. Three test problems are solved and provide sufficient verification for Rattlesnake's GPT capability. We conclude with a preliminary example evaluating the impact of the Boron distribution in the TREAT reactor using perturbation theory.
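
The deflation idea can be sketched in a few lines of NumPy. Rattlesnake's actual solver is PETSc-based; the toy conjugate-gradient loop below, on a deliberately singular symmetric system, only illustrates keeping the Krylov iterates orthogonal to a known nullspace vector instead of repeatedly subtracting the contamination in an outer iteration:

```python
import numpy as np

rng = np.random.default_rng(1)

# A singular, symmetric positive semi-definite operator whose nullspace
# (a stand-in for the fundamental mode) is known: span{n}.
n = np.ones(4) / 2.0                      # unit nullspace vector
Q = np.eye(4) - np.outer(n, n)            # orthogonal projector onto range(A)
M = rng.standard_normal((4, 4))
A = Q @ (M @ M.T + 4.0 * np.eye(4)) @ Q   # SPD on the complement of n

b = Q @ rng.standard_normal(4)            # consistent (range-space) RHS

# Conjugate gradients restricted to the complement of the nullspace:
# starting from zero with a consistent RHS, every Krylov vector stays
# orthogonal to n; the extra projection only guards against round-off.
x = np.zeros(4)
r = b.copy()
p = r.copy()
for _ in range(20):
    if r @ r < 1e-24:
        break
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    x = Q @ (x + alpha * p)               # re-project to suppress drift
    r_new = r - alpha * Ap
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new
```

The solve terminates with A x = b and x ⟂ n, i.e. the unique range-space solution, with no outer fundamental-mode removal step.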

  8. The roles of effective communication and client engagement in delivering culturally sensitive care to immigrant parents of children with disabilities.

    PubMed

    King, Gillian; Desmarais, Chantal; Lindsay, Sally; Piérart, Geneviève; Tétreault, Sylvie

    2015-01-01

    Delivering pediatric rehabilitation services to immigrant parents of children with disabilities requires the practice of culturally sensitive care. Few studies have examined the specific nature of culturally sensitive care in pediatric rehabilitation, especially the notions of effective communication and client engagement. Interviews were held with 42 therapists (10 social workers, 16 occupational therapists and 16 speech language pathologists) from two locations in Canada (Toronto and Quebec City). Data were analyzed using an inductive content analysis approach. Study themes included the importance and nature of effective communication and client engagement in service delivery involving immigrant parents. Participants discussed using four main types of strategies to engage immigrant parents, including understanding the family situation, building a collaborative relationship, tailoring practice to the client's situation and ensuring parents' understanding of therapy procedures. The findings illuminate the importance of effective, two-way communication in providing the mutual understanding needed by therapists to engage parents in the intervention process. The findings also richly describe the engagement strategies used by therapists. Clinical implications include recommendations for strategies for therapists to employ to engage this group of parents. Furthermore, the findings are applicable to service provision in general, as engaging families in a collaborative relationship through attention to their specific situation is a general principle of good quality, family-centered care. Implications for Rehabilitation: Effective communication permeates the delivery of culturally sensitive care and provides mutual understanding, which is fundamental to client engagement. The findings illuminate the nature of "partnership" by indicating the role of collaborative therapist strategies in facilitating engagement. 
Four main strategies facilitate effective communication and client engagement, including understanding the family situation, building a collaborative relationship, tailoring practice to the client's situation and ensuring parents' understanding of therapy procedures. Engaging families in a collaborative relationship through attention to their specific situation is a general principle of good quality, family-centered care.

  9. Comparison of methods used to diagnose generalized inflammatory disease in manatees (Trichechus manatus latirostris)

    USGS Publications Warehouse

    Harr, K.E.; Harvey, J.W.; Bonde, R.K.; Murphy, D.; Lowe, Mark; Menchaca, M.; Haubold, E.M.; Francis-Floyd, R.

    2006-01-01

    Manatees (Trichechus manatus latirostris) are afflicted with inflammatory and infectious disease secondary to human interaction, such as boat strike and entanglement, as well as “cold stress syndrome” and pneumonia. White-blood-cell count and fever, primary indicators of systemic inflammation in most species, are insensitive in diagnosing inflammatory disease in manatees. Acute phase-response proteins, such as haptoglobin and serum amyloid A, have proven to be sensitive measures of inflammation/infection in domestic large animal species. This study assessed diagnosis of generalized inflammatory disease by different methods including total white-blood-cell count, albumin:globulin ratio, gel electrophoresis analysis, C-reactive protein, alpha1-acid glycoprotein, haptoglobin, fibrinogen, and serum amyloid A. Samples were collected from 71 apparently healthy and 27 diseased animals during diagnostic medical examination. Serum amyloid A, measured by ELISA, followed by albumin:globulin ratio, measured by plasma gel electrophoresis, were most sensitive in diagnosing inflammatory disease, with diagnostic sensitivity and specificity of approximately 90%. The reference interval for serum amyloid A is <10–50 μg/ml with an equivocal interval of 51–70 μg/ml. The reference interval for albumin:globulin ratio by plasma gel electrophoresis is 0.7–1.1. Albumin:globulin ratio, calculated using biochemical techniques, was not accurate due to overestimation of albumin by bromcresol green dye-binding methodology. Albumin:globulin ratio, measured by serum gel electrophoresis, has a low sensitivity of 15% due to the lack of fibrinogen in the sample. Haptoglobin, measured by hemoglobin titration, had a reference interval of 0.4–2.4 mg/ml, a diagnostic sensitivity of 60%, and a diagnostic specificity of 93%. The haptoglobin assay is significantly affected by hemolysis. 
Fibrinogen, measured by heat precipitation, has a reference interval of 100–400 mg/dl, a diagnostic sensitivity of 40%, and a diagnostic specificity of 95%.

  10. Comparison of methods used to diagnose generalized inflammatory disease in manatees (Trichechus manatus latirostris).

    PubMed

    Harr, Kendal; Harvey, John; Bonde, Robert; Murphy, David; Lowe, Mark; Menchaca, Maya; Haubold, Elsa; Francis-Floyd, Ruth

    2006-06-01

    Manatees (Trichechus manatus latirostris) are afflicted with inflammatory and infectious disease secondary to human interaction, such as boat strike and entanglement, as well as "cold stress syndrome" and pneumonia. White-blood-cell count and fever, primary indicators of systemic inflammation in most species, are insensitive in diagnosing inflammatory disease in manatees. Acute phase-response proteins, such as haptoglobin and serum amyloid A, have proven to be sensitive measures of inflammation/infection in domestic large animal species. This study assessed diagnosis of generalized inflammatory disease by different methods including total white-blood-cell count, albumin:globulin ratio, gel electrophoresis analysis, C-reactive protein, alpha1-acid glycoprotein, haptoglobin, fibrinogen, and serum amyloid A. Samples were collected from 71 apparently healthy and 27 diseased animals during diagnostic medical examination. Serum amyloid A, measured by ELISA, followed by albumin:globulin ratio, measured by plasma gel electrophoresis, were most sensitive in diagnosing inflammatory disease, with diagnostic sensitivity and specificity of approximately 90%. The reference interval for serum amyloid A is <10-50 microg/ml with an equivocal interval of 51-70 microg/ml. The reference interval for albumin:globulin ratio by plasma gel electrophoresis is 0.7-1.1. Albumin:globulin ratio, calculated using biochemical techniques, was not accurate due to overestimation of albumin by bromcresol green dye-binding methodology. Albumin:globulin ratio, measured by serum gel electrophoresis, has a low sensitivity of 15% due to the lack of fibrinogen in the sample. Haptoglobin, measured by hemoglobin titration, had a reference interval of 0.4-2.4 mg/ml, a diagnostic sensitivity of 60%, and a diagnostic specificity of 93%. The haptoglobin assay is significantly affected by hemolysis. 
Fibrinogen, measured by heat precipitation, has a reference interval of 100-400 mg/dl, a diagnostic sensitivity of 40%, and a diagnostic specificity of 95%.
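    Diagnostic sensitivity and specificity figures like those above come from a standard 2x2 confusion table. A minimal sketch; the counts below are hypothetical illustrations chosen to be roughly consistent with the reported group sizes (71 healthy, 27 diseased) and ~90% summary figures, since the study's raw table is not reproduced here:

```python
def diagnostic_performance(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from a 2x2 confusion table."""
    sensitivity = tp / (tp + fn)   # true-positive rate among diseased animals
    specificity = tn / (tn + fp)   # true-negative rate among healthy animals
    return sensitivity, specificity

# Hypothetical counts (27 diseased, 71 healthy), not the study's actual data:
sens, spec = diagnostic_performance(tp=24, fn=3, tn=64, fp=7)
```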

  11. Probabilistic Structural Analysis Program

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Chamis, Christos C.; Murthy, Pappu L. N.; Stefko, George L.; Riha, David S.; Thacker, Ben H.; Nagpal, Vinod K.; Mital, Subodh K.

    2010-01-01

    NASA/NESSUS 6.2c is a general-purpose, probabilistic analysis program that computes probability of failure and probabilistic sensitivity measures of engineered systems. Because NASA/NESSUS uses highly computationally efficient and accurate analysis techniques, probabilistic solutions can be obtained even for extremely large and complex models. Once the probabilistic response is quantified, the results can be used to support risk-informed decisions regarding reliability for safety-critical and one-of-a-kind systems, as well as for maintaining a level of quality while reducing manufacturing costs for larger-quantity products. NASA/NESSUS has been successfully applied to a diverse range of problems in aerospace, gas turbine engines, biomechanics, pipelines, defense, weaponry, and infrastructure. This program combines state-of-the-art probabilistic algorithms with general-purpose structural analysis and life-prediction methods to compute the probabilistic response and reliability of engineered structures. Uncertainties in load, material properties, geometry, boundary conditions, and initial conditions can be simulated. The structural analysis methods include non-linear finite-element methods, heat-transfer analysis, polymer/ceramic matrix composite analysis, monolithic (conventional metallic) materials life-prediction methodologies, boundary element methods, and user-written subroutines. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. NASA/NESSUS 6.2c is structured in a modular format with 15 elements.

  12. Systematic Sensitivity Analysis of Metabolic Controllers During Reductions in Skeletal Muscle Blood Flow

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Cabrera, Marco

    2000-01-01

    An acute reduction in oxygen delivery to skeletal muscle is generally associated with profound derangements in substrate metabolism. Given the complexity of the human bioenergetic system and its components, it is difficult to quantify the interaction of cellular metabolic processes to maintain ATP homeostasis during stress (e.g., hypoxia, ischemia, and exercise). Of special interest is the determination of mechanisms relating tissue oxygenation to observed metabolic responses at the tissue, organ, and whole body levels and the quantification of how changes in oxygen availability affect the pathways of ATP synthesis and their regulation. In this study, we apply a previously developed mathematical model of human bioenergetics to study effects of ischemia during periods of increased ATP turnover (e.g., exercise). By using systematic sensitivity analysis the oxidative phosphorylation rate was found to be the most important rate parameter affecting lactate production during ischemia under resting conditions. Here we examine whether mild exercise under ischemic conditions alters the relative importance of pathways and parameters previously obtained.

  13. Small Crack Growth and Its Influence in Near Alpha-Titanium Alloys

    DTIC Science & Technology

    1989-06-01

    geometries via finite element and boundary-collocation analysis. Elastic plastic fracture mechanics (EPFM) and local crack tip field...correlation was found between experimental and predicted data, general application of the model is not possible as both 0 and rp are sensitive to changes in...cracks at low ΔK the load reduction schemes should be altered to remove the residual deformations, perhaps via machining or the application of large

  14. Lichens as bioindicators of air quality. Forest Service general technical report (Final)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stolte, K.; Doty, R.; Mangis, D.

    1993-03-01

    The report is the result of a workshop held in Denver, Colorado on April 9-11, 1991. It summarizes the current literature and techniques for using lichens to monitor air quality. Experts in lichenology and ecology contributed information on lichen floristics, characterization of monitoring sites, lichen species and communities, identifying lichen species sensitive to pollutants, active monitoring with transplants, chemical analysis of lichens, and case studies as examples of lichen biomonitoring scenarios.

  15. A Review of Positive Ion Sensitivities for the SIMS Analysis of CMT

    DTIC Science & Technology

    1991-05-01

    microprobe. Inter-laboratory exercises organised by NRL using standardised glasses and steels showed considerable agreement, usually within a factor...would be sufficient oxygen to convert all the remaining matrix atoms to oxides, TeO2 and CdO. Any general theory of the ionisation of sputtered particles...Eggert equation which works well for many other matrices, such as metals, glasses and ceramics. Despite decades of basic studies there is still no

  16. White Light Optical Processing and Holography.

    DTIC Science & Technology

    1982-10-01

    of the object beam. The major problem in image deblurring is noise in the deblurred image. There are two kinds of noise: (a) false images. The...reducing the noise; this work is described in Sec. 3.2. We addressed the bias buildup and SNR in incoherent optical processing, making an analysis that...system is generally better than the coherent for SNR. Thus, if we have a sensitive, low-noise detector at the output of an incoherent system, we should

  17. Pain Sensitivity Risk Factors for Chronic TMD: Descriptive Data and Empirically Identified Domains from the OPPERA Case Control Study

    PubMed Central

    Greenspan, Joel D.; Slade, Gary D.; Bair, Eric; Dubner, Ronald; Fillingim, Roger B.; Ohrbach, Richard; Knott, Charlie; Mulkey, Flora; Rothwell, Rebecca; Maixner, William

    2011-01-01

    Many studies report that people with temporomandibular disorders (TMD) are more sensitive to experimental pain stimuli than TMD-free controls. Such differences in sensitivity are observed in remote body sites as well as in the orofacial region, suggesting a generalized upregulation of nociceptive processing in TMD cases. This large case-control study of 185 adults with TMD and 1,633 TMD-free controls measured sensitivity to painful pressure, mechanical cutaneous, and heat stimuli, using multiple testing protocols. Of an unprecedented 36 experimental pain measures, 28 showed statistically significantly greater pain sensitivity in TMD cases than controls. The largest effects were seen for pressure pain thresholds at multiple body sites and cutaneous mechanical pain threshold. The other mechanical cutaneous pain measures and many of the heat pain measures showed significant differences, but with lesser effect sizes. Principal component analysis (PCA) of the pain measures derived from 1,633 controls identified five components labeled: (1) heat pain ratings, (2) heat pain aftersensations and tolerance, (3) mechanical cutaneous pain sensitivity, (4) pressure pain thresholds, and (5) heat pain temporal summation. These results demonstrate that, compared to TMD-free controls, chronic TMD cases are more sensitive to many experimental noxious stimuli at extra-cranial body sites, and provide for the first time the ability to directly compare the case-control effect sizes of a wide range of pain sensitivity measures. PMID:22074753

  18. Sensitivity of transitions in internal rotor molecules to a possible variation of the proton-to-electron mass ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jansen, Paul; Ubachs, Wim; Bethlem, Hendrick L.

    2011-12-15

    Recently, methanol was identified as a sensitive target system to probe variations of the proton-to-electron mass ratio μ [Jansen et al., Phys. Rev. Lett. 106, 100801 (2011)]. The high sensitivity of methanol originates from the interplay between overall rotation and hindered internal rotation of the molecule; that is, transitions that convert internal rotation energy into overall rotation energy, or vice versa, have an enhanced sensitivity coefficient, Kμ. As internal rotation is a common phenomenon in polyatomic molecules, it is likely that other molecules display similar or even larger effects. In this paper we generalize the concepts that form the foundation of the high sensitivity in methanol and use this to construct an approximate model which makes it possible to estimate the sensitivities of transitions in internal rotor molecules with C3v symmetry, without performing a full calculation of energy levels. We find that a reliable estimate of transition sensitivities can be obtained from the three rotational constants (A, B, and C) and three torsional constants (F, V3, and ρ). This model is verified by comparing obtained sensitivities for methanol, acetaldehyde, acetamide, methyl formate, and acetic acid with a full analysis of the molecular Hamiltonian. Of the molecules considered, methanol is by far the most suitable candidate for laboratory and cosmological tests searching for a possible variation of μ.

  19. Prostate Cancer Information Available in Health-Care Provider Offices: An Analysis of Content, Readability, and Cultural Sensitivity.

    PubMed

    Choi, Seul Ki; Seel, Jessica S; Yelton, Brooks; Steck, Susan E; McCormick, Douglas P; Payne, Johnny; Minter, Anthony; Deutchki, Elizabeth K; Hébert, James R; Friedman, Daniela B

    2018-07-01

    Prostate cancer (PrCA) is the most common cancer affecting men in the United States, and African American men have the highest incidence among men in the United States. Little is known about the PrCA-related educational materials being provided to patients in health-care settings. Content, readability, and cultural sensitivity of materials available in providers' practices in South Carolina were examined. A total of 44 educational materials about PrCA and associated sexual dysfunction was collected from 16 general and specialty practices. The content of the materials was coded, and cultural sensitivity was assessed using the Cultural Sensitivity Assessment Tool. Flesch Reading Ease, Flesch-Kincaid Grade Level, and the Simple Measure of Gobbledygook were used to assess readability. Communication with health-care providers (52.3%), side effects of PrCA treatment (40.9%), sexual dysfunction and its treatment (38.6%), and treatment options (34.1%) were frequently presented. All materials had acceptable cultural sensitivity scores; however, 2.3% and 15.9% of materials demonstrated unacceptable cultural sensitivity regarding format and visual messages, respectively. Readability of the materials varied. More than half of the materials were written above a high-school reading level. PrCA-related materials available in health-care practices may not meet patients' needs regarding content, cultural sensitivity, and readability. A wide range of educational materials that address various aspects of PrCA, including treatment options and side effects, should be presented in plain language and be culturally sensitive.
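    The two Flesch readability measures named in this record are simple closed-form functions of word, sentence, and syllable counts. A sketch using the standard published coefficients (syllable counting itself is the hard part and is omitted; the example counts are invented):

```python
def flesch_reading_ease(words, sentences, syllables):
    """Standard Flesch Reading Ease formula (higher = easier to read)."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    """Standard Flesch-Kincaid Grade Level formula (U.S. school grade)."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Invented counts for a 100-word passage:
fre = flesch_reading_ease(100, 5, 170)     # "difficult" band
fkg = flesch_kincaid_grade(100, 5, 170)    # above high-school level
```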

  20. Assessment of cross-reactivity among five species of house dust and storage mites.

    PubMed

    Saridomichelakis, Manolis N; Marsella, Rosanna; Lee, Kenneth W; Esch, Robert E; Farmaki, Rania; Koutinas, Alexander F

    2008-04-01

    In vitro cross-reactivity among two house dust (Dermatophagoides farinae, D. pteronyssinus) and three storage (Acarus siro, Tyrophagus putrescentiae, Lepidoglyphus destructor) mites was examined in 20 mite-sensitive dogs with naturally occurring atopic dermatitis (group A), 13 high-IgE beagles experimentally sensitized to D. farinae (group B), and five healthy beagles (group C). Intradermal testing (IDT) and serology for allergen-specific IgE demonstrated that co-sensitization for all possible pairs of the five mites was generally 45% or higher among group A dogs. In the same dogs, enzyme-linked immunosorbent assay cross-inhibition results indicated that each one of D. farinae, A. siro and T. putrescentiae was a strong inhibitor of all the remaining mites, whereas D. pteronyssinus was a strong inhibitor of L. destructor. A high number of positive IDT and serology test results for D. pteronyssinus, A. siro, T. putrescentiae and L. destructor were recorded among group B dogs. No conclusive evidence of exposure to these mites was found upon analysis of dust samples from their environment and their food for the presence of mites and guanine. Also, the number of positive test results was generally higher among group B than among group C dogs. Enzyme-linked immunosorbent assay cross-inhibition revealed that D. farinae was a strong inhibitor of D. pteronyssinus, A. siro and T. putrescentiae. Collectively, these results demonstrated extensive in vitro cross-reactivity among house dust and/or storage mites that can explain false-positive results upon testing of dust mite-sensitive dogs with atopic dermatitis.

  1. Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations

    NASA Technical Reports Server (NTRS)

    Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.

    2017-01-01

    A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
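    The complex-variable approach used above for the DYMORE structural sensitivities is commonly the complex-step derivative method: perturb the input along the imaginary axis and read the derivative from the imaginary part of the output. A minimal generic sketch (not the FUN3D/DYMORE implementation):

```python
import cmath

def complex_step_derivative(f, x, h=1e-30):
    """Complex-step approximation df/dx ~= Im(f(x + ih)) / h.
    Unlike finite differences there is no subtractive cancellation,
    so h can be tiny and the result is accurate to machine precision."""
    return f(complex(x, h)).imag / h

# Example: d/dx [x^3 * e^x] at x = 1 equals (3x^2 + x^3) e^x = 4e
d = complex_step_derivative(lambda z: z**3 * cmath.exp(z), 1.0)
```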

  2. Health-related quality of life in end-stage COPD and lung cancer patients.

    PubMed

    Habraken, Jolanda M; ter Riet, Gerben; Gore, Justin M; Greenstone, Michael A; Weersink, Els J M; Bindels, Patrick J E; Willems, Dick L

    2009-06-01

    Historically, palliative care has been developed for cancer patients and is not yet generally available for patients suffering from chronic life-limiting illnesses, such as chronic obstructive pulmonary disease (COPD). To examine whether COPD patients experience similar or worse disease burden in comparison with non-small cell lung cancer (NSCLC) patients, we compared the health-related quality of life (HRQOL) scores of severe COPD patients with those of advanced NSCLC patients. We also formally updated previous evidence in this area provided by a landmark study published by Gore et al. in 2000. In updating this previous evidence, we addressed the methodological limitations of this study and a number of confounding variables. Eighty-two GOLD IV COPD patients and 19 Stage IIIb or IV NSCLC patients completed generic and disease-specific HRQOL questionnaires. We used an individual patient data meta-analysis to integrate the new and existing evidence (total n=201). Finally, to enhance between-group comparability, we performed a sensitivity analysis using a subgroup of patients with a similar degree of "terminality," namely those who had died within one year after study entry. Considerable differences in HRQOL were found for physical functioning, social functioning, mental health, general health perceptions, dyspnea, activities of daily living, and depression. All differences favored the NSCLC patients. The sensitivity analysis, using only terminal NSCLC and COPD patients, confirmed these findings. In conclusion, end-stage COPD patients experience poor HRQOL comparable to or worse than that of advanced NSCLC patients. We discuss these findings in the light of the notion that these COPD patients may have a similar need for palliative care.

  3. Screening for alcohol use disorders and at-risk drinking in the general population: psychometric performance of three questionnaires.

    PubMed

    Rumpf, Hans-Jürgen; Hapke, Ulfert; Meyer, Christian; John, Ulrich

    2002-01-01

    Most screening questionnaires are developed in clinical settings and there are few data on their performance in the general population. This study provides data on the area under the receiver-operating characteristic (ROC) curve, sensitivity, specificity, and internal consistency of the Alcohol Use Disorders Identification Test (AUDIT), the consumption questions of the AUDIT (AUDIT-C) and the Lübeck Alcohol Dependence and Abuse Screening Test (LAST) among current drinkers (n = 3551) of a general population sample in northern Germany. Alcohol dependence and misuse according to DSM-IV and at-risk drinking served as gold standards to assess sensitivity and specificity and were assessed with the Munich-Composite Diagnostic Interview (M-CIDI). AUDIT and LAST showed insufficient sensitivity for at-risk drinking and alcohol misuse using standard cut-off scores, but satisfactory detection rates for alcohol dependence. The AUDIT-C showed low specificity in all criterion groups with standard cut-off. Adjusted cut-points are recommended. Among a subsample of individuals with previous general hospital admission in the last year, all questionnaires showed higher internal consistency suggesting lower reliability in non-clinical samples. In logistic regression analyses, having had a hospital admission increased the sensitivity in detecting any criterion group of the LAST, and the number of recent general practice visits increased the sensitivity of the AUDIT in detecting alcohol misuse. Women showed lower scores and larger areas under the ROC curves. It is concluded that setting specific instruments (e.g. primary care or general population) or adjusted cut-offs should be used.
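    The ROC-curve quantities reported for these questionnaires can be sketched generically: AUC as the probability that a randomly chosen case scores above a randomly chosen non-case (ties counting one half), plus sensitivity/specificity at a chosen cut-off. The scores below are invented toy values, not study data:

```python
def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve via the rank-statistic definition."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

def sens_spec_at_cutoff(scores_pos, scores_neg, cutoff):
    """Sensitivity and specificity when scores >= cutoff are called positive."""
    sens = sum(s >= cutoff for s in scores_pos) / len(scores_pos)
    spec = sum(s < cutoff for s in scores_neg) / len(scores_neg)
    return sens, spec

# Toy questionnaire scores for cases vs. non-cases:
auc = roc_auc([9, 11, 14], [3, 5, 9])
sens, spec = sens_spec_at_cutoff([9, 11, 14], [3, 5, 9], cutoff=8)
```

Lowering the cut-off raises sensitivity at the cost of specificity, which is the trade-off behind the adjusted cut-points the study recommends.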

  4. Coronary arteriography in a district general hospital: feasibility, safety, and diagnostic accuracy.

    PubMed Central

    Ranjadayalan, K; Mills, P G; Sprigings, D C; Mourad, K; Magee, P; Timmis, A D

    1990-01-01

    OBJECTIVE--To determine the feasibility, safety, and diagnostic accuracy of coronary arteriography in the radiology department of a district general hospital using conventional fluoroscopy and videotape recording. DESIGN--Observational study of the feasibility and safety of coronary arteriography in a district general hospital and analysis of its diagnostic accuracy by prospective within patient comparison of the video recordings with cinearteriograms obtained in a catheter laboratory. SETTING--Radiology department of a district general hospital and the catheter laboratory of a cardiological referral centre. SUBJECTS--50 Patients with acute myocardial infarction treated with streptokinase who underwent coronary arteriography in a district general hospital three (two to five) days after admission. 45 Of these patients had repeat coronary arteriography after four (three to seven) days in the catheter laboratory of a cardiological referral centre. MAIN OUTCOME MEASURES--Incidence of complications associated with catheterisation and the sensitivity and specificity of video recordings in the district general hospital (judged by two experienced observers) for identifying the location and severity of coronary stenoses. RESULTS--Coronary arteriograms recorded on videotape in the district general hospital were obtained in 47 cases and apart from one episode of ventricular fibrillation (treated successfully by cardioversion) there were no complications of the procedure. 45 Patients were transferred for investigation in the catheter laboratory, providing 45 paired coronary arteriograms recorded on videotape and cine film. The specificity of the video recordings for identifying the location and severity of coronary stenoses was over 90%. Sensitivity, however, was lower and for one observer fell below 40% for lesions in the circumflex artery. A cardiothoracic surgeon judged that only nine of the 47 video recordings were adequate for assessing revascularisation requirements. 
CONCLUSIONS--Coronary arteriography in the radiology department of a district general hospital is safe and feasible. Nevertheless, the quality of image with conventional fluoroscopy and video film is inadequate and will need to be improved before coronary arteriography in this setting can be recommended. PMID:2182164

  5. Selected elements in major minerals from bituminous coal as determined by INAA: Implications for removing environmentally sensitive elements from coal

    USGS Publications Warehouse

    Palmer, C.A.; Lyons, P.C.

    1996-01-01

    The four most abundant minerals generally found in Euramerican bituminous coals are quartz, kaolinite, illite and pyrite. These four minerals were isolated by density separation and handpicking from bituminous coal samples collected in the Ruhr Basin, Germany and the Appalachian basin, U.S.A. Trace-element concentrations of relatively pure (∼99+%) separates of major minerals from these coals were determined directly by using instrumental neutron activation analysis (INAA). As expected, quartz contributes little to the trace-element mass balance. Illite generally has higher trace-element concentrations than kaolinite, but, for the concentrates analyzed in this study, Hf, Ta, W, Th and U are in lower concentrations in illite than in kaolinite. Pyrite has higher concentrations of chalcophile elements (e.g., As and Se) and is considerably lower in lithophile elements as compared to kaolinite and illite. Our study provides a direct and sensitive method of determining trace-element relationships with minerals in coal. Mass-balance calculations suggest that the trace-element content of coal can be explained mainly by three major minerals: pyrite, kaolinite and illite. This conclusion indicates that the size and textural relationships of these major coal minerals may be a more important consideration as to whether coal cleaning can effectively remove the most environmentally sensitive trace elements in coal than what trace minerals are present.

  6. Clinical signs of early osteoarthritis: reproducibility and relation to x ray changes in 541 women in the general population.

    PubMed Central

    Hart, D J; Spector, T D; Brown, P; Wilson, P; Doyle, D V; Silman, A J

    1991-01-01

    The definition and classification of early clinically apparent osteoarthritis both in clinical situations and in epidemiological surveys remains a problem. Few data exist on the between-observer reproducibility of simple clinical methods of detecting hand and knee osteoarthritis in the population and their sensitivity and specificity as compared with radiography. Two observers first studied the reproducibility of a number of clinical signs in 41 middle aged women. Good rates of agreement were found for most of the clinical signs tested (kappa = 0.54-1.0). The more reproducible signs were then tested on a population of 541 women, aged 45-65, drawn from general practice, screening centres, and patients previously attending hospital for non-rheumatic problems. The major clinical signs used had a high specificity (87-99%) and lower sensitivity (20-49%) when compared with radiographs graded on the Kellgren and Lawrence scale (2+ = positive). When analysis was restricted to symptomatic radiographic osteoarthritis, levels of sensitivity were increased and specificity was lowered. These data show that certain physical signs of osteoarthritis are reproducible and may be used to identify clinical disease. They are not a substitute for radiographs, however, if radiographic change is regarded as the 'gold standard' of diagnosis. As the clinical signs tested seemed specific for osteoarthritis they may be of value in screening populations for clinical disease. PMID:1877852

  7. Helicopter gust response characteristics including unsteady aerodynamic stall effects

    NASA Technical Reports Server (NTRS)

    Arcidiacono, P. J.; Bergquist, R. R.; Alexander, W. T., Jr.

    1974-01-01

    The results of an analytical study to evaluate the general response characteristics of a helicopter subjected to various types of discrete gust encounters are presented. The analysis employed was a nonlinear, coupled, multi-blade rotor-fuselage analysis including the effects of blade flexibility and unsteady aerodynamic stall. Only the controls-fixed response of the basic aircraft without any aircraft stability augmentation was considered. A discussion of the basic differences between gust sensitivity of fixed and rotary wing aircraft is presented. The effects of several rotor configuration and aircraft operating parameters on initial gust-induced load factor and blade vibratory stress and pushrod loads are discussed.

  8. Grid sensitivity capability for large scale structures

    NASA Technical Reports Server (NTRS)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.

  9. Mean-intercept anisotropy analysis of porous media. II. Conceptual shortcomings of the MIL tensor definition and Minkowski tensors as an alternative.

    PubMed

    Klatt, Michael A; Schröder-Turk, Gerd E; Mecke, Klaus

    2017-07-01

    Structure-property relations, which relate the shape of the microstructure to physical properties such as transport or mechanical properties, need sensitive measures of structure. What are suitable fabric tensors to quantify the shape of anisotropic heterogeneous materials? The mean intercept length is among the most commonly used characteristics of anisotropy in porous media, e.g., of trabecular bone in medical physics. Yet, in this series of two papers we demonstrate that it has conceptual shortcomings that limit the validity of its results. We test the validity of general assumptions regarding the properties of the mean-intercept length tensor using analytical formulas for the mean-intercept lengths in anisotropic Boolean models (derived in part I of this series), augmented by numerical simulations. We discuss in detail the functional form of the mean intercept length as a function of the test line orientations. As the most prominent result, we find that, at least for the example of overlapping grains modeling porous media, the polar plot of the mean intercept length is in general not an ellipse and hence not represented by a second-rank tensor. This is in stark contrast to the common understanding that for a large collection of grains the mean intercept length figure averages to an ellipse. The standard mean intercept length tensor defined by a least-square fit of an ellipse is based on a model mismatch, which causes an intrinsic lack of accuracy. Our analysis reveals several shortcomings of the mean intercept length tensor analysis that pose conceptual problems and limitations on the information content of this commonly used analysis method. We suggest the Minkowski tensors from integral geometry as alternative sensitive measures of anisotropy. The Minkowski tensors allow for a robust, comprehensive, and systematic approach to quantify various aspects of structural anisotropy. 
We show the Minkowski tensors to be more sensitive, in the sense that they can quantify the remnant anisotropy of structures not captured by the mean intercept length analysis. If applied to porous tissue and microstructures, this improved structure characterization can yield new insights into the relationships between geometry and material properties. © 2017 American Association of Physicists in Medicine.

  10. Protectiveness of species sensitivity distribution hazard concentrations for acute toxicity used in endangered species risk assessment.

    PubMed

    Raimondo, Sandy; Vivian, Deborah N; Delos, Charles; Barron, Mace G

    2008-12-01

    A primary objective of threatened and endangered species conservation is to ensure that chemical contaminants and other stressors do not adversely affect listed species. Assessments of the ecological risks of chemical exposures to listed species often rely on the use of surrogate species, safety factors, and species sensitivity distributions (SSDs) of chemical toxicity; however, the protectiveness of these approaches can be uncertain. We comprehensively evaluated the protectiveness of SSD first and fifth percentile hazard concentrations (HC1, HC5) relative to the application of safety factors using 68 SSDs generated from 1,482 acute (median lethal concentration, LC50) toxicity records for 291 species, including 24 endangered species (20 fish, four mussels). The SSD HC5s and HC1s were lower than 97 and 99.5% of all endangered species mean acute LC50s, respectively. The HC5s were significantly less than the concentrations derived from applying safety factors of 5 and 10 to rainbow trout (Oncorhynchus mykiss) toxicity data, and the HC1s were generally lower than the concentrations derived from a safety factor of 100 applied to rainbow trout toxicity values. Comparison of relative sensitivity (SSD percentiles) of broad taxonomic groups showed that crustaceans were generally the most sensitive taxa and taxa sensitivity was related to chemical mechanism of action. Comparison of relative sensitivity of narrow fish taxonomic groups showed that standard test fish species were generally less sensitive than salmonids and listed fish. We recommend the use of SSDs as a distribution-based risk assessment approach that is generally protective of listed species.
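    An SSD hazard concentration such as the HC5 is commonly obtained by fitting a log-normal distribution to species mean acute LC50s and inverting its CDF at the 5th percentile. A minimal sketch with invented LC50 values (the paper's dataset is not reproduced here):

```python
from math import exp, log
from statistics import NormalDist, mean, stdev

def hazard_concentration(lc50s, percentile=0.05):
    """Hazard concentration from a log-normal species sensitivity
    distribution: fit a normal to log-transformed LC50s, invert its CDF."""
    logs = [log(x) for x in lc50s]
    return exp(NormalDist(mean(logs), stdev(logs)).inv_cdf(percentile))

# Hypothetical acute LC50s (mg/L) for a handful of species:
lc50s = [0.5, 1.2, 3.0, 8.0, 20.0, 45.0]
hc5 = hazard_concentration(lc50s)   # below the most sensitive tested species
```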

  11. Ignoring correlation in uncertainty and sensitivity analysis in life cycle assessment: what is the risk?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groen, E.A., E-mail: Evelyne.Groen@gmail.com; Heijungs, R.; Leiden University, Einsteinweg 2, Leiden 2333 CC

Life cycle assessment (LCA) is an established tool to quantify the environmental impact of a product. A good assessment of uncertainty is important for making well-informed decisions in comparative LCA, as well as for correctly prioritising data collection efforts. Under- or overestimation of output uncertainty (e.g. output variance) will lead to incorrect decisions in such matters. The presence of correlations between input parameters during uncertainty propagation can increase or decrease the output variance. However, most LCA studies that include uncertainty analysis ignore correlations between input parameters during uncertainty propagation, which may lead to incorrect conclusions. Two approaches to include correlations between input parameters during uncertainty propagation and global sensitivity analysis were studied: an analytical approach and a sampling approach. The use of both approaches is illustrated for an artificial case study of electricity production. Results demonstrate that both approaches yield approximately the same output variance and sensitivity indices for this specific case study. Furthermore, we demonstrate that the analytical approach can be used to quantify the risk of ignoring correlations between input parameters during uncertainty propagation in LCA. We demonstrate that: (1) we can predict whether including correlations among input parameters in uncertainty propagation will increase or decrease output variance; (2) we can quantify the risk of ignoring correlations on the output variance and the global sensitivity indices. Moreover, this procedure requires little data. - Highlights: • Ignoring correlation leads to under- or overestimation of the output variance. • We demonstrated that the risk of ignoring correlation can be quantified. • The procedure proposed is generally applicable in life cycle assessment. • In some cases, ignoring correlation has a minimal effect on decision-making tools.
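
The analytical approach mentioned here is typically first-order (Taylor) error propagation, in which correlated inputs contribute an extra covariance term to the output variance. A minimal sketch with assumed numbers for a two-parameter model y = x1·x2 (say, an activity amount times an emission factor):

```python
import math

# Assumed input statistics (illustrative, not from the case study).
m1, m2 = 2.0, 3.0          # means of x1, x2
v1, v2 = 0.04, 0.09        # variances of x1, x2
rho = 0.8                  # assumed correlation between x1 and x2
cov = rho * math.sqrt(v1 * v2)

# Gradients of y = x1 * x2 evaluated at the means.
g1, g2 = m2, m1

# First-order Taylor propagation, without and with the covariance term.
var_uncorrelated = g1**2 * v1 + g2**2 * v2
var_correlated = var_uncorrelated + 2 * g1 * g2 * cov
```

With a positive correlation the covariance term inflates the output variance; a negative correlation would shrink it, which is exactly the under- or overestimation risk the highlights describe.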

  12. Hydraulic head interpolation using ANFIS—model selection and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Kurtulus, Bedri; Flipo, Nicolas

    2012-01-01

The aim of this study is to investigate the efficiency of ANFIS (adaptive neuro-fuzzy inference system) for interpolating hydraulic head in a 40-km² agricultural watershed of the Seine basin (France). Inputs of ANFIS are Cartesian coordinates and the elevation of the ground. Hydraulic head was measured at 73 locations during a snapshot campaign in September 2009, which characterizes the low-water-flow regime in the aquifer unit. The dataset was then split into three subsets using a square-based selection method: a calibration one (55%), a training one (27%), and a test one (18%). First, a method is proposed to select the best ANFIS model, which corresponds to a sensitivity analysis of ANFIS to the type and number of membership functions (MF). Triangular, Gaussian, general bell, and spline-based MF are used with 2, 3, 4, and 5 MF per input node. Performance criteria on the test subset are used to select the 5 best ANFIS models among 16. Each is then used to interpolate the hydraulic head distribution on a (50×50)-m grid, which is compared to the soil elevation. The cells where the hydraulic head is higher than the soil elevation are counted as "error cells." The ANFIS model that exhibits the fewest "error cells" is selected as the best ANFIS model. The model selection reveals that ANFIS models are very sensitive to the type and number of MF. Finally, a sensitivity analysis of the best ANFIS model, with four triangular MF, is performed on the interpolation grid, which shows that ANFIS remains stable to error propagation, with a higher sensitivity to soil elevation.

  13. On approaches to analyze the sensitivity of simulated hydrologic fluxes to model parameters in the community land model

    DOE PAGES

    Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; ...

    2015-12-04

Here, effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.

  14. Wavenumber selection based analysis in Raman spectroscopy improves skin cancer diagnostic specificity at high sensitivity levels (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Zhao, Jianhua; Zeng, Haishan; Kalia, Sunil; Lui, Harvey

    2017-02-01

Background: Raman spectroscopy is a non-invasive optical technique which can measure molecular vibrational modes within tissue. A large-scale clinical study (n = 518) has demonstrated that real-time Raman spectroscopy could distinguish malignant from benign skin lesions with good diagnostic accuracy; this was validated by a follow-up independent study (n = 127). Objective: Most of the previous diagnostic algorithms have typically been based on analyzing the full band of the Raman spectra, either in the fingerprint or high wavenumber regions. Our objective in this presentation is to explore wavenumber selection based analysis in Raman spectroscopy for skin cancer diagnosis. Methods: A wavenumber selection algorithm was implemented using variably-sized wavenumber windows, which were determined by the correlation coefficient between wavenumbers. Wavenumber windows were chosen based on accumulated frequency from leave-one-out cross-validated stepwise regression or the least absolute shrinkage and selection operator (LASSO). The diagnostic algorithms were then generated from the selected wavenumber windows using multivariate statistical analyses, including principal component and general discriminant analysis (PC-GDA) and partial least squares (PLS). A total cohort of 645 confirmed lesions from 573 patients encompassing skin cancers, precancers and benign skin lesions was included. Lesion measurements were divided into a training cohort (n = 518) and a testing cohort (n = 127) according to the measurement time. Results: The area under the receiver operating characteristic curve (ROC) improved from 0.861-0.891 to 0.891-0.911 and the diagnostic specificity for sensitivity levels of 0.99-0.90 increased respectively from 0.17-0.65 to 0.20-0.75 by selecting specific wavenumber windows for analysis. Conclusion: Wavenumber selection based analysis in Raman spectroscopy improves skin cancer diagnostic specificity at high sensitivity levels.

  15. Economic Evaluation of First-Line Treatments for Metastatic Renal Cell Carcinoma: A Cost-Effectiveness Analysis in A Health Resource–Limited Setting

    PubMed Central

    Wu, Bin; Dong, Baijun; Xu, Yuejuan; Zhang, Qiang; Shen, Jinfang; Chen, Huafeng; Xue, Wei

    2012-01-01

Background To estimate, from the perspective of the Chinese healthcare system, the economic outcomes of five different first-line strategies among patients with metastatic renal cell carcinoma (mRCC). Methods and Findings A decision-analytic model was developed to simulate the lifetime disease course associated with renal cell carcinoma. The health and economic outcomes of five first-line strategies (interferon-alfa, interleukin-2, interleukin-2 plus interferon-alfa, sunitinib and bevacizumab plus interferon-alfa) were estimated and assessed by indirect comparison. The clinical and utility data were taken from published studies. The cost data were estimated from local charge data and current Chinese practices. Sensitivity analyses were used to explore the impact of uncertainty regarding the results. The impact of the sunitinib patient assistance program (SPAP) was evaluated via scenario analysis. The base-case analysis showed that the sunitinib strategy yielded the maximum health benefits: 2.71 life years and 1.40 quality-adjusted life-years (QALY). The marginal cost-effectiveness (cost per additional QALY) gained via the sunitinib strategy compared with the conventional strategy was $220,384 (without SPAP, interleukin-2 plus interferon-alfa and bevacizumab plus interferon-alfa were dominated) and $16,993 (with SPAP, interferon-alfa, interleukin-2 plus interferon-alfa and bevacizumab plus interferon-alfa were dominated). In general, the results were sensitive to the hazard ratio of progression-free survival. The probabilistic sensitivity analysis demonstrated that the sunitinib strategy with SPAP was the most cost-effective approach when the willingness-to-pay threshold was over $16,000. Conclusions Our analysis suggests that traditional cytokine therapy is the cost-effective option in the Chinese healthcare setting. In some relatively developed regions, sunitinib with SPAP may be a favorable cost-effective alternative for mRCC. PMID:22412884

  16. Economic evaluation of first-line treatments for metastatic renal cell carcinoma: a cost-effectiveness analysis in a health resource-limited setting.

    PubMed

    Wu, Bin; Dong, Baijun; Xu, Yuejuan; Zhang, Qiang; Shen, Jinfang; Chen, Huafeng; Xue, Wei

    2012-01-01

To estimate, from the perspective of the Chinese healthcare system, the economic outcomes of five different first-line strategies among patients with metastatic renal cell carcinoma (mRCC). A decision-analytic model was developed to simulate the lifetime disease course associated with renal cell carcinoma. The health and economic outcomes of five first-line strategies (interferon-alfa, interleukin-2, interleukin-2 plus interferon-alfa, sunitinib and bevacizumab plus interferon-alfa) were estimated and assessed by indirect comparison. The clinical and utility data were taken from published studies. The cost data were estimated from local charge data and current Chinese practices. Sensitivity analyses were used to explore the impact of uncertainty regarding the results. The impact of the sunitinib patient assistance program (SPAP) was evaluated via scenario analysis. The base-case analysis showed that the sunitinib strategy yielded the maximum health benefits: 2.71 life years and 1.40 quality-adjusted life-years (QALY). The marginal cost-effectiveness (cost per additional QALY) gained via the sunitinib strategy compared with the conventional strategy was $220,384 (without SPAP, interleukin-2 plus interferon-alfa and bevacizumab plus interferon-alfa were dominated) and $16,993 (with SPAP, interferon-alfa, interleukin-2 plus interferon-alfa and bevacizumab plus interferon-alfa were dominated). In general, the results were sensitive to the hazard ratio of progression-free survival. The probabilistic sensitivity analysis demonstrated that the sunitinib strategy with SPAP was the most cost-effective approach when the willingness-to-pay threshold was over $16,000. Our analysis suggests that traditional cytokine therapy is the cost-effective option in the Chinese healthcare setting. In some relatively developed regions, sunitinib with SPAP may be a favorable cost-effective alternative for mRCC.
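
The "marginal cost-effectiveness" figures in this record are incremental cost-effectiveness ratios (ICERs): the cost difference between strategies divided by the QALY difference, judged against a willingness-to-pay threshold. A sketch with illustrative numbers (not the study's values):

```python
# (name, total lifetime cost in USD, QALYs) -- illustrative inputs only.
baseline = ("cytokine therapy", 20_000, 1.10)
candidate = ("targeted therapy", 86_000, 1.40)

delta_cost = candidate[1] - baseline[1]
delta_qaly = candidate[2] - baseline[2]
icer = delta_cost / delta_qaly          # cost per additional QALY gained

wtp = 100_000                           # assumed willingness-to-pay per QALY
cost_effective = icer <= wtp
```

A strategy that costs more yet yields no more QALYs than a comparator is "dominated" and dropped before ICERs are computed, which is how several arms are eliminated in the analysis above.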

  17. The Modularized Software Package ASKI - Full Waveform Inversion Based on Waveform Sensitivity Kernels Utilizing External Seismic Wave Propagation Codes

    NASA Astrophysics Data System (ADS)

    Schumacher, F.; Friederich, W.

    2015-12-01

We present the modularized software package ASKI, a flexible and extendable toolbox for seismic full waveform inversion (FWI) as well as sensitivity or resolution analysis operating on the sensitivity matrix. It utilizes established wave propagation codes for solving the forward problem and offers an alternative to the monolithic, inflexible and hard-to-modify codes that have typically been written for solving inverse problems. It is available under the GPL at www.rub.de/aski. The Gauss-Newton FWI method for 3D-heterogeneous elastic earth models is based on waveform sensitivity kernels and can be applied to inverse problems at various spatial scales in both Cartesian and spherical geometries. The kernels are derived in the frequency domain from Born scattering theory as the Fréchet derivatives of linearized full waveform data functionals, quantifying the influence of elastic earth model parameters on the particular waveform data values. As an important innovation, we keep two independent spatial descriptions of the earth model - one for solving the forward problem and one representing the inverted model updates. Thereby we account for the independent needs of spatial model resolution of the forward and inverse problems, respectively. Due to pre-integration of the kernels over the (in general much coarser) inversion grid, storage requirements for the sensitivity kernels are dramatically reduced. ASKI can be flexibly extended to other forward codes by providing it with specific interface routines that contain knowledge about forward code-specific file formats and auxiliary information provided by the new forward code. In order to sustain flexibility, the ASKI tools must communicate via file output/input; thus, large storage capacities need to be accessible in a convenient way. Storing the complete sensitivity matrix to file, however, permits the scientist full manual control over each step in a customized procedure of sensitivity/resolution analysis and full waveform inversion.

  18. Gamma Spectroscopy by Artificial Neural Network Coupled with MCNP

    NASA Astrophysics Data System (ADS)

    Sahiner, Huseyin

While neutron activation analysis is widely used in many areas, the sensitivity of the analysis depends on how it is conducted. Although the technique carries error compared to chemical analysis, its detection range is in parts per million, or sometimes parts per billion. Because of this sensitivity, neutron activation analysis becomes important when analyzing bio-samples. An artificial neural network is an attractive technique for complex systems. Although there are neural network applications in spectral analysis, training on simulated data to analyze experimental data has not been done. This study offers an improvement in spectral analysis and an optimization of the neural network for this purpose. The work considers five elements that are regarded as trace elements for bio-samples; however, the system is not limited to five elements. The only limitation of the study comes from data library availability in MCNP. A perceptron network was employed to identify five elements from gamma spectra. In quantitative analysis, better results were obtained when the neural fitting tool in MATLAB was used. As a training function, the Levenberg-Marquardt algorithm was used with 23 neurons in the hidden layer and 259 gamma spectra in the input. Because the study focuses on five elements, five neurons representing peak counts of five isotopes were used in the input layer. Five output neurons revealed mass information for these elements from irradiated kidney stones. Results showing a maximum error of 17.9% in APA, 24.9% in UA, 28.2% in COM, and 27.9% in STRU type showed the success of the neural network approach in analyzing gamma spectra. This high error was attributed to Zn, which has a very long decay half-life compared to the other elements. The simulation and experiments were made under a particular experimental setup (3 hours irradiation, 96 hours decay time, 8 hours counting time). Nevertheless, the approach can be generalized for different setups.

  19. Sensitivity and bias under conditions of equal and unequal academic task difficulty.

    PubMed

    Reed, Derek D; Martens, Brian K

    2008-01-01

    We conducted an experimental analysis of children's relative problem-completion rates across two workstations under conditions of equal (Experiment 1) and unequal (Experiment 2) problem difficulty. Results were described using the generalized matching equation and were evaluated for degree of schedule versus stimulus control. Experiment 1 involved a symmetrical choice arrangement in which the children could earn points exchangeable for rewards contingent on correct math problem completion. Points were delivered according to signaled variable-interval schedules at each workstation. For 2 children, relative rates of problem completion appeared to have been controlled by the schedule requirements in effect and matched relative rates of reinforcement, with sensitivity values near 1 and bias values near 0. Experiment 2 involved increasing the difficulty of math problems at one of the workstations. Sensitivity values for all 3 participants were near 1, but a substantial increase in bias toward the easier math problems was observed. This bias was possibly associated with responding at the more difficult workstation coming under stimulus control rather than schedule control.
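
The sensitivity and bias values reported come from fitting the generalized matching equation, log(B1/B2) = a·log(R1/R2) + log b, where the slope a is sensitivity and the intercept log b is bias. A least-squares fit on hypothetical session ratios:

```python
import math

# (behavior ratio B1/B2, reinforcement ratio R1/R2) per session -- hypothetical.
pairs = [(0.52, 0.5), (1.05, 1.0), (1.9, 2.0), (3.8, 4.0)]
xs = [math.log10(r) for _, r in pairs]   # log reinforcement ratios
ys = [math.log10(b) for b, _ in pairs]   # log behavior ratios

# Ordinary least-squares slope and intercept.
n = len(pairs)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)       # sensitivity (slope)
log_b = my - a * mx                      # bias (intercept)
```

Sensitivity near 1 with bias near 0, as in Experiment 1, indicates responding tracked the programmed schedules; a substantial intercept would signal bias toward one workstation, as observed in Experiment 2.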

  20. Enhanced external and culturally sensitive attributions after extended intercultural contact.

    PubMed

    Vollhardt, Johanna Ray

    2010-06-01

    This study examined the effect of close and extended intercultural contact on attributions for behaviour of out-group members. Specifically, it was hypothesized that extended intercultural contact would enhance the ability to make external and culturally sensitive attributions for ambiguous behaviour of out-group members, while decreasing the common tendency to overestimate internal factors. A content analysis of open-ended attributions supported these hypotheses, revealing that majority group members in Germany who had hosted an exchange student from another continent used significantly less internal and more external as well as culturally sensitive attributions to explain the behaviour described in critical intercultural incidents, compared to future hosts. The effect remained significant when controlling for perspective taking and prior intercultural experience. Moreover, the hypothesis was supported for scenarios describing different cultural groups (regardless of the exchange students' country of origin), suggesting a generalized effect. Problems of selection bias are discussed, and the importance of studying a range of positive outcomes of intercultural contact is emphasized.

  1. The Diversity of Cloud Responses to Twentieth Century Sea Surface Temperatures

    NASA Astrophysics Data System (ADS)

    Silvers, Levi G.; Paynter, David; Zhao, Ming

    2018-01-01

    Low-level clouds are shown to be the conduit between the observed sea surface temperatures (SST) and large decadal fluctuations of the top of the atmosphere radiative imbalance. The influence of low-level clouds on the climate feedback is shown for global mean time series as well as particular geographic regions. The changes of clouds are found to be important for a midcentury period of high sensitivity and a late century period of low sensitivity. These conclusions are drawn from analysis of amip-piForcing simulations using three atmospheric general circulation models (AM2.1, AM3, and AM4.0). All three models confirm the importance of the relationship between the global climate sensitivity and the eastern Pacific trends of SST and low-level clouds. However, this work argues that the variability of the climate feedback parameter is not driven by stratocumulus-dominated regions in the eastern ocean basins, but rather by the cloudy response in the rest of the tropics.

  2. Hypnotic Hypersensitivity to Volatile Anesthetics and Dexmedetomidine in Dopamine β-Hydroxylase Knockout Mice

    PubMed Central

    Hu, Frances Y.; Hanna, George M.; Han, Wei; Mardini, Feras; Thomas, Steven A.; Wyner, Abraham J.; Kelz, Max B.

    2012-01-01

    BACKGROUND Multiple lines of evidence suggest that the adrenergic system can modulate sensitivity to anesthetic-induced immobility and anesthetic-induced hypnosis as well. However, several considerations prevent the conclusion that the endogenous adrenergic ligands norepinephrine and epinephrine alter anesthetic sensitivity. METHODS Using dopamine β-hydroxylase (Dbh−/−) mice genetically engineered to lack the adrenergic ligands and their siblings with normal adrenergic levels, we test the contribution of the adrenergic ligands upon volatile anesthetic induction and emergence. Moreover, we investigate the effects of intravenous dexmedetomidine in adrenergic-deficient mice and their siblings using both righting reflex and processed electroencephalographic measures of anesthetic hypnosis. RESULTS We demonstrate that the loss of norepinephrine and epinephrine and not other neuromodulators copackaged in adrenergic neurons is sufficient to cause hypersensitivity to induction of volatile anesthesia. However, the most profound effect of adrenergic deficiency is retarding emergence from anesthesia, which takes two to three times as long in Dbh−/− mice for sevoflurane, isoflurane, and halothane. Having shown that Dbh−/− mice are hypersensitive to volatile anesthetics, we further demonstrate that their hypnotic hypersensitivity persists at multiple doses of dexmedetomidine. Dbh−/− mice exhibit up to 67% shorter latencies to loss of righting reflex and up to 545% longer durations of dexmedetomidine-induced general anesthesia. Central rescue of adrenergic signaling restores control-like dexmedetomidine sensitivity. A novel continuous electroencephalographic analysis illustrates that the longer duration of dexmedetomidine-induced hypnosis is not due to a motor confound, but occurs because of impaired anesthetic emergence. CONCLUSIONS Adrenergic signaling is essential for normal emergence from general anesthesia. 
Dexmedetomidine-induced general anesthesia does not depend upon inhibition of adrenergic neurotransmission. PMID:23042227

  3. Metrics for Evaluating the Accuracy of Solar Power Forecasting: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, J.; Hodge, B. M.; Florita, A.

    2013-10-01

Forecasting solar energy generation is a challenging task due to the variety of solar power systems and weather regimes encountered. Forecast inaccuracies can result in substantial economic losses and power system reliability issues. This paper presents a suite of generally applicable and value-based metrics for solar forecasting for a comprehensive set of scenarios (i.e., different time horizons, geographic locations, applications, etc.). In addition, a comprehensive framework is developed to analyze the sensitivity of the proposed metrics to three types of solar forecasting improvements using a design of experiments methodology, in conjunction with response surface and sensitivity analysis methods. The results show that the developed metrics can efficiently evaluate the quality of solar forecasts, and assess the economic and reliability impact of improved solar forecasting.

  4. Robust Optimization and Sensitivity Analysis with Multi-Objective Genetic Algorithms: Single- and Multi-Disciplinary Applications

    DTIC Science & Technology

    2007-01-01

multi-disciplinary optimization with uncertainty. Robust optimization and sensitivity analysis is usually used when an optimization model has...formulation is introduced in Section 2.3. We briefly discuss several definitions used in the sensitivity analysis in Section 2.4. Following in...2.5. 2.4 SENSITIVITY ANALYSIS In this section, we discuss several definitions used in Chapter 5 for Multi-Objective Sensitivity Analysis. Inner

  5. Not all ultrasounds are created equal: general sonography versus musculoskeletal sonography in the detection of rotator cuff tears

    PubMed Central

    Cole, Brandi; Twibill, Kristen; Lam, Patrick; Hackett, Lisa

    2016-01-01

    Background This cross-sectional analytic diagnostic accuracy study was designed to compare the accuracy of ultrasound performed by general sonographers in local radiology practices with ultrasound performed by an experienced musculoskeletal sonographer for the detection of rotator cuff tears. Methods In total, 238 patients undergoing arthroscopy who had previously had an ultrasound performed by both a general sonographer and a specialist musculoskeletal sonographer made up the study cohort. Accuracy of diagnosis was compared with the findings at arthroscopy. Results When analyzed as all tears versus no tears, musculoskeletal sonography had an accuracy of 97%, a sensitivity of 97% and a specificity of 95%, whereas general sonography had an accuracy of 91%, a sensitivity of 91% and a specificity of 86%. When the partial tears were split with those ≥ 50% thickness in the tear group and those < 50% thickness in the no-tear group, musculoskeletal sonography had an accuracy of 97%, a sensitivity of 97% and a specificity of 100% and general sonography had an accuracy of 85%, a sensitivity of 84% and a specificity of 87%. Conclusions Ultrasound in the hands of an experienced musculoskeletal sonographer is highly accurate for the diagnosis of rotator cuff tears. General sonography has improved subsequent to earlier studies but remains inferior to an ultrasound performed by a musculoskeletal sonographer. PMID:27660657
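
The accuracy, sensitivity, and specificity percentages compared in this record follow from the standard confusion-matrix definitions against the arthroscopy reference standard. A sketch with hypothetical counts:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)                # tears detected among true tears
    specificity = tn / (tn + fp)                # intact cuffs correctly cleared
    accuracy = (tp + tn) / (tp + fp + tn + fn)  # overall agreement
    return sensitivity, specificity, accuracy

# Hypothetical counts for a 238-shoulder cohort (not the study's raw data).
sens, spec, acc = diagnostic_metrics(tp=175, fp=6, tn=52, fn=5)
```

Note how a handful of false positives depresses specificity far more than the same number of false negatives depresses sensitivity, simply because far fewer shoulders are tear-free in this cohort.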

  6. The Importance of Proving the Null

    PubMed Central

    Gallistel, C. R.

    2010-01-01

    Null hypotheses are simple, precise, and theoretically important. Conventional statistical analysis cannot support them; Bayesian analysis can. The challenge in a Bayesian analysis is to formulate a suitably vague alternative, because the vaguer the alternative is (the more it spreads out the unit mass of prior probability), the more the null is favored. A general solution is a sensitivity analysis: Compute the odds for or against the null as a function of the limit(s) on the vagueness of the alternative. If the odds on the null approach 1 from above as the hypothesized maximum size of the possible effect approaches 0, then the data favor the null over any vaguer alternative to it. The simple computations and the intuitive graphic representation of the analysis are illustrated by the analysis of diverse examples from the current literature. They pose 3 common experimental questions: (a) Are 2 means the same? (b) Is performance at chance? (c) Are factors additive? PMID:19348549
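
The proposed sensitivity analysis can be sketched for question (b), "is performance at chance?", with a binomial model: compute the odds for the null (p = 0.5) against uniform alternatives whose width L caps the hypothesized maximum effect. The data below are hypothetical:

```python
import math

def binom_pmf(k, n, p):
    """Binomial probability of k successes in n trials."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

k, n = 25, 50  # observed successes: exactly at chance

def bayes_factor_null(L, steps=1000):
    """Odds for the null (p = 0.5) against a uniform alternative on
    [0.5, 0.5 + L]; the marginal likelihood uses the midpoint rule."""
    null_like = binom_pmf(k, n, 0.5)
    total = 0.0
    for i in range(steps):
        p = 0.5 + L * (i + 0.5) / steps
        total += binom_pmf(k, n, p)
    return null_like / (total / steps)

# Odds on the null as a function of the alternative's vagueness L.
bfs = [bayes_factor_null(L) for L in (0.01, 0.1, 0.3)]
```

With at-chance data the odds exceed 1 for every L and approach 1 from above as L → 0, which is precisely the pattern the analysis uses to argue the data favor the null over any vaguer alternative.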

  7. Thermal analysis of a conceptual design for a 250 We GPHS/FPSE space power system

    NASA Technical Reports Server (NTRS)

    Mccomas, Thomas J.; Dugan, Edward T.

    1991-01-01

A thermal analysis has been performed for a 250-We space nuclear power system which combines the US Department of Energy's general purpose heat source (GPHS) modules with a state-of-the-art free-piston Stirling engine (FPSE). The focus of the analysis is on the temperature of the iridium fuel clad within the GPHS modules. The thermal analysis results indicate fuel clad temperatures slightly higher than the design goal temperature of 1573 K. The results are considered favorable due to numerous conservative assumptions used. To demonstrate the effects of the conservatism, a brief sensitivity analysis is performed in which a few of the key system parameters are varied to determine their effect on the fuel clad temperatures. It is shown that analysis of a more detailed thermal model should yield fuel clad temperatures below 1573 K.

  8. Effect of high protein vs high carbohydrate intake on insulin sensitivity, body weight, hemoglobin A1c, and blood pressure in patients with type 2 diabetes mellitus.

    PubMed

    Sargrad, Karin R; Homko, Carol; Mozzoli, Maria; Boden, Guenther

    2005-04-01

Extremely low carbohydrate/high protein diets are popular methods of weight loss. Compliance with these diets is poor, and their long-term effectiveness and safety for patients with type 2 diabetes are not known. The objective of the current study was to evaluate effects of less extreme changes in carbohydrate or protein diets on weight, insulin sensitivity, glycemic control, cardiovascular risk factors (blood pressure, lipid levels), and renal function in obese inner-city patients with type 2 diabetes. Study patients were admitted to the General Clinical Research Center for 24 hours for initial tests including a hyperinsulinemic-euglycemic clamp (for measurement of insulin sensitivity), bioelectrical impedance analysis (BIA) and anthropometric measurements (for assessment of body composition), indirect calorimetry (for measurement of resting energy expenditure), electronic blood pressure monitoring, and blood chemistries to measure blood lipid levels along with renal and hepatic functions. Six patients with type 2 diabetes (five women and one man) were randomly assigned to the high-protein diet (40% carbohydrate, 30% protein, 30% fat) and six patients (four women and two men) to the high-carbohydrate diet (55% carbohydrate, 15% protein, 30% fat). All patients returned to the General Clinical Research Center weekly for monitoring of food records; dietary compliance; and measurements of body weight, blood pressure, and blood glucose. After 8 weeks on these diets, all patients were readmitted to the General Clinical Research Center for the same series of tests. Twelve study patients were taught to select either the high-protein or high-carbohydrate diet and were followed for 8 weeks. Insulin sensitivity, hemoglobin A1c, weight, and blood pressure were measured. Statistical significance was assessed using two-tailed Student's t tests and two-way repeated measures analysis of variance.
Both the high-carbohydrate and high-protein groups lost weight (-2.2+/-0.9 kg and -2.5+/-1.6 kg, respectively, P <.05) and the difference between the groups was not significant (P =.9). In the high-carbohydrate group, hemoglobin A1c decreased (from 8.2% to 6.9%, P <.03), fasting plasma glucose decreased (from 8.8 to 7.2 mmol/L, P <.02), and insulin sensitivity increased (from 12.8 to 17.2 micromol/kg/min, P <.03). No significant changes in these parameters occurred in the high-protein group; instead, systolic and diastolic blood pressures decreased (-10.5+/-2.3 mm Hg, P =.003 and -18+/-9.0 mm Hg, P <.05, respectively). After 2 months on these hypocaloric diets, each diet had either no or minimal effects on lipid levels (total cholesterol, low-density lipoprotein, high-density lipoprotein), renal function (blood urea nitrogen, serum creatinine), or hepatic function (aspartate aminotransferase, alanine aminotransferase, bilirubin).

  9. Distrust As a Disease Avoidance Strategy: Individual Differences in Disgust Sensitivity Regulate Generalized Social Trust.

    PubMed

    Aarøe, Lene; Osmundsen, Mathias; Petersen, Michael Bang

    2016-01-01

    Throughout human evolutionary history, cooperative contact with others has been fundamental for human survival. At the same time, social contact has been a source of threats. In this article, we focus on one particular viable threat, communicable disease, and investigate how motivations to avoid pathogens influence people's propensity to interact and cooperate with others, as measured by individual differences in generalized social trust. While extant studies on pathogen avoidance have argued that such motivations should prompt people to avoid interactions with outgroups specifically, we argue that these motivations should prompt people to avoid others more broadly. Empirically, we utilize two convenience samples and a large nationally representative sample of US citizens to demonstrate the existence of a robust and replicable effect of individual differences in pathogen disgust sensitivity on generalized social trust. We furthermore compare the effects of pathogen disgust sensitivity on generalized social trust and outgroup prejudice and explore whether generalized social trust to some extent constitutes a pathway between pathogen avoidance motivations and prejudice.

  11. Quality theory paper writing for medical examinations.

    PubMed

    Shukla, Samarth; Acharya, Sourya; Acharya, Neema; Shrivastava, Tripti; Kale, Anita

    2014-04-01

    Aim & Objectives: To develop tactful paper-writing skills through delivery and depiction of the expressions required for standard or superior essay writing; to understand the relevance and tact of theoretical expression in exam paper writing; to learn the indices of a standard or quality theory/essay answer (SAQ/LAQ); and to apply the knowledge and skill gained through these theory-writing exercises and assignments to achieve higher scores in examinations. The study subjects were divided into two groups: Group A (17 students) and Group B (10 students). The students were selected from the II M.B.B.S 4(th) term. Students of Group A were sensitized on how to write a theory paper and went through 4 phases, namely a pre-sensitization test, sensitization (imparting them with the skills of good theory paper writing through home assignments and deliberations/guidance), a post-sensitization test, and evaluation. Students of Group A (17 students) undertook theory tests twice (i.e., before and after sensitization), and students of Group B (10 students), who were not sensitized, took the theory test alongside post-sensitized Group A students (10 students selected at random). Both groups were given general pathology as the test syllabus, taught to both groups in didactic lectures during the preceding 6 months. The results of the pre- and post-sensitization tests from both groups were analyzed. Intra-group comparisons (pre-sensitized Group A with post-sensitized Group A) and inter-group comparisons (non-sensitized Group B with sensitized Group A) were made. Significant differences were found between the results of the pre- and post-sensitization tests in Group A (intra-group analysis) and in the inter-group (Group A and B) post-sensitization tests, as there was remarkable improvement in students' theory paper writing skills after sensitization of Group A.
Medical students should be mandatorily guided and exposed to the nuances and tact of writing theory papers for their examinations, as this gives them a better understanding of presentation, ultimately improving their scores in theory exams.

  12. 48 CFR 3004.470 - Security requirements for access to unclassified facilities, Information Technology resources...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... access to unclassified facilities, Information Technology resources, and sensitive information. 3004.470... Technology resources, and sensitive information. ... ACQUISITION REGULATION (HSAR) GENERAL ADMINISTRATIVE MATTERS Safeguarding Classified and Sensitive Information...

  13. 48 CFR 3004.470 - Security requirements for access to unclassified facilities, Information Technology resources...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... access to unclassified facilities, Information Technology resources, and sensitive information. 3004.470... Technology resources, and sensitive information. ... ACQUISITION REGULATION (HSAR) GENERAL ADMINISTRATIVE MATTERS Safeguarding Classified and Sensitive Information...

  14. 48 CFR 3004.470 - Security requirements for access to unclassified facilities, Information Technology resources...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... access to unclassified facilities, Information Technology resources, and sensitive information. 3004.470... Technology resources, and sensitive information. ... ACQUISITION REGULATION (HSAR) GENERAL ADMINISTRATIVE MATTERS Safeguarding Classified and Sensitive Information...

  15. 48 CFR 3004.470 - Security requirements for access to unclassified facilities, Information Technology resources...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... access to unclassified facilities, Information Technology resources, and sensitive information. 3004.470... Technology resources, and sensitive information. ... ACQUISITION REGULATION (HSAR) GENERAL ADMINISTRATIVE MATTERS Safeguarding Classified and Sensitive Information...

  16. 48 CFR 3004.470 - Security requirements for access to unclassified facilities, Information Technology resources...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... access to unclassified facilities, Information Technology resources, and sensitive information. 3004.470... Technology resources, and sensitive information. ... ACQUISITION REGULATION (HSAR) GENERAL ADMINISTRATIVE MATTERS Safeguarding Classified and Sensitive Information...

  17. Updated Chemical Kinetics and Sensitivity Analysis Code

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    2005-01-01

    An updated version of the General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code has become available. A prior version of LSENS was described in "Program Helps to Determine Chemical-Reaction Mechanisms" (LEW-15758), NASA Tech Briefs, Vol. 19, No. 5 (May 1995), page 66. To recapitulate: LSENS solves complex, homogeneous, gas-phase, chemical-kinetics problems (e.g., combustion of fuels) that are represented by sets of many coupled, nonlinear, first-order ordinary differential equations. LSENS has been designed for flexibility, convenience, and computational efficiency. The present version of LSENS incorporates mathematical models for (1) a static system; (2) steady, one-dimensional inviscid flow; (3) reaction behind an incident shock wave, including boundary layer correction; (4) a perfectly stirred reactor; and (5) a perfectly stirred reactor followed by a plug-flow reactor. In addition, LSENS can compute equilibrium properties for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. For static and one-dimensional-flow problems, including those behind an incident shock wave and following a perfectly stirred reactor calculation, LSENS can compute sensitivity coefficients of dependent variables and their derivatives, with respect to the initial values of dependent variables and/or the rate-coefficient parameters of the chemical reactions.
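
    The sensitivity coefficients LSENS computes (derivatives of dependent variables with respect to rate-coefficient parameters) can be illustrated with a minimal sketch. This is a hypothetical one-reaction system, not the LSENS code: for d[A]/dt = -k[A], the coefficient d[A](t)/dk is approximated by central finite differences around the nominal rate and checked against the analytic value -t*A0*exp(-k*t).

```python
import math

def integrate_first_order(a0, k, t_end, dt=1e-4):
    """Forward-Euler integration of the toy kinetics d[A]/dt = -k[A]."""
    a, t = a0, 0.0
    while t < t_end:
        a += dt * (-k * a)
        t += dt
    return a

def sensitivity_fd(a0, k, t_end, rel=1e-6):
    """Central finite-difference sensitivity coefficient d[A](t_end)/dk."""
    dk = rel * k
    hi = integrate_first_order(a0, k + dk, t_end)
    lo = integrate_first_order(a0, k - dk, t_end)
    return (hi - lo) / (2.0 * dk)

# Analytic check: [A](t) = A0 exp(-k t), so d[A]/dk = -t A0 exp(-k t)
a0, k, t_end = 1.0, 2.0, 1.0
s_num = sensitivity_fd(a0, k, t_end)
s_exact = -t_end * a0 * math.exp(-k * t_end)
```

Production codes such as LSENS compute these derivatives analytically alongside the ODE solve, which is both cheaper and more accurate than the finite-difference sketch above.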

  18. Marine electrical resistivity imaging of submarine groundwater discharge: Sensitivity analysis and application in Waquoit Bay, Massachusetts, USA

    USGS Publications Warehouse

    Henderson, Rory; Day-Lewis, Frederick D.; Abarca, Elena; Harvey, Charles F.; Karam, Hanan N.; Liu, Lanbo; Lane, John W.

    2010-01-01

    Electrical resistivity imaging has been used in coastal settings to characterize fresh submarine groundwater discharge and the position of the freshwater/salt-water interface because of the relation of bulk electrical conductivity to pore-fluid conductivity, which in turn is a function of salinity. Interpretation of tomograms for hydrologic processes is complicated by inversion artifacts, uncertainty associated with survey geometry limitations, measurement errors, and choice of regularization method. Variation of seawater over tidal cycles poses unique challenges for inversion. The capabilities and limitations of resistivity imaging are presented for characterizing the distribution of freshwater and saltwater beneath a beach. The experimental results provide new insight into fresh submarine groundwater discharge at Waquoit Bay National Estuarine Research Reserve, East Falmouth, Massachusetts (USA). Tomograms from the experimental data indicate that fresh submarine groundwater discharge may shut down at high tide, whereas temperature data indicate that the discharge continues throughout the tidal cycle. Sensitivity analysis and synthetic modeling provide insight into resolving power in the presence of a time-varying saline water layer. In general, vertical electrodes and cross-hole measurements improve the inversion results regardless of the tidal level, whereas the resolution of surface arrays is more sensitive to time-varying saline water layer.
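
    The petrophysical link the authors exploit, bulk electrical conductivity as a function of pore-fluid conductivity, is commonly written as Archie's law. A minimal sketch follows; the exponent m and factor a below are hypothetical values for an unconsolidated sand, not parameters from this study.

```python
def archie_bulk_conductivity(fluid_conductivity_s_per_m, porosity, m=1.8, a=1.0):
    """Archie's law: sigma_bulk = (1/a) * porosity**m * sigma_fluid.
    The cementation exponent m and tortuosity factor a are formation-specific;
    the defaults here are hypothetical choices for a loose sand."""
    return porosity ** m * fluid_conductivity_s_per_m / a

# Seawater (~5 S/m) versus fresh pore water (~0.05 S/m) in a 0.35-porosity sand
sea = archie_bulk_conductivity(5.0, 0.35)
fresh = archie_bulk_conductivity(0.05, 0.35)
ratio = sea / fresh
```

The two orders of magnitude separating saline and fresh pore fluids is what makes the freshwater/salt-water interface a strong resistivity target.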

  19. Evaluation of prediction capability, robustness, and sensitivity in non-linear landslide susceptibility models, Guantánamo, Cuba

    NASA Astrophysics Data System (ADS)

    Melchiorre, C.; Castellanos Abella, E. A.; van Westen, C. J.; Matteucci, M.

    2011-04-01

    This paper describes a procedure for landslide susceptibility assessment based on artificial neural networks, and focuses on the estimation of the prediction capability, robustness, and sensitivity of susceptibility models. The study is carried out in the Guantanamo Province of Cuba, where 186 landslides were mapped using photo-interpretation. Twelve conditioning factors were mapped including geomorphology, geology, soils, landuse, slope angle, slope direction, internal relief, drainage density, distance from roads and faults, rainfall intensity, and ground peak acceleration. A methodology was used that subdivided the database in 3 subsets. A training set was used for updating the weights. A validation set was used to stop the training procedure when the network started losing generalization capability, and a test set was used to calculate the performance of the network. A 10-fold cross-validation was performed in order to show that the results are repeatable. The prediction capability, the robustness analysis, and the sensitivity analysis were tested on 10 mutually exclusive datasets. The results show that by means of artificial neural networks it is possible to obtain models with high prediction capability and high robustness, and that an exploration of the effect of the individual variables is possible, even if they are considered as a black-box model.
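
    The data handling described above, mutually exclusive subsets and 10-fold cross-validation, can be sketched as a generic index partition. The fold count and the sample size of 186 mapped landslides come from the abstract; the seeding is arbitrary.

```python
import random

def k_fold_indices(n_samples, k=10, seed=0):
    """Partition sample indices into k mutually exclusive, near-equal folds.
    Each fold can serve in turn as the test set, with the remainder split
    into training and validation subsets."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

folds = k_fold_indices(186, k=10)  # 186 mapped landslides, as in the study
```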

  20. Real Time Search Algorithm for Observation Outliers During Monitoring Engineering Constructions

    NASA Astrophysics Data System (ADS)

    Latos, Dorota; Kolanowski, Bogdan; Pachelski, Wojciech; Sołoducha, Ryszard

    2017-12-01

    Real-time monitoring of engineering structures in case of an emergency or disaster requires collection of a large amount of data to be processed by specific analytical techniques. A quick and accurate assessment of the state of the object is crucial for a possible rescue action. One of the more significant evaluation methods for large sets of data, collected either during a specified interval of time or permanently, is time series analysis. This paper presents a search algorithm for those time series elements which deviate from their expected values during monitoring. Quick and proper detection of observations indicating anomalous behavior of the structure allows a variety of preventive actions to be taken. The mathematical formulae used in the algorithm provide maximal sensitivity for detecting even minimal changes in the object's behavior. Sensitivity analyses were conducted for the moving average algorithm as well as for the Douglas-Peucker algorithm used for generalization of linear objects in GIS. In addition to determining the size of deviations from the average, the so-called Hausdorff distance was used. The simulations carried out, together with verification against laboratory survey data, showed that the approach provides sufficient sensitivity for automatic real-time analysis of large amounts of data obtained from different and various sensors (total stations, leveling, cameras, radar).
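
    A minimal sketch of the two ingredients named above: a moving-average deviation test and the discrete Hausdorff distance. The window, threshold, and sample series are hypothetical, not the paper's parameters.

```python
def hausdorff(a, b):
    """Discrete Hausdorff distance between two 1-D point sets:
    the largest nearest-neighbor distance in either direction."""
    d_ab = max(min(abs(x - y) for y in b) for x in a)
    d_ba = max(min(abs(x - y) for y in a) for x in b)
    return max(d_ab, d_ba)

def flag_outliers(series, window=5, threshold=3.0):
    """Flag points deviating from the trailing moving average by more than
    `threshold` times the trailing mean absolute deviation."""
    flagged = []
    for i in range(window, len(series)):
        win = series[i - window:i]
        mean = sum(win) / window
        mad = sum(abs(x - mean) for x in win) / window
        if mad > 0 and abs(series[i] - mean) > threshold * mad:
            flagged.append(i)
    return flagged

# A stable displacement record with one anomalous observation at index 6
obs = [10.0, 10.1, 9.9, 10.0, 10.1, 10.0, 14.0, 10.1, 9.9, 10.0]
```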

  1. Do measures of depressive symptoms function differently in people with spinal cord injury versus primary care patients: the CES-D, PHQ-9, and PROMIS®-D.

    PubMed

    Cook, Karon F; Kallen, Michael A; Bombardier, Charles; Bamer, Alyssa M; Choi, Seung W; Kim, Jiseon; Salem, Rana; Amtmann, Dagmar

    2017-01-01

    To evaluate whether items of three measures of depressive symptoms function differently in persons with spinal cord injury (SCI) than in persons from a primary care sample. This study was a retrospective analysis of responses to the Patient Health Questionnaire depression scale, the Center for Epidemiological Studies Depression scale, and the National Institutes of Health Patient-Reported Outcomes Measurement Information System (PROMIS ® ) version 1.0 eight-item depression short form 8b (PROMIS-D). The presence of differential item functioning (DIF) was evaluated using ordinal logistic regression. No items of any of the three target measures were flagged for DIF based on standard criteria. In a follow-up sensitivity analysis, the criterion was changed to make the analysis more sensitive to potential DIF. Scores were corrected for DIF flagged under this criterion. Minimal differences were found between the original scores and those corrected for DIF under the sensitivity criterion. The three depression screening measures evaluated in this study did not perform differently in samples of individuals with SCI compared to general and community samples. Transdiagnostic symptoms did not appear to spuriously inflate depression severity estimates when administered to people with SCI.

  2. SPECIAL ISSUE ON OPTICAL PROCESSING OF INFORMATION: Transducers of physical fields based on two-channel coaxial optical fibres

    NASA Astrophysics Data System (ADS)

    Busurin, V. I.; Brazhnikova, T. Yu; Korobkov, V. V.; Prokhorov, N. I.

    1995-10-01

    An analysis is made of a general basic configuration and of the transfer function of a fibre-optic transducer based on controlled coupling in a multilayer two-channel coaxial optical fibre. The influence of the structure parameters and of external factors on the errors of a sensitive element in such a transducer is considered. The results are given of an investigation of the characteristics of a number of transducers constructed in accordance with the basic configuration.

  3. Analysis of Multiple Cracks in an Infinite Functionally Graded Plate

    NASA Technical Reports Server (NTRS)

    Shbeeb, N. I.; Binienda, W. K.; Kreider, K. L.

    1999-01-01

    A general methodology was constructed to develop the fundamental solution for a crack embedded in an infinite non-homogeneous material in which the shear modulus varies exponentially with the y coordinate. The fundamental solution was used to generate a solution to fully interactive multiple crack problems for stress intensity factors and strain energy release rates. Parametric studies were conducted for two crack configurations. The model displayed sensitivity to crack distance, relative angular orientation, and to the coefficient of nonhomogeneity.

  4. An empirical investigation on the forecasting ability of mallows model averaging in a macro economic environment

    NASA Astrophysics Data System (ADS)

    Yin, Yip Chee; Hock-Eam, Lim

    2012-09-01

    This paper investigates the forecasting ability of Mallows Model Averaging (MMA) by conducting an empirical analysis of the GDP growth rates of five Asian countries: Malaysia, Thailand, the Philippines, Indonesia, and China. Results reveal that MMA has no noticeable difference in predictive ability compared to the general autoregressive fractionally integrated moving average (ARFIMA) model, and that its predictive ability is sensitive to the effect of financial crises. MMA could be an alternative forecasting method for samples without recent outliers such as financial crises.

  5. Ferrographic analysis of wear particles from sliding elastohydrodynamic experiments

    NASA Technical Reports Server (NTRS)

    Jones, W. R., Jr.; Nagaraj, H. S.; Winer, W. O.

    1978-01-01

    The Ferrograph was used to analyze wear debris generated in a sliding elastohydrodynamic contact. The amount of wear debris correlates well with the ratio of film thickness to composite surface roughness (A ratio). The general wear level parameter and the wear severity index yielded similar correlations with average A ratios. Essentially all the generated wear particles were of the normal rubbing wear type. The Ferrograph was more sensitive in detecting the wear debris than was the commonly used emission spectrograph.

  6. A design methodology for nonlinear systems containing parameter uncertainty

    NASA Technical Reports Server (NTRS)

    Young, G. E.; Auslander, D. M.

    1983-01-01

    In the present design methodology for nonlinear systems containing parameter uncertainty, a generalized sensitivity analysis is incorporated which employs parameter space sampling and statistical inference. For the case of a system with j adjustable and k nonadjustable parameters, this methodology (which includes an adaptive random search strategy) is used to determine the combination of j adjustable parameter values which maximize the probability of those performance indices which simultaneously satisfy design criteria in spite of the uncertainty due to k nonadjustable parameters.
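
    The methodology can be sketched on a toy scalar design problem. The performance index, criterion value of 0.8, and shrink factor below are hypothetical, with one adjustable parameter x and one uniformly distributed nonadjustable parameter w; the search maximizes the estimated probability that the design criterion is satisfied despite the uncertainty in w.

```python
import random

rng = random.Random(42)

def performance(x, w):
    """Hypothetical performance index for design x under uncertain
    nonadjustable parameter w (smaller is better)."""
    return (x - 1.5) ** 2 + 0.5 * w * x

def satisfaction_probability(x, n_samples=200, criterion=0.8):
    """Estimate P(performance <= criterion) by sampling w ~ U(0, 1)."""
    hits = sum(performance(x, rng.uniform(0.0, 1.0)) <= criterion
               for _ in range(n_samples))
    return hits / n_samples

def adaptive_random_search(lo, hi, n_iter=300):
    """Keep the incumbent design and shrink the sampling width whenever the
    estimated satisfaction probability improves."""
    center = (lo + hi) / 2.0
    width = (hi - lo) / 2.0
    best_x, best_p = center, satisfaction_probability(center)
    for _ in range(n_iter):
        x = min(hi, max(lo, center + rng.uniform(-width, width)))
        p = satisfaction_probability(x)
        if p > best_p:
            best_x, best_p, center = x, p, x
            width *= 0.9  # focus the search around better designs
    return best_x, best_p

best_x, best_p = adaptive_random_search(0.0, 3.0)
```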

  7. Implementation of structural response sensitivity calculations in a large-scale finite-element analysis system

    NASA Technical Reports Server (NTRS)

    Giles, G. L.; Rogers, J. L., Jr.

    1982-01-01

    The implementation includes a generalized method for specifying element cross-sectional dimensions as design variables that can be used in analytically calculating derivatives of output quantities from static stress, vibration, and buckling analyses for both membrane and bending elements. Limited sample results for static displacements and stresses are presented to indicate the advantages of analytically calculating response derivatives compared to finite-difference methods. Continuing developments to implement these procedures into an enhanced version of the system are also discussed.

  8. Translating the covenant: The behavior analyst as ambassador and translator

    PubMed Central

    Foxx, R. M.

    1996-01-01

    Behavior analysts should be sensitive to how others react to and interpret our language because it is inextricably related to our image. Our use of conceptual revision, with such terms as punishment, has created communicative confusion and hostility on the part of general and professional audiences we have attempted to influence. We must, therefore, adopt the role of ambassador and translator in the nonbehavioral world. A number of recommendations are offered for promoting, translating, and disseminating behavior analysis. PMID:22478256

  10. Sensitivity analysis of simulated SOA loadings using a variance-based statistical approach: SENSITIVITY ANALYSIS OF SOA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrivastava, Manish; Zhao, Chun; Easter, Richard C.

    We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to 7 selected tunable model parameters: 4 involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semi-volatile and intermediate volatility organics (SIVOCs), and NOx; 2 involving dry deposition of SOA precursor gases; and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250-member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether particle-phase transformation of SOA from semi-volatile to non-volatile is on or off, is the dominant contributor to variance of simulated surface-level daytime SOA (65% domain-average contribution). We also split the simulations into 2 subsets of 125 each, depending on whether the volatility transformation is turned on or off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to the dominance of intermediate to high NOx conditions throughout the simulated domain. The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance. This study highlights the large sensitivity of SOA loadings to the particle-phase transformation of SOA volatility, which is neglected in most previous models.
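
    The variance-based decomposition used here can be sketched in a toy setting. The two-input linear model and bin counts below are hypothetical; the first-order index of input X_i is estimated as Var(E[Y|X_i]) / Var(Y), with the conditional expectation approximated by binning X_i.

```python
import random

rng = random.Random(1)

def model(x1, x2):
    """Toy response standing in for the chemistry model: X1 dominates."""
    return 4.0 * x1 + 1.0 * x2

n = 20000
x1s = [rng.random() for _ in range(n)]
x2s = [rng.random() for _ in range(n)]
ys = [model(a, b) for a, b in zip(x1s, x2s)]

def variance(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)

def first_order_index(xs, ys, n_bins=20):
    """First-order sensitivity S_i = Var(E[Y|X_i]) / Var(Y), estimated by
    binning X_i and taking the variance of per-bin mean responses."""
    bins = [[] for _ in range(n_bins)]
    for x, y in zip(xs, ys):
        bins[min(n_bins - 1, int(x * n_bins))].append(y)
    cond_means = [sum(b) / len(b) for b in bins if b]
    return variance(cond_means) / variance(ys)

s1 = first_order_index(x1s, ys)
s2 = first_order_index(x2s, ys)
```

For this linear model the exact indices are 16/17 and 1/17, so the estimates should identify X1 as the dominant contributor, mirroring how the study ranks the volatility-transformation parameter.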

  11. General psychopathology in anorexia nervosa: the role of psychosocial factors.

    PubMed

    Karatzias, Thanos; Chouliara, Zoë; Power, Kevin; Collin, Paula; Yellowlees, Alex; Grierson, David

    2010-01-01

    The aim of the present study was to investigate psychosocial correlates of comorbid psychopathology. Data were collected from a total of 90 female inpatients with anorexia nervosa (AN). Higher levels of general psychopathology were detected in depression, interpersonal sensitivity, obsessive-compulsive and anxiety subscales of the Symptom Checklist (SCL)-90. Regression analysis also revealed that higher levels of psychopathology across SCL-90 subscales in AN patients are significantly associated with an earlier age of onset of the condition, higher levels of anorectic psychopathology as measured by Eating Disorders Examination, lower self-esteem as measured by Multidimensional Self-Esteem Inventory and social support levels as measured by Quality of Social Network and Social Support Questionnaire. Considering the high levels of general psychopathology in people with AN, routine clinical practice should aim for a comprehensive assessment of such. Given the strong association between psychosocial factors such as self-esteem, social support and general psychopathology, psychological therapies could play an important role in facilitating emotional recovery. Copyright © 2010 John Wiley & Sons, Ltd.

  12. Simulator study of conventional general aviation instrument displays in path-following tasks with emphasis on pilot-induced oscillations

    NASA Technical Reports Server (NTRS)

    Adams, J. J.

    1980-01-01

    A study of the use of conventional general aviation instruments by general aviation pilots in a six-degree-of-freedom, fixed-base simulator was conducted. The tasks performed were tracking a VOR radial and making an ILS approach to landing. A special feature of the tests was that the sensitivity of the displacement-indicating instruments (the RMI, CDI, and HSI) was kept constant at values corresponding to 5 n. mi. and 1.25 n. mi. from the station. Both statistical and pilot-model analyses of the data were made. The results show that performance in path following improved with increases in display sensitivity up to the highest sensitivity tested. At this maximum test sensitivity, which corresponds to the sensitivity existing at 1.25 n. mi. for the ILS glide-slope transmitter, tracking accuracy was no better than it was at 5 n. mi. from the station, and the pilot-aircraft system exhibited a marked reduction in damping. In some cases, a pilot-induced, long-period unstable oscillation occurred.

  13. 41 CFR 109-1.5109 - Control of sensitive items.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... administrative control of sensitive items assigned for general use within an organizational unit as appropriate... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false Control of sensitive...-INTRODUCTION 1.51-Personal Property Management Standards and Practices § 109-1.5109 Control of sensitive items...

  14. Efficient stochastic approaches for sensitivity studies of an Eulerian large-scale air pollution model

    NASA Astrophysics Data System (ADS)

    Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.

    2017-10-01

    Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been done. A comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers has been presented. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reaction rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values have been generated by the Unified Danish Eulerian Model. The sensitivity study has been done for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration, and especially for computing sensitivity indices that are small in value. This is a crucial property, since even small indices may need to be estimated accurately in order to achieve a more accurate assessment of the distribution of the inputs' influence and a more reliable interpretation of the mathematical model results.
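
    The Fibonacci lattice rule mentioned above can be sketched in two dimensions. The test integrand is hypothetical; the generating vector (1, F_{k-1}) with N = F_k points is the classical two-dimensional Fibonacci rule.

```python
import math

def fibonacci_lattice(n_points, n_prev):
    """Rank-1 lattice: points ({i/N}, {i*F_{k-1}/N}) for i = 0..N-1,
    with N = F_k and n_prev = F_{k-1} the preceding Fibonacci number."""
    return [((i / n_points) % 1.0, (i * n_prev / n_points) % 1.0)
            for i in range(n_points)]

def qmc_integrate(points, f):
    """Equal-weight quadrature over the lattice points."""
    return sum(f(x, y) for x, y in points) / len(points)

# Smooth test integrand with known integral (2/pi)**2 over the unit square
f = lambda x, y: math.sin(math.pi * x) * math.sin(math.pi * y)
exact = (2.0 / math.pi) ** 2
approx = qmc_integrate(fibonacci_lattice(987, 610), f)  # F_16 = 987, F_15 = 610
```

For smooth integrands such lattice rules converge far faster than plain Monte Carlo at the same sample count, which is the practical advantage exploited in the study.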

  15. Extinction, survival or recovery of large predatory fishes

    PubMed Central

    Myers, Ransom A.; Worm, Boris

    2005-01-01

    Large predatory fishes have long played an important role in marine ecosystems and fisheries. Overexploitation, however, is gradually diminishing this role. Recent estimates indicate that exploitation has depleted large predatory fish communities worldwide by at least 90% over the past 50–100 years. We demonstrate that these declines are general, independent of methodology, and even higher for sensitive species such as sharks. We also attempt to predict the future prospects of large predatory fishes. (i) An analysis of maximum reproductive rates predicts the collapse and extinction of sensitive species under current levels of fishing mortality. Sensitive species occur in marine habitats worldwide and have to be considered in most management situations. (ii) We show that to ensure the survival of sensitive species in the northwest Atlantic fishing mortality has to be reduced by 40–80%. (iii) We show that rapid recovery of community biomass and diversity usually occurs when fishing mortality is reduced. However, recovery is more variable for single species, often because of the influence of species interactions. We conclude that management of multi-species fisheries needs to be tailored to the most sensitive, rather than the more robust species. This requires reductions in fishing effort, reduction in bycatch mortality and protection of key areas to initiate recovery of severely depleted communities. PMID:15713586

  17. Impact of the time scale of model sensitivity response on coupled model parameter estimation

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Zhang, Shaoqing; Li, Shan; Liu, Zhengyu

    2017-11-01

    That a model has sensitivity responses to parameter uncertainties is a key concept in implementing model parameter estimation using filtering theory and methodology. Depending on the nature of associated physics and characteristic variability of the fluid in a coupled system, the response time scales of a model to parameters can be different, from hourly to decadal. Unlike state estimation, where the update frequency is usually linked with observational frequency, the update frequency for parameter estimation must be associated with the time scale of the model sensitivity response to the parameter being estimated. Here, with a simple coupled model, the impact of model sensitivity response time scales on coupled model parameter estimation is studied. The model includes characteristic synoptic to decadal scales by coupling a long-term varying deep ocean with a slow-varying upper ocean forced by a chaotic atmosphere. Results show that, using the update frequency determined by the model sensitivity response time scale, both the reliability and quality of parameter estimation can be improved significantly, and thus the estimated parameters make the model more consistent with the observation. These simple model results provide a guideline for when real observations are used to optimize the parameters in a coupled general circulation model for improving climate analysis and prediction initialization.

  18. Sensitivity of macrobenthic secondary production to trawling in the English sector of the Greater North Sea: A biological trait approach

    NASA Astrophysics Data System (ADS)

    Bolam, S. G.; Coggan, R. C.; Eggleton, J.; Diesing, M.; Stephens, D.

    2014-01-01

    Demersal trawling constitutes the most significant human impact on both the structure and functioning of coastal seabed fauna. While a number of studies have assessed the impacts of trawling on faunal community structure and the degree to which different taxa are vulnerable to trawling, few have focused on how these impacts affect important ecological functions of the seabed. In this study, we use biological trait analysis (BTA) to assess the relative sensitivity of benthic macrofauna to trawling, in both the short- and long-term, and use this information to describe the spatial variation in sensitivity of secondary production for the Greater North Sea (GNS). Within the GNS, estimates of total production varied by almost three orders of magnitude, from 1.66 kJ m⁻² y⁻¹ to 968.9 kJ m⁻² y⁻¹. Large-scale patterns were observed in the proportion of secondary production derived from trawling-sensitive taxa. In the southern North Sea, total production is predominantly governed by taxa with low sensitivity to trawling, whereas production is relatively trawling-sensitive in the northern North Sea and western English Channel. In general, the more sensitive and productive regions are associated with poorly-sorted, gravelly or muddy sediments, while the less sensitive and less productive regions are associated with well-sorted, sandy substrates. These relationships between production sensitivity and environmental features are primarily due to variations in long-term recovery; total production of most assemblages is highly sensitive to the direct impacts of trawling. We discuss the implications of these findings for management decisions to improve the environmental sustainability of trawling.

  19. Moment-based metrics for global sensitivity analysis of hydrological systems

    NASA Astrophysics Data System (ADS)

    Dell'Oca, Aronne; Riva, Monica; Guadagnini, Alberto

    2017-12-01

    We propose new metrics to assist global sensitivity analysis, GSA, of hydrological and Earth systems. Our approach allows assessing the impact of uncertain parameters on main features of the probability density function, pdf, of a target model output, y. These include the expected value of y, the spread around the mean and the degree of symmetry and tailedness of the pdf of y. Since reliable assessment of higher-order statistical moments can be computationally demanding, we couple our GSA approach with a surrogate model, approximating the full model response at a reduced computational cost. Here, we consider the generalized polynomial chaos expansion (gPCE), other model reduction techniques being fully compatible with our theoretical framework. We demonstrate our approach through three test cases, including an analytical benchmark, a simplified scenario mimicking pumping in a coastal aquifer and a laboratory-scale conservative transport experiment. Our results allow ascertaining which parameters can impact some moments of the model output pdf while being uninfluential to others. We also investigate the error associated with the evaluation of our sensitivity metrics by replacing the original system model through a gPCE. Our results indicate that the construction of a surrogate model with increasing level of accuracy might be required depending on the statistical moment considered in the GSA. The approach is fully compatible with (and can assist the development of) analysis techniques employed in the context of reduction of model complexity, model calibration, design of experiment, uncertainty quantification and risk assessment.
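The moment-based idea described above can be illustrated with a small Monte Carlo sketch (not the authors' gPCE-based implementation): for a toy model, we estimate how much conditioning on each uncertain parameter shifts the mean and variance of the output pdf relative to their unconditional values. The model function, sample size, and binning scheme below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test function standing in for a hydrological model response y.
def model(x1, x2):
    return x1 ** 2 + 0.1 * x2

n = 20000
x1 = rng.uniform(0.0, 1.0, n)
x2 = rng.uniform(0.0, 1.0, n)
y = model(x1, x2)

def moment_sensitivity(x, y, n_bins=20):
    """Average shift of the conditional mean/variance of y when x is (nearly)
    fixed, normalised by the unconditional moment (a moment-based GSA index)."""
    bins = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(x, bins[1:-1]), 0, n_bins - 1)
    cond_mean = np.array([y[idx == b].mean() for b in range(n_bins)])
    cond_var = np.array([y[idx == b].var() for b in range(n_bins)])
    s_mean = np.abs(cond_mean - y.mean()).mean() / abs(y.mean())
    s_var = np.abs(cond_var - y.var()).mean() / y.var()
    return s_mean, s_var

s1 = moment_sensitivity(x1, y)
s2 = moment_sensitivity(x2, y)
# In this toy model x1 dominates both the mean and the variance of y,
# while x2 is nearly uninfluential for the variance.
```

A parameter can thus be influential for one moment of the output pdf while being uninfluential for another, which is the distinction the proposed metrics are designed to capture.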

  20. Cytochrome P450-mediated warfarin metabolic ability is not a critical determinant of warfarin sensitivity in avian species: In vitro assays in several birds and in vivo assays in chicken.

    PubMed

    Watanabe, Kensuke P; Kawata, Minami; Ikenaka, Yoshinori; Nakayama, Shouta M M; Ishii, Chihiro; Darwish, Wageh Sobhi; Saengtienchai, Aksorn; Mizukawa, Hazuki; Ishizuka, Mayumi

    2015-10-01

    Coumarin-derivative anticoagulant rodenticides used for rodent control are posing a serious risk to wild bird populations. For warfarin, a classic coumarin derivative, chickens have a high median lethal dose (LD50), whereas mammalian species generally have much lower LD50. Large interspecies differences in sensitivity to warfarin are to be expected. The authors previously reported substantial differences in warfarin metabolism among avian species; however, the actual in vivo pharmacokinetics have yet to be elucidated, even in the chicken. In the present study, the authors sought to provide an in-depth characterization of warfarin metabolism in birds using in vivo and in vitro approaches. A kinetic analysis of warfarin metabolism was performed using liver microsomes of 4 avian species, and the metabolic abilities of the chicken and crow were much higher in comparison with those of the mallard and ostrich. Analysis of in vivo metabolites from chickens showed that excretions predominantly consisted of 4'-hydroxywarfarin, which was consistent with the in vitro results. Pharmacokinetic analysis suggested that chickens have an unexpectedly long half-life despite showing high metabolic ability in vitro. The results suggest that the half-life of warfarin in other bird species could be longer than that in the chicken and that warfarin metabolism may not be a critical determinant of species differences with respect to warfarin sensitivity. © 2015 SETAC.

  1. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1993-01-01

    In this study involving advanced fluid flow codes, an incremental iterative formulation (also known as the delta or correction form) together with the well-known spatially-split approximate factorization algorithm, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For smaller 2D problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods are needed for larger 2D and future 3D applications, however, because direct methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioning of the coefficient matrix; this problem can be overcome when these equations are cast in the incremental form. These and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two sample airfoil problems: (1) subsonic low Reynolds number laminar flow; and (2) transonic high Reynolds number turbulent flow.

  2. Structural, functional and pH sensitive release characteristics of water-soluble polysaccharide from the seeds of Albizia lebbeck L.

    PubMed

    Kumar Varma, Chekuri Ashok; Jayaram Kumar, K

    2017-11-01

    Plant polysaccharides, generally regarded as safe (GRAS), are gaining importance as excipients in drug delivery. Therefore, the current paper presents structural, functional and drug release studies of a water-soluble polysaccharide (ALPS) from seeds of Albizia lebbeck L. High swelling, water holding capacity, foam stability and low moisture content suggest its use as an additive in food preparations. The apparent molecular weight of the polysaccharide was found to be 1.98×10² kDa. Monosaccharide composition analysis indicated that ALPS consists of mannose (4.06%), rhamnose (22.79%), glucose (38.9%), galactose (17.84%) and xylose (16.42%). Micromeritic properties revealed that the polysaccharide possesses potential for pharmaceutical applications. From the surface charge analysis, ALPS was found to be a non-ionic polysaccharide. Morphological study reveals that the polysaccharide has an irregular particle shape and a rough surface. Fourier transformed infrared spectroscopy (FTIR) study confirms the carbohydrate nature of the polysaccharide. From the thermogravimetric analysis (TGA) data, the second mass loss (243-340°C) was attributed to polysaccharide degradation. The drug release profile supports the use of the polysaccharide for the preparation of pH sensitive pharmaceutical dosage forms. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Sensitivity and specificity of univariate MRI analysis of experimentally degraded cartilage

    PubMed Central

    Lin, Ping-Chang; Reiter, David A.; Spencer, Richard G.

    2010-01-01

    MRI is increasingly used to evaluate cartilage in tissue constructs, explants, and animal and patient studies. However, while mean values of MR parameters, including T1, T2, magnetization transfer rate km, apparent diffusion coefficient ADC, and the dGEMRIC-derived fixed charge density, correlate with tissue status, the ability to classify tissue according to these parameters has not been explored. Therefore, the sensitivity and specificity with which each of these parameters was able to distinguish between normal and trypsin-degraded, and between normal and collagenase-degraded, cartilage explants were determined. Initial analysis was performed using a training set to determine simple group means to which parameters obtained from a validation set were compared. T1 and ADC showed the greatest ability to discriminate between normal and degraded cartilage. Further analysis with k-means clustering, which eliminates the need for a priori identification of sample status, generally performed comparably. Use of fuzzy c-means (FCM) clustering to define centroids likewise did not result in improvement in discrimination. Finally, a FCM clustering approach in which validation samples were assigned in a probabilistic fashion to control and degraded groups was implemented, reflecting the range of tissue characteristics seen with cartilage degradation. PMID:19705467

  4. DNA melting analysis: application of the "open tube" format for detection of mutant KRAS.

    PubMed

    Botezatu, Irina V; Kondratova, Valentina N; Shelepov, Valery P; Lichtenstein, Anatoly V

    2011-12-15

    High-resolution melting (HRM) analysis is a very effective method for genotyping and mutation scanning that is usually performed just after PCR amplification (the "closed tube" format). Though simple and convenient, the closed tube format makes the HRM dependent on the PCR mix, which is generally not optimal for DNA melting analysis. Here, the "open tube" format, namely a post-PCR optimization procedure (amplicon shortening and solution chemistry modification), is proposed. As a result, mutation scanning of short amplicons becomes feasible on a standard real-time PCR instrument (not primarily designed for HRM) using SYBR Green I. This approach has allowed us to considerably enhance the sensitivity of detecting mutant KRAS using both low- and high-resolution systems (the Bio-Rad iQ5-SYBR Green I and Bio-Rad CFX96-EvaGreen, respectively). The open tube format, though more laborious than the closed tube one, can be used in situations when maximal sensitivity of the method is needed. It also permits standardization of DNA melting experiments and the introduction of instruments of a "lower level" into the range of those suitable for mutation scanning. Copyright © 2011 Elsevier Inc. All rights reserved.

  5. Measurement Consistency from Magnetic Resonance Images

    PubMed Central

    Chung, Dongjun; Chung, Moo K.; Durtschi, Reid B.; Lindell, R. Gentry; Vorperian, Houri K.

    2010-01-01

    Rationale and Objectives In quantifying medical images, length-based measurements are still obtained manually. Due to possible human error, a measurement protocol is required to guarantee the consistency of measurements. In this paper, we review various statistical techniques that can be used in determining measurement consistency. The focus is on detecting a possible measurement bias and determining the robustness of the procedures to outliers. Materials and Methods We review correlation analysis, linear regression, the Bland-Altman method, the paired t-test, and analysis of variance (ANOVA). These techniques were applied to measurements, obtained by two raters, of head and neck structures from magnetic resonance images (MRI). Results Correlation analysis and linear regression were shown to be insufficient for detecting measurement inconsistency. They are also very sensitive to outliers. The widely used Bland-Altman method is a visualization technique, so it lacks numerical quantification. The paired t-test tends to be sensitive to small measurement bias. On the other hand, ANOVA performs well even under small measurement bias. Conclusion In almost all cases, using only one method is insufficient, and it is recommended to use several methods simultaneously. In general, ANOVA performs the best. PMID:18790405

  6. [Possibility of the species identification using blood stains located on the material evidences and bone fragments with the method of solid phase enzyme immunoassay with "IgG general-EIA-BEST" kit and human immunoglobulin G].

    PubMed

    Sidorov, V L; Shvetsova, I V; Isakova, I V

    2007-01-01

    The authors give a comparative analysis of Russian and foreign forensic medical methods of species identification of blood from stains on material evidence and bone fragments. It is shown that for this purpose it is feasible to apply human immunoglobulin G (IgG) and solid phase enzyme immunoassay (EIA) with the kit "IgG general-EIA-BEST". In comparison with the methods used in Russia, this method is more sensitive and more convenient for objective registration and computer processing. The results of the experiments showed that it is possible to use the kit "IgG general-EIA-BEST" in forensic medicine for the species identification of blood from stains on material evidence and bone fragments.

  7. Application of the matching law to pitch selection in professional baseball.

    PubMed

    Cox, David J; Sosine, Jacob; Dallery, Jesse

    2017-04-01

    This study applied the generalized matching equation (GME) to pitch selection in professional baseball. The GME was fitted to the relation between pitch selection and hitter outcomes for five professional baseball pitchers during the 2014 Major League Baseball season. The GME described pitch selection well. Pitch allocation varied across different game contexts such as inning, count, and number of outs in a manner consistent with the GME. Finally, within games, bias decreased for four of the five pitchers and the sensitivity parameter increased for three of the five pitchers. The results extend the generality of the GME to multialternative natural sporting contexts, and demonstrate the influence of context on behavior in natural environments. © 2017 Society for the Experimental Analysis of Behavior.
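The fit described above can be sketched as an ordinary least-squares regression in log-ratio space, which is the standard way the generalized matching equation log(B1/B2) = a·log(r1/r2) + log(b) is estimated. The data below are synthetic, with assumed sensitivity (a = 0.8) and bias (log b = 0.2) values rather than figures from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed "true" sensitivity (a) and bias (log b) used to simulate data.
true_a, true_log_b = 0.8, 0.2
log_r = rng.uniform(-2, 2, 50)   # log outcome (reinforcement) ratios
log_B = true_a * log_r + true_log_b + rng.normal(0, 0.05, 50)  # log behaviour ratios

# Least-squares fit of the generalized matching equation:
#   log(B1/B2) = a * log(r1/r2) + log(b)
a_hat, log_b_hat = np.polyfit(log_r, log_B, 1)
# a_hat < 1 indicates undermatching; log_b_hat != 0 indicates bias
# toward one alternative independent of the outcome ratio.
```

A sensitivity parameter a and bias log b recovered per pitcher, per game context, is what allows the study to compare matching across innings, counts, and outs.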

  8. Biomass Allocation of Stoloniferous and Rhizomatous Plant in Response to Resource Availability: A Phylogenetic Meta-Analysis

    PubMed Central

    Xie, Xiu-Fang; Hu, Yu-Kun; Pan, Xu; Liu, Feng-Hong; Song, Yao-Bin; Dong, Ming

    2016-01-01

    Resource allocation to different functions is central in life-history theory. Plasticity of functional traits allows clonal plants to regulate their resource allocation to meet changing environments. In this study, biomass allocation traits of clonal plants were categorized into absolute biomass for vegetative growth vs. for reproduction, and their relative ratios, based on a data set covering 115 species drawn from 139 published studies. We examined the general pattern of biomass allocation of clonal plants in response to resource availability (e.g., light, nutrients, and water) using phylogenetic meta-analysis. We also tested whether the pattern differed among clonal organ types (stolon vs. rhizome). Overall, we found that stoloniferous plants were more sensitive to light intensity than rhizomatous plants, preferentially allocating biomass to vegetative growth, aboveground parts and clonal reproduction under shaded conditions. Under nutrient- and water-poor conditions, rhizomatous plants were constrained more by ontogeny than by resource availability, preferentially allocating biomass to belowground parts. Biomass allocation between belowground and aboveground parts of clonal plants generally supported the optimal allocation theory. No general pattern of trade-off was found between growth and reproduction, nor between sexual and clonal reproduction. Using phylogenetic meta-analysis can avoid possible confounding effects of phylogeny on the results. Our results show that the optimal allocation theory explains a general trend whereby clonal plants plastically regulate their biomass allocation to cope with changing resource availability, at least in stoloniferous and rhizomatous plants. PMID:27200071

  9. The Nature and Variability of Ensemble Sensitivity Fields that Diagnose Severe Convection

    NASA Astrophysics Data System (ADS)

    Ancell, B. C.

    2017-12-01

    Ensemble sensitivity analysis (ESA) is a statistical technique that uses information from an ensemble of forecasts to reveal relationships between chosen forecast metrics and the larger atmospheric state at various forecast times. A number of studies have employed ESA from the perspectives of dynamical interpretation, observation targeting, and ensemble subsetting toward improved probabilistic prediction of high-impact events, mostly at synoptic scales. We tested ESA using convective forecast metrics at the 2016 HWT Spring Forecast Experiment to understand the utility of convective ensemble sensitivity fields in improving forecasts of severe convection and its individual hazards. The main purpose of this evaluation was to understand the temporal coherence and general characteristics of convective sensitivity fields toward future use in improving ensemble predictability within an operational framework. The magnitude and coverage of simulated reflectivity, updraft helicity, and surface wind speed were used as response functions, and the sensitivity of these functions to winds, temperatures, geopotential heights, and dew points at different atmospheric levels and at different forecast times was evaluated on a daily basis throughout the HWT Spring Forecast Experiment. These sensitivities were calculated within the Texas Tech real-time ensemble system, which possesses 42 members that run twice daily out to a 48-h forecast time. Here we summarize both the findings regarding the nature of the sensitivity fields and the participants' evaluations reflecting their opinions of the utility of operational ESA. The future direction of ESA for operational use will also be discussed.
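The core ESA computation is a linear regression of a scalar response function J onto the ensemble state at an earlier time, dJ/dx_i ≈ cov(J, x_i)/var(x_i) at each grid point. The sketch below uses a 42-member ensemble to match the Texas Tech system mentioned above, but the state, grid, and response function are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)

n_members, n_grid = 42, 100
# Synthetic ensemble state anomalies (e.g. temperature at n_grid points).
x = rng.normal(0.0, 1.0, (n_members, n_grid))
# Synthetic response function J (e.g. areal coverage of simulated
# reflectivity), constructed to depend only on grid point 0.
J = 2.0 * x[:, 0] + rng.normal(0.0, 0.1, n_members)

# Ensemble sensitivity: dJ/dx_i ~ cov(J, x_i) / var(x_i) at each point.
x_anom = x - x.mean(axis=0)
J_anom = J - J.mean()
cov = (J_anom @ x_anom) / n_members
sens = cov / x_anom.var(axis=0)
# The sensitivity field peaks at the grid point that actually drives J;
# elsewhere it is sampling noise, which is why temporal coherence of the
# fields (as evaluated in the study) matters operationally.
```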

  10. Using Dynamic Sensitivity Analysis to Assess Testability

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey; Morell, Larry; Miller, Keith

    1990-01-01

    This paper discusses sensitivity analysis and its relationship to random black box testing. Sensitivity analysis estimates the impact that a programming fault at a particular location would have on the program's input/output behavior. Locations that are relatively "insensitive" to faults can render random black box testing unlikely to uncover programming faults. Therefore, sensitivity analysis gives new insight when interpreting random black box testing results. Although sensitivity analysis is computationally intensive, it requires no oracle and no human intervention.

  11. Anxiety and feedback processing in a gambling task: Contributions of time-frequency theta and delta.

    PubMed

    Ellis, Jessica S; Watts, Adreanna T M; Schmidt, Norman; Bernat, Edward M

    2018-05-02

    The feedback negativity (FN) event-related potential (ERP) is widely studied during gambling feedback tasks. However, research on FN and anxiety is minimal and the findings are mixed. To clarify these discrepancies, the current study (N = 238) used time-frequency analysis to disentangle overlapping contributions of delta (0-3 Hz) and theta (3-7 Hz) to feedback processing in a clinically anxious sample, with severity assessed through general worry and physiological arousal scales. Greater general worry showed enhanced delta- and theta-FN broadly across both gain and loss conditions, with theta-FN stronger for losses. Regressions indicated delta-FN maintained unique effects, accounted for theta, and explained the blunted time domain FN for general worry. Increased delta was also associated with physiological arousal, but the effects were accounted for by general worry. Broadly, anxiety-related alterations in feedback processing can be explained by an overall heightened sensitivity to feedback as represented by enhanced delta-FN in relation to the general worry facet of anxiety. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Analytical performance, agreement and user-friendliness of six point-of-care testing urine analysers for urinary tract infection in general practice

    PubMed Central

    Schot, Marjolein J C; van Delft, Sanne; Kooijman-Buiting, Antoinette M J; de Wit, Niek J; Hopstaken, Rogier M

    2015-01-01

    Objective Various point-of-care testing (POCT) urine analysers are commercially available for routine urine analysis in general practice. The present study compares analytical performance, agreement and user-friendliness of six different POCT urine analysers for diagnosing urinary tract infection in general practice. Setting All testing procedures were performed at a diagnostic centre for primary care in the Netherlands. Urine samples were collected at four general practices. Primary and secondary outcome measures Analytical performance and agreement of the POCT analysers regarding nitrite, leucocytes and erythrocytes, with the laboratory reference standard, was the primary outcome measure, and analysed by calculating sensitivity, specificity, positive and negative predictive value, and Cohen's κ coefficient for agreement. Secondary outcome measures were the user-friendliness of the POCT analysers, in addition to other characteristics of the analysers. Results The following six POCT analysers were evaluated: Uryxxon Relax (Macherey Nagel), Urisys 1100 (Roche), Clinitek Status (Siemens), Aution 11 (Menarini), Aution Micro (Menarini) and Urilyzer (Analyticon). Analytical performance was good for all analysers. Compared with laboratory reference standards, overall agreement was good, but differed per parameter and per analyser. Concerning the nitrite test, the most important test for clinical practice, all but one showed perfect agreement with the laboratory standard. For leucocytes and erythrocytes specificity was high, but sensitivity was considerably lower. Agreement for leucocytes varied between good to very good, and for the erythrocyte test between fair and good. First-time users indicated that the analysers were easy to use. They expected higher productivity and accuracy when using these analysers in daily practice. 
Conclusions The overall performance and user-friendliness of all six commercially available POCT urine analysers was sufficient to justify routine use in suspected urinary tract infections in general practice. PMID:25986635
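The agreement statistics reported above (sensitivity, specificity, predictive values, and Cohen's κ against the laboratory reference) can all be computed from a 2×2 confusion matrix. The POCT and laboratory results below are invented for illustration, not data from the study.

```python
import numpy as np

# Hypothetical POCT nitrite results vs. laboratory reference (1 = positive).
poct = np.array([1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0])
lab  = np.array([1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0])

# 2x2 confusion matrix against the laboratory reference standard.
tp = int(((poct == 1) & (lab == 1)).sum())
tn = int(((poct == 0) & (lab == 0)).sum())
fp = int(((poct == 1) & (lab == 0)).sum())
fn = int(((poct == 0) & (lab == 1)).sum())

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)   # positive predictive value
npv = tn / (tn + fn)   # negative predictive value

# Cohen's kappa: observed agreement corrected for chance agreement.
n = tp + tn + fp + fn
p_obs = (tp + tn) / n
p_chance = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n ** 2
kappa = (p_obs - p_chance) / (1 - p_chance)
```

On this toy data the confusion matrix is TP=6, FP=1, FN=1, TN=12, giving κ ≈ 0.78, which would fall in the "good" agreement band used in the study.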

  13. The Effects of Temperature and Salinity on Mg Incorporation in Planktonic Foraminifera Globigerinoides ruber (white): Results from a Global Sediment Trap Mg/Ca Database

    NASA Astrophysics Data System (ADS)

    Gray, W. R.; Weldeab, S.; Lea, D. W.

    2015-12-01

    Mg/Ca in Globigerinoides ruber is arguably the most important proxy for sea surface temperature (SST) in tropical and subtropical regions, and as such guides our understanding of past climatic change in these regions. However, the sensitivity of Mg/Ca to salinity is debated; while analysis of foraminifera grown in cultures generally indicates a sensitivity of 3 - 6% per salinity unit, core-top studies have suggested a much higher sensitivity of between 15 - 27% per salinity unit, bringing the utility of Mg/Ca as a SST proxy into dispute. Sediment traps circumvent the issues of dissolution and post-depositional calcite precipitation that hamper core-top calibration studies, whilst allowing the analysis of foraminifera that have calcified under natural conditions within a well constrained period of time. We collated previously published sediment trap/plankton tow G. ruber (white) Mg/Ca data, and generated new Mg/Ca data from a sediment trap located in the highly-saline tropical North Atlantic, close to West Africa. Calcification temperature and salinity were calculated for the time interval represented by each trap/tow sample using World Ocean Atlas 2013 data. The resulting dataset comprises >240 Mg/Ca measurements (in the size fraction 150 - 350 µm) that span a temperature range of 18 - 28 °C and a salinity range of 33.6 - 36.7 PSU. Multiple regression of the dataset reveals a temperature sensitivity of 7 ± 0.4% per °C (p < 2.2×10⁻¹⁶) and a salinity sensitivity of 4 ± 1% per salinity unit (p = 2×10⁻⁵). Application of this calibration has significant implications for both the magnitude and timing of glacial-interglacial temperature changes when variations in salinity are accounted for.
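A multiple regression of this kind, an exponential dependence of Mg/Ca on temperature and salinity fitted linearly in log space (ln(Mg/Ca) = ln B + aT + bS), can be sketched as follows. The data are synthetic, generated with the sensitivities reported above (7% per °C, 4% per salinity unit) and an assumed pre-exponential constant; they are not the trap measurements themselves.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic calcification conditions spanning the dataset's ranges.
T = rng.uniform(18.0, 28.0, 240)   # temperature, deg C
S = rng.uniform(33.6, 36.7, 240)   # salinity, PSU
# Assumed exponential model: Mg/Ca = B * exp(a*T + b*S), with a = 0.07
# (7%/deg C), b = 0.04 (4%/PSU), and a hypothetical constant B = 0.3.
ln_mgca = np.log(0.3) + 0.07 * T + 0.04 * S + rng.normal(0, 0.05, 240)

# Multiple linear regression on ln(Mg/Ca) recovers both sensitivities.
X = np.column_stack([np.ones_like(T), T, S])
coef, *_ = np.linalg.lstsq(X, ln_mgca, rcond=None)
temp_sens, sal_sens = coef[1], coef[2]   # fractional change per deg C / PSU
```

Fitting in log space is what makes the percent-per-unit sensitivities additive and separable, so the temperature and salinity effects can be disentangled in one regression.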

  14. Optimizing Complexity Measures for fMRI Data: Algorithm, Artifact, and Sensitivity

    PubMed Central

    Rubin, Denis; Fekete, Tomer; Mujica-Parodi, Lilianne R.

    2013-01-01

    Introduction Complexity in the brain has been well-documented at both neuronal and hemodynamic scales, with increasing evidence supporting its use in sensitively differentiating between mental states and disorders. However, application of complexity measures to fMRI time-series, which are short, sparse, and have low signal/noise, requires careful modality-specific optimization. Methods Here we use both simulated and real data to address two fundamental issues: choice of algorithm and degree/type of signal processing. Methods were evaluated with regard to resilience to acquisition artifacts common to fMRI as well as detection sensitivity. Detection sensitivity was quantified in terms of grey-white matter contrast and overlap with activation. We additionally investigated the variation of complexity with activation and emotional content, optimal task length, and the degree to which results scaled with scanner using the same paradigm with two 3T magnets made by different manufacturers. Methods for evaluating complexity were: power spectrum, structure function, wavelet decomposition, second derivative, rescaled range, Higuchi’s estimate of fractal dimension, aggregated variance, and detrended fluctuation analysis. To permit direct comparison across methods, all results were normalized to Hurst exponents. Results Power-spectrum, Higuchi’s fractal dimension, and generalized Hurst exponent based estimates were most successful by all criteria; the poorest-performing measures were wavelet, detrended fluctuation analysis, aggregated variance, and rescaled range. Conclusions Functional MRI data have artifacts that interact with complexity calculations in nontrivially distinct ways compared to other physiological data (such as EKG, EEG) for which these measures are typically used. 
Our results clearly demonstrate that decisions regarding choice of algorithm, signal processing, time-series length, and scanner have a significant impact on the reliability and sensitivity of complexity estimates. PMID:23700424
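The abstract above normalizes all complexity measures to Hurst exponents for comparison. As a minimal illustration of one such estimator, here is a sketch of detrended fluctuation analysis (one of the measures compared, though not among the best performers for fMRI), which recovers H ≈ 0.5 for white noise. The scale choices are illustrative, and this is a sketch rather than the authors' optimized pipeline.

```python
import numpy as np

rng = np.random.default_rng(4)

def dfa_hurst(x, scales=(8, 16, 32, 64, 128)):
    """Detrended fluctuation analysis: the slope of log F(s) vs log s
    estimates the Hurst exponent of the increment series x."""
    y = np.cumsum(x - x.mean())          # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        f2 = []
        for seg in segs:
            a, b = np.polyfit(t, seg, 1)          # linear detrend per window
            f2.append(np.mean((seg - (a * t + b)) ** 2))
        F.append(np.sqrt(np.mean(f2)))            # fluctuation at scale s
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

h_white = dfa_hurst(rng.normal(size=4096))   # uncorrelated noise: H near 0.5
```

For short, sparse, low-SNR series like fMRI, the choice among such estimators and the preprocessing applied before them is exactly what the study shows must be optimized.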

  15. Singularity-sensitive gauge-based radar rainfall adjustment methods for urban hydrological applications

    NASA Astrophysics Data System (ADS)

    Wang, L.-P.; Ochoa-Rodríguez, S.; Onof, C.; Willems, P.

    2015-09-01

    Gauge-based radar rainfall adjustment techniques have been widely used to improve the applicability of radar rainfall estimates to large-scale hydrological modelling. However, their use for urban hydrological applications is limited as they were mostly developed based upon Gaussian approximations and therefore tend to smooth off so-called "singularities" (features of a non-Gaussian field) that can be observed in the fine-scale rainfall structure. Overlooking the singularities could be critical, given that their distribution is highly consistent with that of local extreme magnitudes. This deficiency may cause large errors in the subsequent urban hydrological modelling. To address this limitation and improve the applicability of adjustment techniques at urban scales, a method is proposed herein which incorporates a local singularity analysis into existing adjustment techniques and allows the preservation of the singularity structures throughout the adjustment process. In this paper the proposed singularity analysis is incorporated into the Bayesian merging technique and the performance of the resulting singularity-sensitive method is compared with that of the original Bayesian (non-singularity-sensitive) technique and the commonly used mean field bias adjustment. This test is conducted using as case study four storm events observed in the Portobello catchment (53 km²) (Edinburgh, UK) during 2011 and for which radar estimates, dense rain gauge and sewer flow records, as well as a recently calibrated urban drainage model were available. The results suggest that, in general, the proposed singularity-sensitive method can effectively preserve the non-normality in local rainfall structure, while retaining the ability of the original adjustment techniques to generate nearly unbiased estimates. 
Moreover, the ability of the singularity-sensitive technique to preserve the non-normality in rainfall estimates often leads to better reproduction of the urban drainage system's dynamics, particularly of peak runoff flows.

  16. Medical student quality-of-life in the clerkships: a scale validation study.

    PubMed

    Brannick, Michael T; Horn, Gregory T; Schnaus, Michael J; Wahi, Monika M; Goldin, Steven B

    2015-04-01

    Many aspects of medical school are stressful for students. To empirically assess student reactions to clerkship programs, or to assess efforts to improve such programs, educators must measure the overall well-being of the students reliably and validly. The purpose of the study was to develop and validate a measure designed to achieve these goals. The authors developed a measure of quality of life for medical students by sampling (public domain) items tapping general happiness, fatigue, and anxiety. A quality-of-life scale was developed by factor analyzing responses to the items from students in two different clerkships from 2005 to 2008. Reliability was assessed using Cronbach's alpha. Validity was assessed by factor analysis, convergence with additional theoretically relevant scales, and sensitivity to change over time. The refined nine-item measure is a Likert-scaled survey of quality-of-life items comprising two domains: exhaustion and general happiness. The resulting scale demonstrated good reliability and factorial validity at two time points for each of the two samples. The quality-of-life measure also correlated with measures of depression and the amount of sleep reported during the clerkships. The quality-of-life measure appeared more sensitive to changes over time than did the depression measure. The measure is short and can be easily administered in a survey. The scale appears useful for program evaluation and more generally as an outcome variable in medical educational research.

  17. Examining the ethnoracial invariance of a bifactor model of anxiety sensitivity and the incremental validity of the physical domain-specific factor in a primary-care patient sample.

    PubMed

    Fergus, Thomas A; Kelley, Lance P; Griggs, Jackson O

    2017-10-01

    There is growing support for a bifactor conceptualization of the Anxiety Sensitivity Index-3 (ASI-3; Taylor et al., 2007), consisting of a General factor and 3 domain-specific factors (i.e., Physical, Cognitive, Social). Earlier studies supporting a bifactor model of the ASI-3 used samples that consisted of predominantly White respondents. In addition, extant research has yet to support the incremental validity of the Physical domain-specific factor while controlling for the General factor. The present study is an examination of a bifactor model of the ASI-3 and the measurement invariance of that model among an ethnoracially diverse sample of primary-care patients (N = 533). Results from multiple-group confirmatory factor analysis supported the configural and metric/scalar invariance of the bifactor model of the ASI-3 across self-identifying Black, Latino, and White respondents. The Physical domain-specific factor accounted for unique variance in an index of health anxiety beyond the General factor. These results provide support for the generalizability of a bifactor model of the ASI-3 across 3 ethnoracial groups, as well as indication of the incremental explanatory power of the Physical domain-specific factor. Study implications are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  18. On the sensitivity of mesoscale models to surface-layer parameterization constants

    NASA Astrophysics Data System (ADS)

    Garratt, J. R.; Pielke, R. A.

    1989-09-01

    The Colorado State University standard mesoscale model is used to evaluate the sensitivity of one-dimensional (1D) and two-dimensional (2D) fields to differences in surface-layer parameterization “constants”. Such differences reflect the range in the published values of the von Karman constant, Monin-Obukhov stability functions and the temperature roughness length at the surface. The sensitivity of 1D boundary-layer structure and 2D sea-breeze intensity is generally less than that reported in published comparisons of turbulence closure schemes.

  19. Probabilistic sensitivity analysis incorporating the bootstrap: an example comparing treatments for the eradication of Helicobacter pylori.

    PubMed

    Pasta, D J; Taylor, J L; Henning, J M

    1999-01-01

    Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
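    The bootstrap-based Monte Carlo procedure described above is easy to sketch with standard software. A minimal illustration, using hypothetical patient-level cost and effectiveness data rather than the H. pylori trial data: each replicate resamples patients with replacement and recomputes the incremental cost-effectiveness ratio, yielding an empirical distribution rather than a single deterministic estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical patient-level data for two eradication strategies.
cost_a, eff_a = rng.normal(1200, 300, 200), rng.normal(0.70, 0.1, 200)
cost_b, eff_b = rng.normal(900, 250, 200), rng.normal(0.62, 0.1, 200)

def bootstrap_icer(n_boot=2000):
    """Probabilistic sensitivity analysis via the bootstrap: resample patients
    with replacement, recompute the incremental cost-effectiveness ratio."""
    icers = np.empty(n_boot)
    for i in range(n_boot):
        ia = rng.integers(0, len(cost_a), len(cost_a))
        ib = rng.integers(0, len(cost_b), len(cost_b))
        d_cost = cost_a[ia].mean() - cost_b[ib].mean()
        d_eff = eff_a[ia].mean() - eff_b[ib].mean()
        icers[i] = d_cost / d_eff
    return icers

icers = bootstrap_icer()
lo, hi = np.percentile(icers, [2.5, 97.5])   # 95% bootstrap interval
```

Because the resampling draws from observed data, no theoretical distribution needs to be assumed for costs or effects, which is the advantage the authors highlight.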

  20. Surface flaw reliability analysis of ceramic components with the SCARE finite element postprocessor program

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, John P.; Nemeth, Noel N.

    1987-01-01

    The SCARE (Structural Ceramics Analysis and Reliability Evaluation) computer program on statistical fast fracture reliability analysis with quadratic elements for volume distributed imperfections is enhanced to include the use of linear finite elements and the capability of designing against concurrent surface flaw induced ceramic component failure. The SCARE code is presently coupled as a postprocessor to the MSC/NASTRAN general purpose, finite element analysis program. The improved version now includes the Weibull and Batdorf statistical failure theories for both surface and volume flaw based reliability analysis. The program uses the two-parameter Weibull fracture strength cumulative failure probability distribution model with the principle of independent action for poly-axial stress states, and Batdorf's shear-sensitive as well as shear-insensitive statistical theories. The shear-sensitive surface crack configurations include the Griffith crack and Griffith notch geometries, using the total critical coplanar strain energy release rate criterion to predict mixed-mode fracture. Weibull material parameters based on both surface and volume flaw induced fracture can also be calculated from modulus of rupture bar tests, using the least squares method with known specimen geometry and grouped fracture data. The statistical fast fracture theories for surface flaw induced failure, along with selected input and output formats and options, are summarized. An example problem to demonstrate various features of the program is included.

  1. Multiparameter flow cytometric analysis of a pH sensitive formyl peptide with application to receptor structure and processing kinetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fay, S.P.; Domalewski, M.D.; Houghton, T.G.

    1994-02-01

    Environmentally sensitive molecules have many potential cellular applications. The authors have investigated the utility of a pH-sensitive ligand for the formyl peptide receptor, CHO-Met-Leu-Phe-Phe-Lys (SNAFL)-OH (SNAFL-seminaphthofluorescein), because in previous studies protonation has been used to explain the quenching when the fluoresceinated formyl pentapeptide ligand binds to this receptor. Moreover, acidification in intracellular compartments is a general mechanism occurring in cells during processing of ligand-receptor complexes. Because the protonated form of SNAFL is excited at 488 nm with emission at 530 nm and the unprotonated form is excited at 568 nm with emission at 650 nm, the ratio of protonated and unprotonated forms can be examined by multiparameter flow cytometry. The authors found that the receptor-bound ligand is sensitive to both the extracellular and intracellular pH. There is a small increase in the pKa of the ligand upon binding to the receptor, consistent with protonation in the binding pocket. Once internalized, spectral changes in the probe consistent with acidification and ligand dissociation from the receptor are observed. 22 refs., 4 figs.

  2. Development of a Multilevel Optimization Approach to the Design of Modern Engineering Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Barthelemy, J. F. M.

    1983-01-01

    A general algorithm is proposed which carries out the design process iteratively, starting at the top of the hierarchy and proceeding downward. Each subproblem is optimized separately for fixed controls from higher level subproblems. An optimum sensitivity analysis is then performed which determines the sensitivity of the subproblem design to changes in higher level subproblem controls. The resulting sensitivity derivatives are used to construct constraints which force the controlling subproblems into choosing their own designs so as to improve the lower-level subproblem designs while satisfying their own constraints. The applicability of the proposed algorithm is demonstrated by devising a four-level hierarchy to perform the simultaneous aerodynamic and structural design of a high-performance sailplane wing for maximum cross-country speed. Finally, the concepts discussed are applied to the two-level minimum weight structural design of the sailplane wing. The numerical experiments show that discontinuities in the sensitivity derivatives may delay convergence, but that the algorithm is robust enough to overcome these discontinuities and produce low-weight feasible designs, regardless of whether the optimization is started from the feasible space or the infeasible one.

  3. The impact of task demand on visual word recognition.

    PubMed

    Yang, J; Zevin, J

    2014-07-11

    The left occipitotemporal cortex has been found sensitive to the hierarchy of increasingly complex features in visually presented words, from individual letters to bigrams and morphemes. However, whether this sensitivity is a stable property of the brain regions engaged by word recognition is still unclear. To address the issue, the current study investigated whether different task demands modify this sensitivity. Participants viewed real English words and stimuli with hierarchical word-likeness while performing a lexical decision task (i.e., to decide whether each presented stimulus is a real word) and a symbol detection task. General linear model and independent component analysis indicated strong activation in the fronto-parietal and temporal regions during the two tasks. Furthermore, the bilateral inferior frontal gyrus and insula showed significant interaction effects between task demand and stimulus type in the pseudoword condition. The occipitotemporal cortex showed strong main effects for task demand and stimulus type, but no sensitivity to the hierarchical word-likeness was found. These results suggest that different task demands on semantic, phonological and orthographic processes can influence the involvement of the relevant regions during visual word recognition. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  4. Sensitivity Analysis and Optimization of Aerodynamic Configurations with Blend Surfaces

    NASA Technical Reports Server (NTRS)

    Thomas, A. M.; Tiwari, S. N.

    1997-01-01

    A novel (geometrical) parametrization procedure using solutions to a suitably chosen fourth order partial differential equation is used to define a class of airplane configurations. Inclusive in this definition are surface grids, volume grids, and grid sensitivity. The general airplane configuration has wing, fuselage, vertical tail and horizontal tail. The design variables are incorporated into the boundary conditions, and the solution is expressed as a Fourier series. The fuselage has circular cross section, and the radius is an algebraic function of four design parameters and an independent computational variable. Volume grids are obtained through an application of the Control Point Form method. Graphical interface software is developed that dynamically changes the surface of the airplane configuration with the change in input design variable. The software is user friendly and targeted at the initial conceptual development of aerodynamic configurations. Grid sensitivity with respect to surface design parameters and aerodynamic sensitivity coefficients based on potential flow is obtained using an Automatic Differentiation precompiler software tool ADIFOR. Aerodynamic shape optimization of the complete aircraft with twenty four design variables is performed. Unstructured and structured volume grids and Euler solutions are obtained with standard software to demonstrate the feasibility of the new surface definition.

  5. Mass Spectrometry on Future Mars Landers

    NASA Technical Reports Server (NTRS)

    Brinckerhoff, W. B.; Mahaffy, P. R.

    2011-01-01

    Mass spectrometry investigations on the 2011 Mars Science Laboratory (MSL) and the 2018 ExoMars missions will address core science objectives related to the potential habitability of their landing site environments and more generally the near-surface organic inventory of Mars. The analysis of complex solid samples by mass spectrometry is a well-known approach that can provide a broad and sensitive survey of organic and inorganic compounds as well as supportive data for mineralogical analysis. The science value of such compositional information is maximized when one appreciates the particular opportunities and limitations of in situ analysis with resource-constrained instrumentation in the context of a complete science payload and applied to materials found in a particular environment. The Sample Analysis at Mars (SAM) investigation on MSL and the Mars Organic Molecule Analyzer (MOMA) investigation on ExoMars will thus benefit from and inform broad-based analog field site work linked to the Mars environments where such analysis will occur.

  6. Cost/benefit analysis of advanced materials technology candidates for the 1980's, part 2

    NASA Technical Reports Server (NTRS)

    Dennis, R. E.; Maertins, H. F.

    1980-01-01

    Cost/benefit analyses to evaluate advanced material technologies projects considered for general aviation and turboprop commuter aircraft through estimated life-cycle costs, direct operating costs, and development costs are discussed. Specifically addressed is the selection of technologies to be evaluated; development of property goals; assessment of candidate technologies on typical engines and aircraft; sensitivity analysis of the changes in property goals on performance and economics, cost, and risk analysis for each technology; and ranking of each technology by relative value. The cost/benefit analysis was applied to a domestic, nonrevenue producing, business-type jet aircraft configured with two TFE731-3 turbofan engines, and to a domestic, nonrevenue producing, business type turboprop aircraft configured with two TPE331-10 turboprop engines. In addition, a cost/benefit analysis was applied to a commercial turboprop aircraft configured with a growth version of the TPE331-10.

  7. Behavioral economics and regulatory analysis.

    PubMed

    Robinson, Lisa A; Hammitt, James K

    2011-09-01

    Behavioral economics has captured the interest of scholars and the general public by demonstrating ways in which individuals make decisions that appear irrational. While increasing attention is being focused on the implications of this research for the design of risk-reducing policies, less attention has been paid to how it affects the economic valuation of policy consequences. This article considers the latter issue, reviewing the behavioral economics literature and discussing its implications for the conduct of benefit-cost analysis, particularly in the context of environmental, health, and safety regulations. We explore three concerns: using estimates of willingness to pay or willingness to accept compensation for valuation, considering the psychological aspects of risk when valuing mortality-risk reductions, and discounting future consequences. In each case, we take the perspective that analysts should avoid making judgments about whether values are "rational" or "irrational." Instead, they should make every effort to rely on well-designed studies, using ranges, sensitivity analysis, or probabilistic modeling to reflect uncertainty. More generally, behavioral research has led some to argue for a more paternalistic approach to policy analysis. We argue instead for continued focus on describing the preferences of those affected, while working to ensure that these preferences are based on knowledge and careful reflection. © 2011 Society for Risk Analysis.

  8. Prevalence of allergic sensitization in the U.S.: Results from the National Health and Nutrition Examination Survey (NHANES) 2005–2006

    PubMed Central

    Salo, Päivi M.; Arbes, Samuel J.; Jaramillo, Renee; Calatroni, Agustin; Weir, Charles H.; Sever, Michelle L.; Hoppin, Jane A.; Rose, Kathryn M.; Liu, Andrew H.; Gergen, Peter J.; Mitchell, Herman E.; Zeldin, Darryl C.

    2014-01-01

    Background Allergic sensitization is an important risk factor for the development of atopic disease. The National Health and Nutrition Examination Survey (NHANES) 2005–2006 provides the most comprehensive information on IgE-mediated sensitization in the general US population. Objective We investigated clustering, sociodemographic and regional patterns of allergic sensitization and examined risk factors associated with IgE-mediated sensitization. Methods Data for this cross-sectional analysis were obtained from NHANES 2005–2006. Participants aged ≥1 year (N=9440) were tested for sIgEs to inhalant and food allergens; participants ≥6 years were tested for 19 sIgEs, and children aged 1–5 years for 9 sIgEs. Serum samples were analyzed using the ImmunoCAP System. Information on demographics and participant characteristics was collected by questionnaire. Results Of the study population aged 6 and older, 44.6% had detectable sIgEs, while 36.2% of children aged 1–5 years were sensitized to ≥1 allergen. Allergen-specific IgEs clustered into 7 groups that might have largely reflected biological cross-reactivity. Although sensitization to individual allergens and allergen types showed regional variation, the overall prevalence of sensitization did not differ across census regions, except in early childhood. In multivariate modeling, young age, male gender, non-Hispanic black race/ethnicity, geographic location (census region), and reported pet avoidance measures were most consistently associated with IgE-mediated sensitization. Conclusions The overall prevalence of allergic sensitization does not vary across US census regions, except in early life, although allergen-specific sensitization differs by sociodemographic and regional factors. Biological cross-reactivity may be an important, but not a sole, contributor to the clustering of allergen-specific IgEs. 
Clinical implications IgE-mediated sensitization shows clustering patterns and differs by sociodemographic and regional factors, but the overall prevalence of sensitization may not vary across US census regions. PMID:24522093

  9. The insulin-sensitivity sulphonylurea receptor variant is associated with thyrotoxic paralysis.

    PubMed

    Rolim, Ana Luiza R; Lindsey, Susan C; Kunii, Ilda S; Crispim, Felipe; Moisés, Regina Célia M S; Maciel, Rui M B; Dias-da-Silva, Magnus R

    2014-10-01

    Thyrotoxicosis is the most common cause of the acquired flaccid muscle paralysis in adults called thyrotoxic periodic paralysis (TPP) and is characterised by transient hypokalaemia and hypophosphataemia under high thyroid hormone levels that is frequently precipitated by carbohydrate load. The sulphonylurea receptor 1 (SUR1 (ABCC8)) is an essential regulatory subunit of the β-cell ATP-sensitive K(+) channel that controls insulin secretion after feeding. Additionally, the SUR1 Ala1369Ser variant appears to be associated with insulin sensitivity. We examined the ABCC8 gene at the single nucleotide level using PCR-restriction fragment length polymorphism (RFLP) analysis to determine its allelic variant frequency and calculated the frequency of the Ala1369Ser C-allele variant in a cohort of 36 Brazilian TPP patients in comparison with 32 controls presenting with thyrotoxicosis without paralysis (TWP). We verified that the frequency of the alanine 1369 C-allele was significantly higher in TPP patients than in TWP patients (61.1 vs 34.4%, odds ratio (OR)=3.42, P=0.039) and was significantly more common than the minor allele frequency observed in the general population from the 1000 Genomes database (61.1 vs 29.0%, OR=4.87, P<0.005). Additionally, the C-allele frequency was similar between TWP patients and the general population (34.4 vs 29%, OR=1.42, P=0.325). We have demonstrated that SUR1 alanine 1369 variant is associated with allelic susceptibility to TPP. We suggest that the hyperinsulinaemia that is observed in TPP may be linked to the ATP-sensitive K(+)/SUR1 alanine variant and, therefore, contribute to the major feedforward precipitating factors in the pathophysiology of TPP. © 2014 Society for Endocrinology.
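    The allele-frequency comparisons above rest on odds ratios from 2x2 tables. A generic sketch of the calculation with a Woolf-type confidence interval (the counts in the test are illustrative, not the study's genotype data):

```python
from math import exp, log, sqrt

def odds_ratio(a, b, c, d):
    """Odds ratio and 95% CI from a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    ci = (exp(log(or_) - 1.96 * se), exp(log(or_) + 1.96 * se))
    return or_, ci
```

With the published allele frequencies and sample sizes, the same arithmetic reproduces odds ratios of the magnitude reported (roughly 3-5), though the exact figures depend on the genotype counts, which the abstract does not give.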

  10. Interpersonal sensitivity and persistent attenuated psychotic symptoms in adolescence.

    PubMed

    Masillo, Alice; Brandizzi, M; Valmaggia, L R; Saba, R; Lo Cascio, N; Lindau, J F; Telesforo, L; Venturini, P; Montanaro, D; Di Pietro, D; D'Alema, M; Girardi, P; Fiori Nastro, P

    2018-03-01

    Interpersonal sensitivity denotes feelings of inner fragility in the presence of others due to the expectation of criticism or rejection. Interpersonal sensitivity was found to be related to attenuated positive psychotic symptoms during the prodromal phase of psychosis. The aims of this study were to examine whether high levels of interpersonal sensitivity at baseline are associated with the persistence of attenuated positive psychotic symptoms and general psychopathology at 18-month follow-up. A sample of 85 help-seeking individuals (mean age = 16.6, SD = 5.05) referred to an Italian early detection project completed the interpersonal sensitivity measure and the structured interview for prodromal symptoms (SIPS) at baseline and were assessed at 18-month follow-up using the SIPS. Results showed that individuals with high levels of interpersonal sensitivity at baseline reported high levels of attenuated positive psychotic symptoms (i.e., unusual thought content) and general symptoms (i.e., depression, irritability and low tolerance of daily stress) at follow-up. This study suggests that being "hypersensitive" to interpersonal interactions is a psychological feature associated with attenuated positive psychotic symptoms and general symptoms, such as depression and irritability, at 18-month follow-up. Assessing and treating inner-self fragilities may be an important step of early detection programs to avoid the persistence of subtle but very distressing long-term symptoms.

  11. Analysis of observed surface ozone in the dry season over Eastern Thailand during 1997-2012

    NASA Astrophysics Data System (ADS)

    Assareh, Nosha; Prabamroong, Thayukorn; Manomaiphiboon, Kasemsan; Theramongkol, Phunsak; Leungsakul, Sirakarn; Mitrjit, Nawarat; Rachiwong, Jintarat

    2016-09-01

    This study analyzed observed surface ozone (O3) in the dry season over a long-term period of 1997-2012 for the eastern region of Thailand and incorporated several technical tools or methods in investigating different aspects of O3. The focus was the urbanized and industrialized coastal areas recently recognized as most O3-polluted areas. It was found that O3 is intensified most in the dry-season months when meteorological conditions are favorable to O3 development. The diurnal variations of O3 and its precursors show the general patterns of urban background. From observational O3 isopleth diagrams and morning ratios of non-methane volatile organic compounds (NMVOC) and nitrogen oxides (NOx), the chemical regime of O3 formation was identified as VOC-sensitive, and the degree of VOC sensitivity tends to increase over the years, suggesting emission control on VOC to be suitable for O3 management. Both total oxidant analysis and back-trajectory modeling (together with K-means clustering) indicate the potential role of regional transport or influence in enhancing surface O3 level over the study areas. A meteorological adjustment with generalized linear modeling was performed to statistically exclude meteorological effects on the variability of O3. Local air-mass recirculation factor was included in the modeling to support the coastal application. The derived trends in O3 based on the meteorological adjustment were found to be significantly positive using a Mann-Kendall test with block bootstrapping.
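    The Mann-Kendall test used above to assess the meteorologically adjusted O3 trends is a rank-based test for monotonic trend. A minimal sketch, without the tie correction or the block-bootstrap adjustment for serial dependence that the study applied:

```python
import numpy as np
from math import erf, sqrt

def mann_kendall(x):
    """Mann-Kendall trend test: statistic S, continuity-corrected normal
    approximation Z, and a two-sided p-value. Ties are ignored here."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / sqrt(var_s) if s != 0 else 0.0
    p = 1.0 - erf(abs(z) / sqrt(2.0))   # equals 2 * (1 - Phi(|z|))
    return s, z, p
```

A significantly positive S over the adjusted annual series is what supports the "significantly positive trend" conclusion; block bootstrapping replaces the normal approximation when the residuals are autocorrelated.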

  12. Impact of radiofrequency ablation for patients with varicose veins on the budget of the German statutory health insurance system

    PubMed Central

    2013-01-01

    Objectives In contrast to other countries, surgery still represents the common invasive treatment for varicose veins in Germany. However, radiofrequency ablation, e.g. ClosureFast, is becoming increasingly popular in other countries owing to potentially better results and reduced side effects. This treatment option may incur lower follow-up costs and is a more convenient procedure for patients, which could justify its introduction into the statutory benefits catalogue. Therefore, we aim at calculating the budget impact of a general reimbursement of ClosureFast in Germany. Methods To assess the budget impact of including ClosureFast in the German statutory benefits catalogue, we developed a multi-cohort Markov model and compared the costs of a “World with ClosureFast” with a “World without ClosureFast” over a time horizon of five years. To address the uncertainty of input parameters, we conducted three different types of sensitivity analysis (one-way, scenario, probabilistic). Results In the Base Case scenario, the introduction of the ClosureFast system for the treatment of varicose veins saves about €19.1 million over a time horizon of five years in Germany. However, the results vary considerably in the sensitivity analyses owing to the limited evidence for some key input parameters. Conclusions Results of the budget impact analysis indicate that a general reimbursement of ClosureFast has the potential to be cost-saving in the German Statutory Health Insurance. PMID:23551943
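    A budget-impact comparison of this kind reduces to accumulating cohort costs under two scenarios and differencing them. A deliberately simplified cohort sketch with invented parameters (not the published German model inputs, which involve multiple yearly cohorts and Markov health states):

```python
def budget(n_patients, cost_proc, annual_followup, recurrence, years=5):
    """Total payer cost for a treated cohort: index procedures, yearly
    follow-up costs, and a fixed fraction re-treated each year."""
    total = n_patients * cost_proc                     # index procedures
    for _ in range(years):
        total += n_patients * annual_followup          # routine follow-up
        total += n_patients * recurrence * cost_proc   # re-treatments
    return total

# Hypothetical inputs: surgery-only world vs. reimbursed-ClosureFast world.
world_without = budget(100_000, cost_proc=1800, annual_followup=120, recurrence=0.05)
world_with = budget(100_000, cost_proc=2100, annual_followup=60, recurrence=0.02)
budget_impact = world_with - world_without             # negative means savings
```

In this toy parameterization a pricier index procedure is offset by lower follow-up and recurrence costs, which is the mechanism behind the reported net savings; one-way and probabilistic sensitivity analyses then vary these inputs over plausible ranges.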

  13. Bayesian generalized least squares regression with application to log Pearson type 3 regional skew estimation

    NASA Astrophysics Data System (ADS)

    Reis, D. S.; Stedinger, J. R.; Martins, E. S.

    2005-10-01

    This paper develops a Bayesian approach to analysis of a generalized least squares (GLS) regression model for regional analyses of hydrologic data. The new approach allows computation of the posterior distributions of the parameters and the model error variance using a quasi-analytic approach. Two regional skew estimation studies illustrate the value of the Bayesian GLS approach for regional statistical analysis of a shape parameter and demonstrate that regional skew models can be relatively precise with effective record lengths in excess of 60 years. With Bayesian GLS, the marginal posterior distribution of the model error variance and the corresponding mean and variance of the parameters can be computed directly, thereby providing a simple but important extension of the regional GLS regression procedures popularized by Tasker and Stedinger (1989), one that remains sensitive to the likely values of the model error variance when that variance is small relative to the sampling error in the at-site estimator.
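    The GLS machinery underlying both the Tasker-Stedinger procedure and its Bayesian extension weights each site by the sum of the model error variance and the known at-site sampling variance. A sketch of the (non-Bayesian) diagonal-covariance estimator, with illustrative data; the Bayesian version places a prior on the model error variance instead of fixing it:

```python
import numpy as np

def gls_regional(y, X, sampling_var, model_var):
    """GLS estimate of regional regression coefficients when the error
    covariance is diagonal: model error variance plus at-site sampling
    variances. Returns the estimate and its covariance matrix."""
    w = 1.0 / (model_var + sampling_var)     # inverse diagonal covariance
    A = (X * w[:, None]).T @ X
    beta = np.linalg.solve(A, (X * w[:, None]).T @ y)
    return beta, np.linalg.inv(A)
```

With equal weights (zero sampling variance) this reduces to ordinary least squares; sites with long records and small sampling variance receive proportionally more weight.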

  14. The use of the sexual function questionnaire as a screening tool for women with sexual dysfunction.

    PubMed

    Quirk, Frances; Haughie, Scott; Symonds, Tara

    2005-07-01

    To determine if the validated Sexual Function Questionnaire (SFQ), developed to assess efficacy in female sexual dysfunction (FSD) clinical trials, may also have utility in identifying target populations for such studies. Data from five clinical trials and two general population surveys were used to analyze the utility of the SFQ as a tool to discriminate between the presence of specific components of FSD (i.e., hypoactive sexual desire disorder, female sexual arousal disorder, female orgasmic disorder, and dyspareunia). Sensitivity/specificity analysis and logistic regression analysis, using data from all five clinical studies and the general population surveys, confirmed that the SFQ domains have utility in detecting the presence of specific components of FSD and provide scores indicative of the presence of a specific sexual disorder. The SFQ is a valuable new tool for detecting the presence of FSD and identifying the specific components of sexual functions affected (desire, arousal, orgasm, or dyspareunia).
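    Sensitivity/specificity analysis of a screening cut-off, as applied to the SFQ domain scores, reduces to a 2x2 classification against a clinical reference. A generic sketch with made-up scores and labels (the actual SFQ cut-offs are not given in the abstract):

```python
def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity of the rule 'score >= cutoff' against
    boolean reference labels (True = disorder present)."""
    tp = sum(s >= cutoff and y for s, y in zip(scores, labels))
    fn = sum(s < cutoff and y for s, y in zip(scores, labels))
    tn = sum(s < cutoff and not y for s, y in zip(scores, labels))
    fp = sum(s >= cutoff and not y for s, y in zip(scores, labels))
    return tp / (tp + fn), tn / (tn + fp)
```

Sweeping the cutoff and tabulating both quantities is how a screening threshold is chosen to balance missed cases against false positives.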

  15. Public Reporting of MRI of the Lumbar Spine for Low Back Pain and Changes in Clinical Documentation.

    PubMed

    Flug, Jonathan A; Lind, Kimberly E

    2017-12-01

    OP-8 is the Medicare imaging efficiency metric for MRI of the lumbar spine for low back pain in the outpatient hospital. We studied trends in exclusion criteria coding over time by site of service after implementation of OP-8 to evaluate providers' responses to public reporting. We conducted a secondary data analysis using the Medicare Limited Data Set 5% sample for beneficiaries with MRI lumbar spine and lower back pain during 2009 to 2014. We evaluated the association between excluding condition prevalence and site by using generalized estimating equations regression. We produced model-based estimates of excluding condition prevalence by site and year. As a sensitivity analysis, we repeated the analysis while including additional conditions in the outcome measure. We included 285,911 MRIs of the lumbar spine for low back pain. Generalized estimating equations regression found that outpatient hospitals had a higher proportion of MRIs with at least one excluding condition documented compared with outpatient clinics (P < .05), but increases in excluding condition prevalence were similar across all sites during 2009 to 2014. Our results were not sensitive to the inclusion of additional conditions. Documentation of excluding conditions and other clinically reasonable exclusions for OP-8 increased over time for outpatient hospitals and clinics. Increases in documentation of comorbidities may not translate to actual improvement in imaging appropriateness for low back pain. When accounting for all relevant conditions, the proportion of patients with low back pain considered uncomplicated and being measured by OP-8 would be small. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  16. Retrieval of Aerosol Optical Depth Above Clouds from OMI Observations: Sensitivity Analysis, Case Studies

    NASA Technical Reports Server (NTRS)

    Torres, O.; Jethva, H.; Bhartia, P. K.

    2012-01-01

    A large fraction of the atmospheric aerosol load reaching the free troposphere is frequently located above low clouds. Most commonly observed aerosols above clouds are carbonaceous particles generally associated with biomass burning and boreal forest fires, and mineral aerosols originated in arid and semi-arid regions and transported across large distances, often above clouds. Because these aerosols absorb solar radiation, their role in the radiative transfer balance of the earth atmosphere system is especially important. The generally negative (cooling) top of the atmosphere direct effect of absorbing aerosols, may turn into warming when the light-absorbing particles are located above clouds. The actual effect depends on the aerosol load and the single scattering albedo, and on the geometric cloud fraction. In spite of its potential significance, the role of aerosols above clouds is not adequately accounted for in the assessment of aerosol radiative forcing effects due to the lack of measurements. In this paper we discuss the basis of a simple technique that uses near-UV observations to simultaneously derive the optical depth of both the aerosol layer and the underlying cloud for overcast conditions. The two-parameter retrieval method described here makes use of the UV aerosol index and reflectance measurements at 388 nm. A detailed sensitivity analysis indicates that the measured radiances depend mainly on the aerosol absorption exponent and aerosol-cloud separation. The technique was applied to above-cloud aerosol events over the Southern Atlantic Ocean yielding realistic results as indicated by indirect evaluation methods. An error analysis indicates that for typical overcast cloudy conditions and aerosol loads, the aerosol optical depth can be retrieved with an accuracy of approximately 54% whereas the cloud optical depth can be derived within 17% of the true value.

  17. A blessing and a curse? Political institutions in the growth and decay of generalized trust: a cross-national panel analysis, 1980-2009.

    PubMed

    Robbins, Blaine G

    2012-01-01

    Despite decades of research on social capital, studies that explore the relationship between political institutions and generalized trust-a key element of social capital-across time are sparse. To address this issue, we use various cross-national public-opinion data sets including the World Values Survey and employ pooled time-series OLS regression and fixed- and random-effects estimation techniques on an unbalanced panel of 74 countries and 248 observations spread over a 29-year time period. With these data and methods, we investigate the impact of five political-institutional factors-legal property rights, market regulations, labor market regulations, universality of socioeconomic provisions, and power-sharing capacity-on generalized trust. We find that generalized trust increases monotonically with the quality of property rights institutions, that labor market regulations increase generalized trust, and that power-sharing capacity of the state decreases generalized trust. While generalized trust increases as the government regulation of credit, business, and economic markets decreases and as the universality of socioeconomic provisions increases, both effects appear to be more sensitive to the countries included and the modeling techniques employed than the other political-institutional factors. In short, we find that political institutions simultaneously promote and undermine generalized trust.

  19. Screening for sepsis in general hospitalized patients: a systematic review.

    PubMed

    Alberto, L; Marshall, A P; Walker, R; Aitken, L M

    2017-08-01

    Sepsis is a condition widely observed outside critical care areas. This review examined the application of sepsis screening tools for early recognition of sepsis in general hospitalized patients in order to: (i) identify the accuracy of these tools; (ii) determine the outcomes associated with their implementation; and (iii) describe the implementation process. A systematic review method was used. PubMed, CINAHL, Cochrane, Scopus, Web of Science, and Embase databases were systematically searched for primary articles, published from January 1990 to June 2016, that investigated screening tools or alert mechanisms for early identification of sepsis in adult general hospitalized patients. The review protocol was registered with PROSPERO (CRD42016042261). More than 8000 citations were screened for eligibility after duplicates had been removed. Six articles met the inclusion criteria, testing two types of sepsis screening tools. Electronic tools can capture and recognize abnormal variables and activate an alert in real time. However, the accuracy of these tools was inconsistent across studies, with only one demonstrating high specificity and sensitivity. Paper-based, nurse-led screening tools appear to be more sensitive in the identification of septic patients but were only studied in small samples and particular populations. Process-of-care measures appear to be enhanced; however, demonstrating improved outcomes is more challenging. Implementation details are rarely reported. Heterogeneity of studies prevented meta-analysis. Clinicians, researchers and health decision-makers should consider these findings and limitations when implementing screening tools, research or policy on sepsis recognition in general hospitalized patients. Copyright © 2017 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.

  20. Postmortem diagnosis and toxicological validation of illicit substance use

    PubMed Central

    Lehrmann, E; Afanador, ZR; Deep-Soboslay, A; Gallegos, G; Darwin, WD; Lowe, RH; Barnes, AJ; Huestis, MA; Cadet, JL; Herman, MM; Hyde, TM; Kleinman, JE; Freed, WJ

    2008-01-01

    The present study examines the diagnostic challenges of identifying ante-mortem illicit substance use in human postmortem cases. Substance use, assessed by clinical case history reviews, structured next-of-kin interviews, general toxicology of blood, urine, and/or brain, and scalp hair testing, identified 33 cocaine, 29 cannabis, 10 phencyclidine and 9 opioid cases. Case history identified 42% of cocaine, 76% of cannabis, 10% of phencyclidine, and 33% of opioid cases. Next-of-kin interviews identified almost twice as many cocaine and cannabis cases as Medical Examiner (ME) case histories, and were crucial in establishing a detailed lifetime substance use history. Toxicology identified 91% of cocaine, 68% of cannabis, 80% of phencyclidine, and 100% of opioid cases, with hair testing increasing detection for all drug classes. A cocaine or cannabis use history was corroborated by general toxicology with 50% and 32% sensitivity, respectively, and by hair testing with 82% and 64% sensitivity. Hair testing corroborated a positive general toxicology for cocaine and cannabis with 91% and 100% sensitivity, respectively. Case history corroborated hair toxicology with 38% sensitivity for cocaine and 79% sensitivity for cannabis, suggesting that both case history and general toxicology underestimated cocaine use. Identifying ante-mortem substance use in human postmortem cases is a key consideration in case diagnosis and in characterizing disorder-specific changes in neurobiology. The sensitivity and specificity of substance use assessments increased when ME case history was supplemented with structured next-of-kin interviews to establish a detailed lifetime substance use history, while comprehensive toxicology, and hair testing in particular, increased detection of recent illicit substance use. PMID:18201295

  1. Characterizing a New Surface-Based Shortwave Cloud Retrieval Technique, Based on Transmitted Radiance for Soil and Vegetated Surface Types

    NASA Technical Reports Server (NTRS)

    Coddington, Odele; Pilewskie, Peter; Schmidt, K. Sebastian; McBride, Patrick J.; Vukicevic, Tomislava

    2013-01-01

    This paper presents an approach using the GEneralized Nonlinear Retrieval Analysis (GENRA) tool and general inverse theory diagnostics, including the maximum likelihood solution and the Shannon information content, to investigate the performance of a new spectral technique for the retrieval of cloud optical properties from surface-based transmittance measurements. The cumulative retrieval information over broad ranges in cloud optical thickness (tau), droplet effective radius (r_e), and overhead sun angles is quantified under two conditions known to impact transmitted radiation: the variability in land surface albedo and atmospheric water vapor content. Our conclusions are: (1) the retrieved cloud properties are more sensitive to the natural variability in land surface albedo than to water vapor content; (2) the new spectral technique is more accurate (but still imprecise) than a standard approach, in particular for tau between 5 and 60 and r_e less than approximately 20 µm; and (3) the retrieved cloud properties are dependent on sun angle for clouds of tau from 5 to 10 and r_e less than 10 µm, with maximum sensitivity obtained for an overhead sun.

  2. Reliability of rapid reporting of cancers in New Hampshire.

    PubMed

    Celaya, Maria O; Riddle, Bruce L; Cherala, Sai S; Armenti, Karla R; Rees, Judy R

    2010-01-01

    The New Hampshire State Cancer Registry (NHSCR) has a 2-phase reporting system. An abbreviated, "rapid" report of cancer diagnosis or treatment is due to the central registry within 45 days of diagnosis and a more detailed, definitive report is due within 180 days. Rapid reports are used for various research studies, but researchers who contact patients are warned that the rapid reports may contain inaccuracies. This study aimed to assess the reliability of rapid cancer reports. For diagnosis years 2000-2004, we compared the rapid and definitive reports submitted to NHSCR. We calculated the sensitivity and positive predictive value of rapid reports; the reliability of key data items overall and for major sites; and the time between diagnosis and submission of the report. Rapid reports identified incident cancer cases with a sensitivity of 88.5%. The overall accuracy of key data items was high. The accuracy of primary sites identified by rapid reports was high generally but lower for ovarian and unknown primaries. A subset analysis showed that 47% of cancers were reported within 90 days of diagnosis. Rapid reports submitted to NHSCR are generally of high quality and present a useful opportunity for research investigations in New Hampshire.
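
    Operating characteristics like those reported above reduce to simple ratios over a 2x2 comparison of rapid against definitive reports. A minimal sketch, with hypothetical counts chosen only so the sensitivity matches the 88.5% quoted in the abstract (the actual NHSCR tabulations are not given in this record):

```python
def operating_characteristics(tp, fp, fn):
    """Sensitivity and positive predictive value of a screening report
    judged against a definitive reference (counts from a 2x2 table)."""
    sensitivity = tp / (tp + fn)  # fraction of definitive cases the rapid reports caught
    ppv = tp / (tp + fp)          # fraction of rapid reports confirmed by a definitive report
    return sensitivity, ppv

# Hypothetical counts for illustration only
sens, ppv = operating_characteristics(tp=885, fp=60, fn=115)
print(f"sensitivity = {sens:.1%}, PPV = {ppv:.1%}")
```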

  3. A New High-Sensitivity Solar X-ray Spectrophotometer SphinX: Early Operations and Databases

    NASA Astrophysics Data System (ADS)

    Gburek, Szymon; Sylwester, Janusz; Kowalinski, Miroslaw; Siarkowski, Marek; Bakala, Jaroslaw; Podgorski, Piotr; Trzebinski, Witold; Plocieniak, Stefan; Kordylewski, Zbigniew; Kuzin, Sergey; Farnik, Frantisek; Reale, Fabio

    The Solar Photometer in X-rays (SphinX) is an instrument operating aboard the Russian CORONAS-Photon satellite. A short description of this unique instrument will be presented and its capabilities discussed. SphinX is presently the most sensitive solar X-ray spectrophotometer measuring solar spectra in the energy range above 1 keV. A large archive of SphinX measurements has already been collected, and general access to these measurements is possible. The SphinX data repositories contain lightcurves, spectra, and photon arrival time measurements. The SphinX data cover nearly continuously the period from the satellite launch on January 30, 2009 to the end of November 2009. The present instrument status, data formats and data access methods will be shown. An overview of possible new science coming from SphinX data analysis will be discussed.

  4. Low-hazard metallography of moisture-sensitive electrochemical cells.

    PubMed

    Wesolowski, D E; Rodriguez, M A; McKenzie, B B; Papenguth, H W

    2011-08-01

    A low-hazard approach is presented to prepare metallographic cross-sections of moisture-sensitive battery components. The approach is tailored for evaluation of thermal (molten salt) batteries composed of thin pressed-powder pellets, but has general applicability to other battery electrochemistries. Solution-cast polystyrene is used to encapsulate cells before embedding in epoxy. Nonaqueous grinding and polishing are performed in an industrial dry room to increase throughput. Lapping oil is used as a lubricant throughout grinding. Hexane is used as the solvent throughout processing; occupational exposure levels are well below the limits. Light optical and scanning electron microscopy on cross-sections are used to analyse a thermal battery cell. Spatially resolved X-ray diffraction on oblique angle cut cells complement the metallographic analysis. Published 2011. This article is a US Government work and is in the public domain in the USA.

  5. Simulating visibility under reduced acuity and contrast sensitivity.

    PubMed

    Thompson, William B; Legge, Gordon E; Kersten, Daniel J; Shakespeare, Robert A; Lei, Quan

    2017-04-01

    Architects and lighting designers have difficulty designing spaces that are accessible to those with low vision, since the complex nature of most architectural spaces requires a site-specific analysis of the visibility of mobility hazards and key landmarks needed for navigation. We describe a method that can be utilized in the architectural design process for simulating the effects of reduced acuity and contrast on visibility. The key contribution is the development of a way to parameterize the simulation using standard clinical measures of acuity and contrast sensitivity. While these measures are known to be imperfect predictors of visual function, they provide a way of characterizing general levels of visual performance that is familiar to both those working in low vision and our target end-users in the architectural and lighting-design communities. We validate the simulation using a letter-recognition task.

  7. Typhoon-Induced Ground Deformation

    NASA Astrophysics Data System (ADS)

    Mouyen, M.; Canitano, A.; Chao, B. F.; Hsu, Y.-J.; Steer, P.; Longuevergne, L.; Boy, J.-P.

    2017-11-01

    Geodetic instruments now offer compelling sensitivity, making it possible to investigate how solid Earth and surface processes interact. By combining surface air pressure data, a model of nontidal sea level variations, and rainfall data, we systematically analyze the volumetric deformation of the shallow crust at seven borehole strainmeters in Taiwan induced by 31 tropical cyclones (typhoons) that made landfall on the island from 2004 to 2013. The typhoon's signature consists of a ground dilatation due to the air pressure drop, generally followed by a larger ground compression. We show that this compression phase can be mostly explained by the mass loading of rainwater that falls on the ground and concentrates in the valleys toward the strainmeter sensitivity zone. Further, our analysis shows that borehole strainmeters can help quantify the amount of rainwater accumulating and flowing over a watershed during heavy rainfall, which is a useful constraint for building hydrological models.

  8. Optic axis determination accuracy for fiber-based polarization-sensitive optical coherence tomography.

    PubMed

    Park, B Hyle; Pierce, Mark C; Cense, Barry; de Boer, Johannes F

    2005-10-01

    We present a generalized analysis of fiber-based polarization-sensitive optical coherence tomography with an emphasis on determination of sample optic axis orientation. The polarization properties of a fiber-based system can cause an overall rotation in a Poincaré sphere representation such that the plane of possible measured sample optic axes for linear birefringence and diattenuation no longer lies in the QU-plane. The optic axis orientation can be recovered as an angle on this rotated plane, subject to an offset and overall indeterminacy in sign such that only the magnitude, but not the direction, of a change in orientation can be determined. We discuss the accuracy of optic axis determination due to a fundamental limit on the accuracy with which a polarization state can be determined as a function of signal-to-noise ratio.

  9. Sensory sensitivity and symptom severity represent unique dimensions of chronic pain: a MAPP Research Network study.

    PubMed

    Schrepf, Andrew; Williams, David A; Gallop, Robert; Naliboff, Bruce; Basu, Neil; Kaplan, Chelsea; Harper, Daniel E; Landis, Richard; Clemens, J Quentin; Strachan, Eric; Griffith, James W; Afari, Niloofar; Hassett, Afton; Pontari, Michel A; Clauw, Daniel J; Harte, Steven E

    2018-05-28

    Chronic Overlapping Pain Conditions (COPCs) are characterized by aberrant central nervous system processing of pain. This 'centralized pain' phenotype has been described using a large and diverse set of symptom domains, including the spatial distribution of pain, pain intensity, fatigue, mood imbalances, cognitive dysfunction, altered somatic sensations, and hypersensitivity to external stimuli. Here we used three cohorts from the Multidisciplinary Approach to the Study of Chronic Pelvic Pain (MAPP) Research Network, including patients with Urologic Chronic Pelvic Pain Syndrome (UCPPS), a mixed pain cohort with other COPCs, and healthy individuals (total n = 1039), to explore the factor structure of symptoms of centralized pain. Using exploratory and confirmatory factor analysis, we identified two general factors in all three cohorts: one characterized by a broad increased sensitivity to internal somatic sensations and environmental stimuli and by diffuse pain, termed Generalized Sensory Sensitivity (GSS), and one characterized by constitutional symptoms of Sleep, Pain, Affect, Cognition, and Energy (SPACE). Longitudinal analyses in the UCPPS cohort found the same two-factor structure at month six and one year, suggesting that it is reproducible over time. In secondary analyses we found that GSS in particular is associated with the presence of comorbid COPCs, while SPACE shows modest associations with measures of disability and urinary symptoms. These factors may represent important and distinct continua of symptoms that, at high levels, are indicative of the centralized pain phenotype. Future research on COPCs should accommodate the measurement of each factor.

  10. Sensitivity of lod scores to changes in diagnostic status.

    PubMed Central

    Hodge, S E; Greenberg, D A

    1992-01-01

    This paper investigates effects on lod scores when one individual in a data set changes diagnostic or recombinant status. First we examine the situation in which a single offspring in a nuclear family changes status. The nuclear-family situation, in addition to being of interest in its own right, also has general theoretical importance, since nuclear families are "transparent"; that is, one can track genetic events more precisely in nuclear families than in complex pedigrees. We demonstrate that in nuclear families log10[(1-theta)/theta] gives an upper limit on the impact that a single offspring's change in status can have on the lod score at that recombination fraction (theta). These limits hold for a fully penetrant dominant condition and a fully informative marker, in either phase-known or phase-unknown matings. Moreover, log10[(1-theta-hat)/theta-hat] (where theta-hat denotes the value of theta at which Zmax occurs) gives an upper limit on the impact of a single offspring's status change on the maximum lod score (Zmax). In extended pedigrees, in contrast to nuclear families, no comparable limit can be set on the impact of a single individual on the lod score. Complex pedigrees are subject to both stabilizing and destabilizing influences, and these are described. Finally, we describe a "sensitivity analysis" in which, after all linkage analysis is completed, every informative individual in the data set is changed, one at a time, to see the effect which each separate change has on the lod scores. The procedure includes identifying "critical individuals," i.e., those who would have the greatest impact on the lod scores, should their diagnostic status in fact change. To illustrate use of the sensitivity analysis, we apply it to the large bipolar pedigree reported by Egeland et al. and Kelsoe et al. We show that the changes in lod scores observed there, on the order of 1.1-1.2 per person, are not unusual.
We recommend that investigators include a sensitivity analysis as a standard part of reporting the results of a linkage analysis. PMID:1570835
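
    The per-person bound quoted in this abstract is straightforward to evaluate numerically; a minimal sketch (note that the observed changes of 1.1-1.2 per person correspond to a recombination fraction near 0.05-0.07):

```python
import math

def max_lod_impact(theta):
    """Upper bound, per the abstract, on how much the lod score at
    recombination fraction theta can change when a single offspring in a
    fully informative nuclear family changes diagnostic status:
    log10[(1 - theta) / theta]."""
    return math.log10((1 - theta) / theta)

for theta in (0.01, 0.05, 0.07, 0.10):
    print(f"theta = {theta:.2f}: bound = {max_lod_impact(theta):.3f}")
```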

  12. Reliability and validity of the work and social adjustment scale in phobic disorders.

    PubMed

    Mataix-Cols, David; Cowley, Amy J; Hankins, Matthew; Schneider, Andreas; Bachofen, Martin; Kenwright, Mark; Gega, Lina; Cameron, Rachel; Marks, Isaac M

    2005-01-01

    The Work and Social Adjustment Scale (WSAS) is a simple, widely used 5-item measure of disability whose psychometric properties need more analysis in phobic disorders. The reliability, factor structure, validity, and sensitivity to change of the WSAS were studied in 205 phobic patients (73 with agoraphobia, 62 with social phobia, and 70 with specific phobia) who participated in various open and randomized trials of self-exposure therapy. Internal consistency of the WSAS was excellent in all phobics pooled and in agoraphobics and social phobics separately. Principal components analysis extracted a single general factor of disability. Specific phobics gave less consistent ratings across WSAS items, suggesting that some items were less relevant to their problem. Internal consistency was marginally higher for self-ratings than for clinician ratings of the WSAS. Self-ratings and clinician ratings correlated highly, though patients tended to rate themselves as more disabled than clinicians did. WSAS total scores reflected differences in phobic severity and improvement with treatment. The WSAS is a valid, reliable, and change-sensitive measure of work/social and other adjustment in phobic disorders, especially in agoraphobia and social phobia.

  13. Novel approach for the refractive index gradient measurement in microliter volumes using fiber-optic technology

    NASA Astrophysics Data System (ADS)

    Synovec, Robert E.; Renn, Curtiss N.

    1991-07-01

    The refractive index gradient (RIG) of hydrodynamically controlled profiles can be universally, yet sensitively, measured by carefully probing the radial RIG passing through a z-configuration flow cell. Fiber-optic technology is applied in order to provide a narrow, collimated probe beam (100 µm diameter) that is deflected by a RIG and measured by a position-sensitive detector. The fiber-optic construction allows one to probe very small volumes (1 µL to 3 µL) amenable to microbore liquid chromatography (µLC). The combination of µLC and RIG detection is very useful for the analysis of trace quantities (ng injected amounts) of chemical species that are generally difficult to measure, i.e., species that are not amenable to absorbance detection or related techniques. Furthermore, the RIG detector is compatible with conventional mobile-phase-gradient and thermal-gradient µLC, unlike traditional RI detectors. A description of the RIG detector coupled with µLC for the analysis of complex polymer samples is reported. Also, exploration into using the RIG detector for supercritical fluid chromatography is addressed.

  14. A Sensitivity Analysis of fMRI Balloon Model.

    PubMed

    Zayane, Chadia; Laleg-Kirati, Taous Meriem

    2015-01-01

    Functional magnetic resonance imaging (fMRI) allows the mapping of brain activation through measurements of the Blood Oxygenation Level Dependent (BOLD) contrast. Characterizing the pathway from the input stimulus to the output BOLD signal requires the selection of an adequate hemodynamic model and the satisfaction of some specific conditions while conducting the experiment and calibrating the model. This paper focuses on the identifiability of the Balloon hemodynamic model. By identifiability, we mean the ability to estimate the model parameters accurately given the input and the output measurement. Previous studies of the Balloon model have somehow added knowledge, either by choosing prior distributions for the parameters, freezing some of them, or looking for the solution as a projection on a natural basis of some vector space. In these studies, the identification was generally assessed using event-related paradigms. This paper justifies the reasons behind the need for adding knowledge and choosing certain paradigms, and completes the few existing identifiability studies through a global sensitivity analysis of the Balloon model in the case of a blocked-design experiment.

  15. An analysis of the transit times of TrES-1b

    NASA Astrophysics Data System (ADS)

    Steffen, Jason H.; Agol, Eric

    2005-11-01

    The presence of a second planet in a known, transiting-planet system will cause the time between transits to vary. These variations can be used to constrain the orbital elements and mass of the perturbing planet. We analyse the set of transit times of the TrES-1 system given in Charbonneau et al. We find no convincing evidence for a second planet in the TrES-1 system from those data. By further analysis, we constrain the mass that a perturbing planet could have as a function of the semi-major axis ratio of the two planets and the eccentricity of the perturbing planet. Near low-order, mean-motion resonances (within ~1 per cent fractional deviation), we find that a secondary planet must generally have a mass comparable to or less than the mass of the Earth, showing that these data are the first to have sensitivity to sub-Earth-mass planets. We compare the sensitivity of this technique to the mass of the perturbing planet with future, high-precision radial velocity measurements.

  16. Rapid Diagnostic Test Performance Assessed Using Latent Class Analysis for the Diagnosis of Plasmodium falciparum Placental Malaria.

    PubMed

    Liu, Yunhao; Mwapasa, Victor; Khairallah, Carole; Thwai, Kyaw L; Kalilani-Phiri, Linda; Ter Kuile, Feiko O; Meshnick, Steven R; Taylor, Steve M

    2016-10-05

    Placental malaria causes low birth weight and neonatal mortality in malaria-endemic areas. The diagnosis of placental malaria is important for program evaluation and clinical care, but is compromised by the suboptimal performance of current diagnostics. Using placental and peripheral blood specimens collected from delivering women in Malawi, we compared estimates of the operating characteristics of microscopy, rapid diagnostic test (RDT), polymerase chain reaction, and histopathology obtained using both a traditional contingency table and a latent class analysis (LCA) approach. The prevalence of placental malaria by histopathology was 13.8%; concordance between tests was generally poor. Relative to histopathology, RDT sensitivity was 79.5% in peripheral and 66.2% in placental blood; using LCA, RDT sensitivities increased to 93.7% and 80.2%, respectively. Our results, if replicated in other cohorts, indicate that RDT testing of peripheral or placental blood may be a suitable approach to detect placental malaria for surveillance programs, including areas where intermittent preventive therapy in pregnancy is not used. © The American Society of Tropical Medicine and Hygiene.
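
    The latent class idea used above can be sketched generically: a two-class model treats true infection status as unobserved and estimates prevalence plus each test's sensitivity and specificity jointly by expectation-maximization, so no single imperfect test has to serve as the gold standard. The sketch below, with simulated data and assumed operating characteristics (not the study's model or data), is a minimal Hui-Walter-style implementation:

```python
import numpy as np

def lca_two_class(y, n_iter=500):
    """Fit a two-class latent class model to J binary test results by EM.
    y: (N, J) array of 0/1 results. Returns (prevalence, sensitivities,
    specificities). Illustrative sketch only; initialization above 0.5
    assumes every test performs better than chance (fixes label orientation)."""
    n, j = y.shape
    pi = 0.5
    se = np.full(j, 0.8)  # P(test + | truly +)
    sp = np.full(j, 0.8)  # P(test - | truly -)
    for _ in range(n_iter):
        # E-step: posterior probability that each subject is truly positive
        like_pos = np.prod(se ** y * (1 - se) ** (1 - y), axis=1)
        like_neg = np.prod((1 - sp) ** y * sp ** (1 - y), axis=1)
        w = pi * like_pos / (pi * like_pos + (1 - pi) * like_neg)
        # M-step: closed-form updates of prevalence, sensitivity, specificity
        pi = w.mean()
        se = (w[:, None] * y).sum(axis=0) / w.sum()
        sp = ((1 - w)[:, None] * (1 - y)).sum(axis=0) / (1 - w).sum()
    return pi, se, sp

# Simulated cohort with assumed (hypothetical) operating characteristics
rng = np.random.default_rng(42)
truth = rng.random(5000) < 0.14          # latent disease status, ~14% prevalence
true_se = np.array([0.80, 0.66, 0.94])   # three conditionally independent tests
true_sp = np.array([0.95, 0.97, 0.99])
p_pos = np.where(truth[:, None], true_se, 1 - true_sp)
y = (rng.random((5000, 3)) < p_pos).astype(float)

pi_hat, se_hat, sp_hat = lca_two_class(y)
print(f"estimated prevalence: {pi_hat:.3f}")
print("estimated sensitivities:", np.round(se_hat, 2))
```

    With three conditionally independent binary tests the two-class model is just identified, which is why this kind of analysis needs at least three tests (or additional constraints) to estimate all parameters.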

  17. Lorentz-Symmetry Test at Planck-Scale Suppression With a Spin-Polarized 133Cs Cold Atom Clock.

    PubMed

    Pihan-Le Bars, H; Guerlin, C; Lasseri, R-D; Ebran, J-P; Bailey, Q G; Bize, S; Khan, E; Wolf, P

    2018-06-01

    We present the results of a local Lorentz invariance (LLI) test performed with the 133Cs cold atom clock FO2, hosted at SYRTE. Such a test, relating the frequency shift between 133Cs hyperfine Zeeman substates to the Lorentz-violating coefficients of the standard model extension (SME), has already been realized by Wolf et al. and led to state-of-the-art constraints on several SME proton coefficients. In this second analysis, we used an improved model, based on a second-order Lorentz transformation and a self-consistent relativistic mean field nuclear model, which enables us to extend the scope of the analysis from purely proton coefficients to both proton and neutron coefficients. We have also become sensitive to the isotropic coefficient, another SME coefficient that was not constrained by Wolf et al. The resulting limits on SME coefficients improve the present maximal sensitivities for laboratory tests by up to 13 orders of magnitude and reach the generally expected suppression scales at which signatures of Lorentz violation could appear.

  18. 40 CFR 766.18 - Method sensitivity.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    40 CFR Protection of Environment, vol. 32 (2012-07-01): ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), TOXIC SUBSTANCES CONTROL ACT, DIBENZO-PARA-DIOXINS/DIBENZOFURANS, General Provisions, § 766.18 Method sensitivity. The target level of...

  19. 40 CFR 766.18 - Method sensitivity.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    40 CFR Protection of Environment, vol. 31 (2011-07-01): ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), TOXIC SUBSTANCES CONTROL ACT, DIBENZO-PARA-DIOXINS/DIBENZOFURANS, General Provisions, § 766.18 Method sensitivity. The target level of...

  20. 40 CFR 766.18 - Method sensitivity.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    40 CFR Protection of Environment, vol. 32 (2013-07-01): ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), TOXIC SUBSTANCES CONTROL ACT, DIBENZO-PARA-DIOXINS/DIBENZOFURANS, General Provisions, § 766.18 Method sensitivity. The target level of...
