Sample records for applied sensitivity analysis

  1. Estimating Sobol Sensitivity Indices Using Correlations

    EPA Science Inventory

    Sensitivity analysis is a crucial tool in the development and evaluation of complex mathematical models. Sobol's method is a variance-based global sensitivity analysis technique that has been applied to computational models to assess the relative importance of input parameters on...
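
    This summary does not spell out the estimator, but one standard correlation-based construction is the pick-and-freeze scheme: the first-order Sobol index of an input equals the correlation between outputs of two model runs that share that input while every other input is redrawn. A minimal sketch, with an invented additive test model:

    ```python
    # Pick-and-freeze sketch: S_i = Corr(Y_A, Y_ABi), where Y_ABi comes from
    # a run sharing column i with A but redrawing all other inputs from B.
    import numpy as np

    def first_order_sobol(model, n_inputs, n_samples=10000, seed=0):
        rng = np.random.default_rng(seed)
        A = rng.uniform(size=(n_samples, n_inputs))   # base sample
        B = rng.uniform(size=(n_samples, n_inputs))   # independent resample
        yA = model(A)
        indices = []
        for i in range(n_inputs):
            ABi = B.copy()
            ABi[:, i] = A[:, i]           # freeze column i, redraw the rest
            yABi = model(ABi)
            # correlation of the paired outputs estimates S_i
            indices.append(np.corrcoef(yA, yABi)[0, 1])
        return np.array(indices)

    # toy additive model: X0 dominates
    model = lambda X: 4.0 * X[:, 0] + 1.0 * X[:, 1] + 0.25 * X[:, 2]
    print(first_order_sobol(model, 3))
    ```

    For the linear model above the exact indices are a_i^2 / sum_j a_j^2, so the printed estimates can be checked by hand.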

  2. Acceleration and sensitivity analysis of lattice kinetic Monte Carlo simulations using parallel processing and rate constant rescaling

    NASA Astrophysics Data System (ADS)

    Núñez, M.; Robie, T.; Vlachos, D. G.

    2017-10-01

    Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).

  3. To what degree does the missing-data technique influence the estimated growth in learning strategies over time? A tutorial example of sensitivity analysis for longitudinal data.

    PubMed

    Coertjens, Liesje; Donche, Vincent; De Maeyer, Sven; Vanthournout, Gert; Van Petegem, Peter

    2017-01-01

Longitudinal data are almost always burdened with missing data. However, in educational and psychological research, there is a large discrepancy between methodological suggestions and research practice. The former suggest applying sensitivity analysis in order to assess the robustness of the results under varying assumptions regarding the mechanism generating the missing data. In research practice, however, participants with missing data are usually discarded by relying on listwise deletion. To help bridge the gap between methodological recommendations and applied research in the educational and psychological domain, this study provides a tutorial example of sensitivity analysis for latent growth analysis. The example data concern students' changes in learning strategies during higher education. One cohort of students in a Belgian university college was asked to complete the Inventory of Learning Styles-Short Version in three measurement waves. A substantial number of students did not participate on each occasion. Change over time in student learning strategies was assessed using eight missing data techniques, which assume different mechanisms for missingness. The results indicated that, for some learning strategy subscales, growth estimates differed between the models. Guidelines for reporting the results of sensitivity analysis are synthesised and applied to the results from the tutorial example.

  4. Probabilistic sensitivity analysis incorporating the bootstrap: an example comparing treatments for the eradication of Helicobacter pylori.

    PubMed

    Pasta, D J; Taylor, J L; Henning, J M

    1999-01-01

    Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
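
    As a rough illustration of the approach described here, the sketch below runs a Monte Carlo loop in which each iteration bootstraps hypothetical patient-level eradication outcomes rather than sampling from an assumed theoretical distribution, and summarizes the incremental net benefit of the costlier regimen. All costs, trial results, and the willingness-to-pay value are invented.

    ```python
    # Probabilistic sensitivity analysis with the bootstrap, on a toy
    # two-regimen decision model (numbers are illustrative only).
    import numpy as np

    rng = np.random.default_rng(1)
    # hypothetical per-patient eradication outcomes (1 = eradicated)
    trial_a = np.array([1] * 78 + [0] * 22)
    trial_b = np.array([1] * 88 + [0] * 12)
    cost_a, cost_b = 120.0, 200.0        # regimen costs (invented)
    wtp = 500.0                          # willingness to pay per extra eradication

    n_iter = 5000
    inb = np.empty(n_iter)
    for k in range(n_iter):
        # bootstrap the raw trial data instead of assuming a distribution
        pa = rng.choice(trial_a, size=trial_a.size, replace=True).mean()
        pb = rng.choice(trial_b, size=trial_b.size, replace=True).mean()
        inb[k] = wtp * (pb - pa) - (cost_b - cost_a)   # incremental net benefit

    print("P(regimen B is cost-effective) =", (inb > 0).mean())
    print("95% interval for INB:", np.percentile(inb, [2.5, 97.5]))
    ```

    Working on the net-benefit scale rather than the cost-effectiveness ratio avoids dividing by bootstrap replicates in which the efficacy difference is near zero.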

  5. Application of advanced multidisciplinary analysis and optimization methods to vehicle design synthesis

    NASA Technical Reports Server (NTRS)

    Consoli, Robert David; Sobieszczanski-Sobieski, Jaroslaw

    1990-01-01

    Advanced multidisciplinary analysis and optimization methods, namely system sensitivity analysis and non-hierarchical system decomposition, are applied to reduce the cost and improve the visibility of an automated vehicle design synthesis process. This process is inherently complex due to the large number of functional disciplines and associated interdisciplinary couplings. Recent developments in system sensitivity analysis as applied to complex non-hierarchic multidisciplinary design optimization problems enable the decomposition of these complex interactions into sub-processes that can be evaluated in parallel. The application of these techniques results in significant cost, accuracy, and visibility benefits for the entire design synthesis process.

  6. Adjoint Sensitivity Analysis of Radiative Transfer Equation: Temperature and Gas Mixing Ratio Weighting Functions for Remote Sensing of Scattering Atmospheres in Thermal IR

    NASA Technical Reports Server (NTRS)

    Ustinov, E.

    1999-01-01

Sensitivity analysis based on the adjoint equation of radiative transfer is applied to atmospheric remote sensing in the thermal spectral region with non-negligible atmospheric scattering.

  7. The application of sensitivity analysis to models of large scale physiological systems

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1974-01-01

A survey of the literature on sensitivity analysis as it applies to biological systems is reported, along with a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, in identifying relative parameter influence, and in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method for reducing highly complex, nonlinear models to simple linear algebraic models that could be useful for making rapid, first-order calculations of system behavior is presented.
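
    A minimal sketch of the kind of parameter sensitivity analysis surveyed here, applied to a simple logistic population model: normalized (logarithmic) sensitivity coefficients computed by central finite differences. The model, parameter values, and horizon are illustrative.

    ```python
    # Normalized sensitivity (dY/Y)/(dp/p) of final population size with
    # respect to each parameter of a logistic growth model.
    import numpy as np
    from scipy.integrate import solve_ivp

    def population_at(t_end, r, K, n0=10.0):
        sol = solve_ivp(lambda t, n: r * n * (1 - n / K),
                        (0.0, t_end), [n0], rtol=1e-8)
        return sol.y[0, -1]

    def normalized_sensitivity(param_name, base, t_end=5.0, h=1e-3):
        p = dict(base)
        p[param_name] = base[param_name] * (1 + h)
        up = population_at(t_end, **p)
        p[param_name] = base[param_name] * (1 - h)
        down = population_at(t_end, **p)
        y0 = population_at(t_end, **base)
        return (up - down) / (2 * h * y0)   # (dY/Y) / (dp/p)

    base = {"r": 0.4, "K": 500.0}
    for name in base:
        print(name, normalized_sensitivity(name, base))
    ```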

  8. Predictive Uncertainty And Parameter Sensitivity Of A Sediment-Flux Model: Nitrogen Flux and Sediment Oxygen Demand

    EPA Science Inventory

    Estimating model predictive uncertainty is imperative to informed environmental decision making and management of water resources. This paper applies the Generalized Sensitivity Analysis (GSA) to examine parameter sensitivity and the Generalized Likelihood Uncertainty Estimation...

  9. Polarization sensitive spectroscopic optical coherence tomography for multimodal imaging

    NASA Astrophysics Data System (ADS)

    Strąkowski, Marcin R.; Kraszewski, Maciej; Strąkowska, Paulina; Trojanowski, Michał

    2015-03-01

Optical coherence tomography (OCT) is a non-invasive method for 3D and cross-sectional imaging of biological and non-biological objects. OCT measurements are performed in a non-contact way that is completely safe for the tested sample. Nowadays, OCT is widely applied in medical diagnosis, especially in ophthalmology, as well as in dermatology, oncology and other fields. Despite great progress in OCT measurements, a number of issues, such as tissue recognition and imaging contrast enhancement, remain unsolved. Here we present the polarization sensitive spectroscopic OCT system (PS-SOCT). The PS-SOCT combines polarization sensitive analysis with time-frequency analysis. Unlike standard polarization sensitive OCT, the PS-SOCT delivers spectral information about the measured quantities, e.g. changes in the tested object's birefringence over the light spectrum. This solution overcomes the limits of the polarization sensitive analysis applied in standard PS-OCT. Based on spectral data obtained from PS-SOCT, the exact value of birefringence can be calculated even for objects that exhibit higher orders of retardation. In this contribution the benefits of combining time-frequency and polarization sensitive analysis are presented, together with the features of the PS-SOCT system and examples of OCT measurements.

  10. Application of neural networks and sensitivity analysis to improved prediction of trauma survival.

    PubMed

    Hunter, A; Kennedy, L; Henry, J; Ferguson, I

    2000-05-01

    The performance of trauma departments is widely audited by applying predictive models that assess probability of survival, and examining the rate of unexpected survivals and deaths. Although the TRISS methodology, a logistic regression modelling technique, is still the de facto standard, it is known that neural network models perform better. A key issue when applying neural network models is the selection of input variables. This paper proposes a novel form of sensitivity analysis, which is simpler to apply than existing techniques, and can be used for both numeric and nominal input variables. The technique is applied to the audit survival problem, and used to analyse the TRISS variables. The conclusions discuss the implications for the design of further improved scoring schemes and predictive models.
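
    The paper's own technique is not reproduced in this summary; as a stand-in, the sketch below shows the related and widely used permutation approach: shuffle one input column and measure the drop in predictive accuracy. The data, the frozen "model", and the variable names are all synthetic.

    ```python
    # Permutation-based input sensitivity for a fixed classifier.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 2000
    X = rng.normal(size=(n, 3))                  # e.g. age, blood pressure, noise
    logit = 1.5 * X[:, 0] - 0.8 * X[:, 1]        # third column is pure noise
    y = (logit + rng.logistic(size=n) > 0).astype(int)

    # a fixed "fitted model" standing in for the trained network
    predict = lambda X: (1.5 * X[:, 0] - 0.8 * X[:, 1] > 0).astype(int)
    base_acc = (predict(X) == y).mean()

    for i, name in enumerate(["age", "bp", "noise"]):
        Xp = X.copy()
        Xp[:, i] = rng.permutation(Xp[:, i])     # break the column's association
        drop = base_acc - (predict(Xp) == y).mean()
        print(f"{name}: accuracy drop {drop:.3f}")
    ```

    Unlike gradient-based measures, this treatment works unchanged for nominal inputs, which is one of the properties the record highlights for its own method.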

  11. Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria

    NASA Astrophysics Data System (ADS)

    Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong

    2017-08-01

In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated using the perturbation method, the response surface method, the Edgeworth series and a sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into a reliability-based design optimization (RBDO) methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability analysis in finite element modeling engineering practice.

  12. A Multi-Objective Decision-Making Model for Resources Allocation in Humanitarian Relief

    DTIC Science & Technology

    2007-03-01

Applied Mathematics and Computation 163, 2005, pp. 756 ... 19. Malczewski, J., GIS and Multicriteria Decision Analysis, John Wiley and Sons, New York ... used when interpreting the results of the analysis (Raimo et al. 2002) ... (7) Sensitivity analysis: Sensitivity analysis in a DA process answers ... Budget Scenario Analysis: The MILP is solved (using LINDO 6.1) for high, medium and low budget scenarios in both damage degree levels. Tables 17 and

  13. Linked Sensitivity Analysis, Calibration, and Uncertainty Analysis Using a System Dynamics Model for Stroke Comparative Effectiveness Research.

    PubMed

    Tian, Yuan; Hassmiller Lich, Kristen; Osgood, Nathaniel D; Eom, Kirsten; Matchar, David B

    2016-11-01

    As health services researchers and decision makers tackle more difficult problems using simulation models, the number of parameters and the corresponding degree of uncertainty have increased. This often results in reduced confidence in such complex models to guide decision making. To demonstrate a systematic approach of linked sensitivity analysis, calibration, and uncertainty analysis to improve confidence in complex models. Four techniques were integrated and applied to a System Dynamics stroke model of US veterans, which was developed to inform systemwide intervention and research planning: Morris method (sensitivity analysis), multistart Powell hill-climbing algorithm and generalized likelihood uncertainty estimation (calibration), and Monte Carlo simulation (uncertainty analysis). Of 60 uncertain parameters, sensitivity analysis identified 29 needing calibration, 7 that did not need calibration but significantly influenced key stroke outcomes, and 24 not influential to calibration or stroke outcomes that were fixed at their best guess values. One thousand alternative well-calibrated baselines were obtained to reflect calibration uncertainty and brought into uncertainty analysis. The initial stroke incidence rate among veterans was identified as the most influential uncertain parameter, for which further data should be collected. That said, accounting for current uncertainty, the analysis of 15 distinct prevention and treatment interventions provided a robust conclusion that hypertension control for all veterans would yield the largest gain in quality-adjusted life years. For complex health care models, a mixed approach was applied to examine the uncertainty surrounding key stroke outcomes and the robustness of conclusions. We demonstrate that this rigorous approach can be practical and advocate for such analysis to promote understanding of the limits of certainty in applying models to current decisions and to guide future data collection. © The Author(s) 2016.
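
    A compact sketch of the Morris screening step named in the abstract, under the usual simplifications: random one-at-a-time trajectories yield elementary effects, whose mean absolute value ranks influence and whose standard deviation flags nonlinearity or interactions. The three-parameter test model is invented.

    ```python
    # Morris elementary-effects screening (simplified trajectory version).
    import numpy as np

    def morris(model, k, r=50, delta=0.25, seed=3):
        rng = np.random.default_rng(seed)
        effects = np.zeros((r, k))
        for t in range(r):
            x = rng.uniform(0, 1 - delta, size=k)
            y = model(x)
            for i in rng.permutation(k):         # one-at-a-time moves
                x_new = x.copy()
                x_new[i] += delta
                y_new = model(x_new)
                effects[t, i] = (y_new - y) / delta
                x, y = x_new, y_new
        # mu* ranks overall influence; sigma flags interaction/nonlinearity
        return np.abs(effects).mean(axis=0), effects.std(axis=0)

    model = lambda x: x[0] + 2 * x[1] ** 2 + 0.1 * x[2]
    mu_star, sigma = morris(model, k=3)
    print("mu*:", mu_star, "sigma:", sigma)
    ```

    Screening of this kind is what lets a study like this one sort 60 uncertain parameters into those worth calibrating and those safely fixed at point values.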

  14. A Quad-Cantilevered Plate micro-sensor for intracranial pressure measurement.

    PubMed

    Lalkov, Vasko; Qasaimeh, Mohammad A

    2017-07-01

    This paper proposes a new design for pressure-sensing micro-plate platform to bring higher sensitivity to a pressure sensor based on piezoresistive MEMS sensing mechanism. The proposed design is composed of a suspended plate having four stepped cantilever beams connected to its corners, and thus defined as Quad-Cantilevered Plate (QCP). Finite element analysis was performed to determine the optimal design for sensitivity and structural stability under a range of applied forces. Furthermore, a piezoresistive analysis was performed to calculate sensor sensitivity. Both the maximum stress and the change in resistance of the piezoresistor associated with the QCP were found to be higher compared to previously published designs, and linearly related to the applied pressure as desired. Therefore, the QCP demonstrates greater sensitivity, and could be potentially used as an efficient pressure sensor for intracranial pressure measurement.

  15. Failure Bounding And Sensitivity Analysis Applied To Monte Carlo Entry, Descent, And Landing Simulations

    NASA Technical Reports Server (NTRS)

    Gaebler, John A.; Tolson, Robert H.

    2010-01-01

In the study of entry, descent, and landing, Monte Carlo sampling methods are often employed to study the uncertainty in the designed trajectory. The large number of uncertain inputs and outputs, coupled with complicated non-linear models, can make interpretation of the results difficult. Three methods that provide statistical insights are applied to an entry, descent, and landing simulation. The advantages and disadvantages of each method are discussed in terms of the insights gained versus the computational cost. The first method investigated was failure domain bounding, which aims to reduce the computational cost of assessing the failure probability. Next, a variance-based sensitivity analysis was studied for its ability to identify which input variable's uncertainty has the greatest impact on the uncertainty of an output. Finally, probabilistic sensitivity analysis is used to calculate certain sensitivities at a reduced computational cost. These methods produce valuable information that identifies critical mission parameters and needs for new technology, but generally at a significant computational cost.

  16. Modelling ecological and human exposure to POPs in Venice lagoon - Part II: Quantitative uncertainty and sensitivity analysis in coupled exposure models.

    PubMed

    Radomyski, Artur; Giubilato, Elisa; Ciffroy, Philippe; Critto, Andrea; Brochot, Céline; Marcomini, Antonio

    2016-11-01

The study is focused on applying uncertainty and sensitivity analysis to support the application and evaluation of large exposure models where a significant number of parameters and complex exposure scenarios might be involved. The recently developed MERLIN-Expo exposure modelling tool was applied to probabilistically assess ecological and human exposure to PCB 126 and 2,3,7,8-TCDD in the Venice lagoon (Italy). The 'Phytoplankton', 'Aquatic Invertebrate', 'Fish', 'Human intake' and PBPK models available in the MERLIN-Expo library were integrated to create a specific food web to dynamically simulate bioaccumulation in various aquatic species and in the human body over individual lifetimes from 1932 until 1998. MERLIN-Expo is a high-tier exposure modelling tool allowing propagation of uncertainty onto the model predictions through Monte Carlo simulation. Uncertainty in model output can be further apportioned between parameters by applying built-in sensitivity analysis tools. In this study, uncertainty has been extensively addressed in the distribution functions describing the input data, and its effect on model results has been examined by applying sensitivity analysis techniques (the screening Morris method, regression analysis, and the variance-based method EFAST). In the exposure scenario developed for the Lagoon of Venice, the concentrations of 2,3,7,8-TCDD and PCB 126 in human blood turned out to be mainly influenced by a combination of parameters (half-lives of the chemicals, body weight variability, lipid fraction, food assimilation efficiency), physiological processes (uptake/elimination rates), environmental exposure concentrations (sediment, water, food) and eating behaviours (amount of food eaten). In conclusion, this case study demonstrated the feasibility of successfully employing MERLIN-Expo in integrated, high-tier exposure assessment. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Thermodynamic modeling of transcription: sensitivity analysis differentiates biological mechanism from mathematical model-induced effects.

    PubMed

    Dresch, Jacqueline M; Liu, Xiaozhou; Arnosti, David N; Ay, Ahmet

    2010-10-24

Quantitative models of gene expression generate parameter values that can shed light on biological features such as transcription factor activity, cooperativity, and local effects of repressors. An important element in such investigations is sensitivity analysis, which determines how strongly a model's output reacts to variations in parameter values. Parameters of low sensitivity may not be accurately estimated, leading to unwarranted conclusions. Low sensitivity may reflect the nature of the biological data, or it may be a result of the model structure. Here, we focus on the analysis of thermodynamic models, which have been used extensively to analyze gene transcription. Extracted parameter values have been interpreted biologically, but until now little attention has been given to parameter sensitivity in this context. We apply local and global sensitivity analyses to two recent transcriptional models to determine the sensitivity of individual parameters. We show that in one case, values for repressor efficiencies are very sensitive, while values for protein cooperativities are not, and provide insights on why these differential sensitivities stem from both biological effects and the structure of the applied models. In a second case, we demonstrate that parameters that were thought to prove the system's dependence on activator-activator cooperativity are relatively insensitive. We show that there are numerous parameter sets that do not satisfy the relationships proffered as the optimal solutions, indicating that structural differences between the two types of transcriptional enhancers analyzed may not be as simple as altered activator cooperativity. Our results emphasize the need for sensitivity analysis to examine model construction and forms of biological data used for modeling transcriptional processes, in order to determine the significance of estimated parameter values for thermodynamic models. Knowledge of parameter sensitivities can provide the necessary context to determine how modeling results should be interpreted in biological systems.

  18. System Sensitivity Analysis Applied to the Conceptual Design of a Dual-Fuel Rocket SSTO

    NASA Technical Reports Server (NTRS)

    Olds, John R.

    1994-01-01

    This paper reports the results of initial efforts to apply the System Sensitivity Analysis (SSA) optimization method to the conceptual design of a single-stage-to-orbit (SSTO) launch vehicle. SSA is an efficient, calculus-based MDO technique for generating sensitivity derivatives in a highly multidisciplinary design environment. The method has been successfully applied to conceptual aircraft design and has been proven to have advantages over traditional direct optimization methods. The method is applied to the optimization of an advanced, piloted SSTO design similar to vehicles currently being analyzed by NASA as possible replacements for the Space Shuttle. Powered by a derivative of the Russian RD-701 rocket engine, the vehicle employs a combination of hydrocarbon, hydrogen, and oxygen propellants. Three primary disciplines are included in the design - propulsion, performance, and weights & sizing. A complete, converged vehicle analysis depends on the use of three standalone conceptual analysis computer codes. Efforts to minimize vehicle dry (empty) weight are reported in this paper. The problem consists of six system-level design variables and one system-level constraint. Using SSA in a 'manual' fashion to generate gradient information, six system-level iterations were performed from each of two different starting points. The results showed a good pattern of convergence for both starting points. A discussion of the advantages and disadvantages of the method, possible areas of improvement, and future work is included.

  19. Sensitivity analysis of dynamic biological systems with time-delays.

    PubMed

    Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang

    2010-10-15

Mathematical modeling has long been applied to the study and analysis of complex biological systems. Some processes in biological systems, such as gene expression and feedback control in signal transduction networks, involve a time delay. These systems are represented as delay differential equation (DDE) models. Numerical sensitivity analysis of a DDE model by the direct method requires the solutions of the model and sensitivity equations with time-delays. The major effort is the computation of the Jacobian matrix when computing the solution of the sensitivity equations. The computation of partial derivatives of complex equations, either by the analytic method or by symbolic manipulation, is time consuming, inconvenient, and prone to human error. To address this problem, an automatic approach to obtain the derivatives of complex functions efficiently and accurately is necessary. We have proposed an efficient algorithm with adaptive step size control to compute the solution and dynamic sensitivities of biological systems described by ordinary differential equations (ODEs). The adaptive direct-decoupled algorithm is extended here to compute the solution and dynamic sensitivities of time-delay systems described by DDEs. To save human effort and avoid human error in the computation of partial derivatives, an automatic differentiation technique is embedded in the extended algorithm to evaluate the Jacobian matrix. The extended algorithm is implemented and applied to two realistic models with time-delays: the cardiovascular control system and the TNF-α signal transduction network. The results show that the extended algorithm is a good tool for dynamic sensitivity analysis of DDE models with less user intervention. Compared with direct-coupled methods in theory, the extended algorithm is efficient, accurate, and easy to use for end users without a programming background to do dynamic sensitivity analysis on complex biological systems with time-delays.

  20. Error analysis applied to several inversion techniques used for the retrieval of middle atmospheric constituents from limb-scanning MM-wave spectroscopic measurements

    NASA Technical Reports Server (NTRS)

    Puliafito, E.; Bevilacqua, R.; Olivero, J.; Degenhardt, W.

    1992-01-01

The formal retrieval error analysis of Rodgers (1990) allows the quantitative determination of such retrieval properties as measurement error sensitivity, resolution, and inversion bias. This technique was applied to five numerical inversion techniques and two nonlinear iterative techniques used for the retrieval of middle atmospheric constituent concentrations from limb-scanning millimeter-wave spectroscopic measurements. It is found that the iterative methods have better vertical resolution but are slightly more sensitive to measurement error than constrained matrix methods. The iterative methods converge to the exact solution, whereas two of the matrix methods under consideration apply an explicit constraint, which introduces sensitivity of the solution to the a priori profile. Tradeoffs of these retrieval characteristics are presented.

  1. First- and Second-Order Sensitivity Analysis of a P-Version Finite Element Equation Via Automatic Differentiation

    NASA Technical Reports Server (NTRS)

    Hou, Gene

    1998-01-01

Sensitivity analysis is a technique for determining derivatives of system responses with respect to design parameters. Among the many methods available for sensitivity analysis, automatic differentiation has been proven through many applications in fluid dynamics and structural mechanics to be an accurate and easy method for obtaining derivatives. Nevertheless, the method can be computationally expensive and can require substantial memory. This project applies an automatic differentiation tool, ADIFOR, to a p-version finite element code to obtain first- and second-order thermal derivatives. The focus of the study is on the implementation process and the performance of the ADIFOR-enhanced codes for sensitivity analysis in terms of memory requirement, computational efficiency, and accuracy.
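
    ADIFOR itself transforms Fortran source and is not reproduced here; as a language-neutral sketch of the underlying idea, this is forward-mode automatic differentiation with dual numbers, where every value carries its derivative, so derivatives come out exact rather than finite-differenced.

    ```python
    # Forward-mode automatic differentiation via dual numbers.
    import math

    class Dual:
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot       # value and derivative
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.dot + o.dot)
        __radd__ = __add__
        def __mul__(self, o):                   # product rule
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val,
                        self.dot * o.val + self.val * o.dot)
        __rmul__ = __mul__

    def sin(x):                                 # chain rule for sin
        return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

    # f(x) = x*sin(x) + 3x; seed dx/dx = 1 to get df/dx at x = 2.0
    x = Dual(2.0, 1.0)
    f = x * sin(x) + 3 * x
    print(f.val, f.dot)   # value and exact first derivative
    ```

    Tools like ADIFOR apply the same propagation rules to whole source files, which is why the record's focus is on implementation effort, memory, and speed rather than on the mathematics.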

  2. Multidisciplinary design optimization using multiobjective formulation techniques

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Pagaldipti, Narayanan S.

    1995-01-01

This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. The accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis technique is then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high-speed aircraft. Results obtained show significant improvements in the aircraft's aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.

  3. Rapid, sensitive and direct analysis of exopolysaccharides from biofilm on aluminum surfaces exposed to sea water using MALDI-TOF MS.

    PubMed

    Hasan, Nazim; Gopal, Judy; Wu, Hui-Fen

    2011-11-01

Biofilm studies have extensive significance since their results can provide insights into the behavior of bacteria on material surfaces when exposed to natural water. This is the first attempt to use matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) for detecting the polysaccharides formed in a complex biofilm consisting of a mixed consortium of marine microbes. MALDI-MS has been applied to directly analyze exopolysaccharides (EPS) in the biofilm formed on aluminum surfaces exposed to seawater. The optimal conditions for MALDI-MS applied to EPS analysis of biofilm are described. In addition, microbiologically influenced corrosion of aluminum exposed to sea water by a marine fungus was also observed, and the fungus's identity was established using MALDI-MS analysis of EPS. Rapid, sensitive and direct MALDI-MS analysis of biofilm would dramatically speed up biofilm studies and provide new insights into them, owing to its advantages of simplicity, high sensitivity, high selectivity and high speed. This study introduces a novel, fast, sensitive and selective platform for biofilm study from natural water without the need for tedious culturing steps or complicated sample pretreatment procedures. Copyright © 2011 John Wiley & Sons, Ltd.

  4. Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil

    NASA Technical Reports Server (NTRS)

    Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris

    2016-01-01

    Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.

  5. Probing 6D operators at future e - e + colliders

    NASA Astrophysics Data System (ADS)

    Chiu, Wen Han; Leung, Sze Ching; Liu, Tao; Lyu, Kun-Feng; Wang, Lian-Tao

    2018-05-01

We explore the sensitivities at future e - e + colliders to probe a set of dimension-six operators which can modify the SM predictions on Higgs physics and electroweak precision measurements. We consider the case in which the operators are turned on simultaneously. Such an analysis yields a "conservative" interpretation of the collider sensitivities, complementary to the "optimistic" scenario where the operators are probed individually. After a detailed analysis at CEPC in both the "conservative" and "optimistic" scenarios, we also consider the sensitivities of FCC-ee and ILC. As an illustration of the potential for constraining new physics models, we apply the sensitivity analysis to two benchmarks: the holographic composite Higgs model and the littlest Higgs model.

  6. Simulation-based sensitivity analysis for non-ignorably missing data.

    PubMed

    Yin, Peng; Shi, Jian Q

    2017-01-01

Sensitivity analysis is popular in dealing with missing data problems, particularly for non-ignorable missingness, where the full-likelihood method cannot be adopted. It analyses how sensitively the conclusions (output) may depend on assumptions or parameters (input) about the missing data, i.e. the missing data mechanism. We call models subject to this kind of uncertainty sensitivity models. To make conventional sensitivity analysis more useful in practice, we need to define some simple and interpretable statistical quantities to assess the sensitivity models and enable evidence-based analysis. We propose a novel approach in this paper that investigates the possibility of each missing data mechanism model assumption by comparing the simulated datasets from various MNAR models with the observed data non-parametrically, using K-nearest-neighbour distances. Some asymptotic theory is also provided. A key step of this method is to plug in a plausibility evaluation system for each sensitivity parameter, to select plausible values and reject unlikely values, instead of considering all proposed values of sensitivity parameters as in the conventional sensitivity analysis method. The method is generic and has been applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, analysis of incomplete longitudinal data and mean estimation with non-ignorable missing data.
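
    A loose sketch of the paper's idea under invented specifics: simulate datasets from candidate missing-not-at-random mechanisms over a grid of sensitivity parameters and score each against the observed sample with a nearest-neighbour distance, favouring mechanisms that reproduce the data. The generating model, the logistic missingness form, and the parameter grid are all assumptions for illustration.

    ```python
    # Compare simulated MNAR datasets to observed data via NN distances.
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(4)
    full = rng.normal(loc=1.0, size=1500)
    # "observed" data: larger values more likely to be missing (true delta = 1)
    observed = full[rng.uniform(size=full.size) > 1 / (1 + np.exp(-(full - 1.0)))]

    def simulate(delta, n=1500):
        x = rng.normal(loc=1.0, size=n)
        keep = rng.uniform(size=n) > 1 / (1 + np.exp(-delta * (x - 1.0)))
        return x[keep]

    def nn_distance(sim, obs):
        # mean distance from each observed point to its nearest simulated point
        tree = cKDTree(sim[:, None])
        return tree.query(obs[:, None])[0].mean()

    for delta in [0.0, 0.5, 1.0, 2.0]:   # candidate sensitivity parameters
        print(delta, nn_distance(simulate(delta), observed))
    ```

    In the paper's terms, candidate values of the sensitivity parameter with small distances would be retained as plausible and the rest rejected, rather than carrying every proposed value through the analysis.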

  7. Analysis of Sensitivity Experiments - An Expanded Primer

    DTIC Science & Technology

    2017-03-08

diehard practitioners. The difficulty associated with mastering statistical inference presents a true dilemma. Statistics is an extremely applied ... lost, perhaps forever. In other words, when on this safari, you need a guide. This report is designed to be a guide, of sorts. It focuses on analytical ... estimated accurately if our analysis is to have real meaning. For this reason, the sensitivity test procedure is designed to concentrate measurements

  8. Development of a sensitivity analysis technique for multiloop flight control systems

    NASA Technical Reports Server (NTRS)

    Vaillard, A. H.; Paduano, J.; Downing, D. R.

    1985-01-01

    This report presents the development and application of a sensitivity analysis technique for multiloop flight control systems. This analysis yields very useful information on the sensitivity of the relative-stability criteria of the control system, with variations or uncertainties in the system and controller elements. The sensitivity analysis technique developed is based on the computation of the singular values and singular-value gradients of a feedback-control system. The method is applicable to single-input/single-output as well as multiloop continuous-control systems. Application to sampled-data systems is also explored. The sensitivity analysis technique was applied to a continuous yaw/roll damper stability augmentation system of a typical business jet, and the results show that the analysis is very useful in determining the system elements which have the largest effect on the relative stability of the closed-loop system. As a secondary product of the research reported here, the relative stability criteria based on the concept of singular values were explored.

9. Discrete Adjoint Sensitivity Analysis of Hybrid Dynamical Systems With Switching

    DOE PAGES

    Zhang, Hong; Abhyankar, Shrirang; Constantinescu, Emil; ...

    2017-01-24

Sensitivity analysis is an important tool for describing power system dynamic behavior in response to parameter variations. It is a central component in preventive and corrective control applications. The existing approaches for sensitivity calculations, namely, finite-difference and forward sensitivity analysis, require a computational effort that increases linearly with the number of sensitivity parameters. In this paper, we investigate, implement, and test a discrete adjoint sensitivity approach whose computational effort is effectively independent of the number of sensitivity parameters. The proposed approach is highly efficient for calculating sensitivities of larger systems and is consistent, within machine precision, with the function whose sensitivity we are seeking. This is an essential feature for use in optimization applications. Moreover, our approach includes a consistent treatment of systems with switching, such as dc exciters, by deriving and implementing the adjoint jump conditions that arise from state-dependent and time-dependent switchings. The accuracy and the computational efficiency of the proposed approach are demonstrated in comparison with the forward sensitivity analysis approach. While this paper focuses primarily on power system dynamics, the approach is general and can be applied to hybrid dynamical systems in a broader range of fields.
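
    A very small sketch of the discrete adjoint idea for a linear discrete system x_{k+1} = A(p) x_k with objective J = c^T x_N: one forward sweep stores the states, and one backward adjoint sweep yields dJ/dp at roughly the cost of a single extra simulation, regardless of how many parameters are differentiated. The 2x2 system is a toy with no switching.

    ```python
    # Discrete adjoint gradient for J = c^T x_N, x_{k+1} = A(p) x_k.
    import numpy as np

    def A(p):
        return np.array([[0.9, p], [-0.1, 0.8]])

    def dA_dp(p):
        return np.array([[0.0, 1.0], [0.0, 0.0]])

    p, N = 0.3, 20
    c = np.array([1.0, 0.0])
    x = [np.array([1.0, 0.0])]
    for _ in range(N):                    # forward sweep, store trajectory
        x.append(A(p) @ x[-1])

    lam = c.copy()                        # adjoint of x_N
    grad = 0.0
    for k in range(N - 1, -1, -1):        # backward sweep
        grad += lam @ dA_dp(p) @ x[k]     # accumulate dJ/dp contribution
        lam = A(p).T @ lam                # adjoint recursion

    # check against a central finite difference
    def J(q):
        xk = np.array([1.0, 0.0])
        for _ in range(N):
            xk = A(q) @ xk
        return c @ xk

    h = 1e-6
    print(grad, (J(p + h) - J(p - h)) / (2 * h))
    ```

    Adding more parameters only adds terms like dA_dp inside the same backward sweep, which is the independence-from-parameter-count property the paper exploits; the switching case additionally requires the adjoint jump conditions the authors derive.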

10. Discrete Adjoint Sensitivity Analysis of Hybrid Dynamical Systems With Switching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hong; Abhyankar, Shrirang; Constantinescu, Emil

Sensitivity analysis is an important tool for describing power system dynamic behavior in response to parameter variations. It is a central component in preventive and corrective control applications. The existing approaches for sensitivity calculations, namely, finite-difference and forward sensitivity analysis, require a computational effort that increases linearly with the number of sensitivity parameters. In this paper, we investigate, implement, and test a discrete adjoint sensitivity approach whose computational effort is effectively independent of the number of sensitivity parameters. The proposed approach is highly efficient for calculating sensitivities of larger systems and is consistent, within machine precision, with the function whose sensitivity we are seeking. This is an essential feature for use in optimization applications. Moreover, our approach includes a consistent treatment of systems with switching, such as dc exciters, by deriving and implementing the adjoint jump conditions that arise from state-dependent and time-dependent switchings. The accuracy and the computational efficiency of the proposed approach are demonstrated in comparison with the forward sensitivity analysis approach. While this paper focuses primarily on power system dynamics, the approach is general and can be applied to hybrid dynamical systems in a broader range of fields.

  11. Boosting Sensitivity in Liquid Chromatography–Fourier Transform Ion Cyclotron Resonance–Tandem Mass Spectrometry for Product Ion Analysis of Monoterpene Indole Alkaloids

    PubMed Central

    Nakabayashi, Ryo; Tsugawa, Hiroshi; Kitajima, Mariko; Takayama, Hiromitsu; Saito, Kazuki

    2015-01-01

In metabolomics, the analysis of product ions in tandem mass spectrometry (MS/MS) is valuable for chemically assigning structural information. However, the development of relevant analytical methods is less advanced. Here, we developed a method to boost sensitivity in liquid chromatography–Fourier transform ion cyclotron resonance–tandem mass spectrometry analysis (MS/MS boost analysis). To verify the MS/MS boost analysis, both quercetin and uniformly labeled 13C quercetin were analyzed, revealing that the origin of the product ions is not the instrument but the analyzed compounds, resulting in sensitive product ions. Next, we applied this method to the analysis of monoterpene indole alkaloids (MIAs). The comparative analyses of MIAs with an indole basic skeleton (ajmalicine, catharanthine, hirsuteine, and hirsutine) and with an oxindole skeleton (formosanine, isoformosanine, pteropodine, isopteropodine, rhynchophylline, isorhynchophylline, and mitraphylline) identified 86 and 73 common monoisotopic ions, respectively. The comparative analyses of the three pairs of stereoisomers showed more than 170 common monoisotopic ions in each pair. This method was also applied to the targeted analysis of MIAs in Catharanthus roseus and Uncaria rhynchophylla to profile indole and oxindole compounds using the product ions. This analysis is suitable for chemically assigning features of the metabolite groups, which contributes to targeted metabolome analysis. PMID:26734034

  12. Applying geologic sensitivity analysis to environmental risk management: The financial implications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rogers, D.T.

The financial risks associated with environmental contamination can be staggering and are often difficult to identify and accurately assess. Geologic sensitivity analysis is gaining recognition as a significant and useful tool that can empower the user with crucial information concerning environmental risk management and brownfield redevelopment. It is particularly useful when (1) evaluating the potential risks associated with redevelopment of historical industrial facilities (brownfields) and (2) planning for future development, especially in areas of rapid development, because the number of potential contaminating sources often increases with economic development. An examination of the financial implications relating to geologic sensitivity analysis in southeastern Michigan from numerous case studies indicates that the environmental cost of contamination may be 100 to 1,000 times greater at a geologically sensitive location compared to the least sensitive location. Geologic sensitivity analysis has demonstrated that near-surface geology may influence the environmental impact of a contaminated site to a greater extent than the amount and type of industrial development.

  13. Sensitivity analysis, calibration, and testing of a distributed hydrological model using error‐based weighting and one objective function

    USGS Publications Warehouse

    Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.

    2009-01-01

    We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall‐runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error‐based weighting of observation and prior information data, local sensitivity analysis, and single‐objective function nonlinear regression provides quantitative evaluation of sensitivity of the 35 model parameters to the data, identification of data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest predictive ability of the calibrated model typical of hydrologic models.
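
    A sketch of the composite scaled sensitivity (CSS) statistic named in this abstract, computed from a finite-difference Jacobian of a toy two-parameter model; the model form, parameter values, and observation standard deviations are invented.

    ```python
    # Composite scaled sensitivities: root-mean-square of dimensionless
    # scaled sensitivities (dy_i/dp_j) * p_j / sd_i over all observations.
    import numpy as np

    def model(p, t):
        return p[0] * (1 - np.exp(-p[1] * t))   # toy recession-style curve

    t = np.linspace(0.5, 10, 20)
    p = np.array([3.0, 0.4])
    sd = 0.1 * np.ones_like(t)                  # observation standard deviations

    J = np.empty((t.size, p.size))
    for j in range(p.size):
        h = 1e-6 * p[j]
        dp = np.zeros_like(p)
        dp[j] = h
        J[:, j] = (model(p + dp, t) - model(p - dp, t)) / (2 * h)

    scaled = J * p / sd[:, None]                # dimensionless sensitivities
    css = np.sqrt((scaled ** 2).mean(axis=0))
    print("composite scaled sensitivities:", css)
    ```

    Parameters with small CSS relative to the largest are poorly informed by the data, which is how an analysis like this one narrows 35 parameters down with only a few dozen model runs.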

  14. Sensitivity of VIIRS Polarization Measurements

    NASA Technical Reports Server (NTRS)

    Waluschka, Eugene

    2010-01-01

The design of an optical system typically involves a sensitivity analysis where the various lens parameters, such as lens spacing and curvatures, to name two, are (slightly) varied to see what, if any, effect this has on the performance and to establish manufacturing tolerances. A similar analysis was performed for the VIIRS instrument's polarization measurements to see how real-world departures from perfectly linearly polarized light entering VIIRS affect the polarization measurement. The methodology and a few of the results of this polarization sensitivity analysis are presented and applied to the construction of a single polarizer which will cover the VIIRS VIS/NIR spectral range. Keywords: VIIRS; polarization; ray trace; polarizers; Bolder Vision; MOXTEK

  15. Scaling in sensitivity analysis

    USGS Publications Warehouse

    Link, W.A.; Doherty, P.F.

    2002-01-01

Population matrix models allow sets of demographic parameters to be summarized by a single value λ, the finite rate of population increase. The consequences of change in individual demographic parameters are naturally measured by the corresponding changes in λ; sensitivity analyses compare demographic parameters on the basis of these changes. These comparisons are complicated by issues of scale. Elasticity analysis attempts to deal with issues of scale by comparing the effects of proportional changes in demographic parameters, but leads to inconsistencies in evaluating demographic rates. We discuss this and other problems of scaling in sensitivity analysis, and suggest a simple criterion for choosing appropriate scales. We apply our suggestions to data for the killer whale, Orcinus orca.
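
    The quantities discussed here follow from the standard eigenvector formulas for matrix population models: sensitivity s_ij = v_i w_j / <v, w> and elasticity e_ij = (a_ij / λ) s_ij, where w and v are the right and left eigenvectors of the dominant eigenvalue λ. A sketch with a made-up 2x2 projection matrix (not the killer whale matrix from the paper):

    ```python
    # Sensitivities and elasticities of lambda for a matrix population model.
    import numpy as np

    A = np.array([[0.0, 1.5],     # fecundities
                  [0.4, 0.8]])    # survival / transition rates

    vals, vecs = np.linalg.eig(A)
    i = np.argmax(vals.real)
    lam = vals.real[i]
    w = np.abs(vecs[:, i].real)   # right eigenvector: stable stage structure
    vals_l, vecs_l = np.linalg.eig(A.T)
    v = np.abs(vecs_l[:, np.argmax(vals_l.real)].real)  # reproductive values

    S = np.outer(v, w) / (v @ w)  # sensitivities d(lambda)/d(a_ij)
    E = (A / lam) * S             # elasticities: proportional-change scale
    print("lambda:", lam)
    print("sensitivities:\n", S)
    print("elasticities:\n", E)
    ```

    Elasticities sum to 1 and compare proportional perturbations, which is exactly the scaling choice whose pitfalls this note examines.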

  16. Applying an intelligent model and sensitivity analysis to inspect mass transfer kinetics, shrinkage and crust color changes of deep-fat fried ostrich meat cubes.

    PubMed

    Amiryousefi, Mohammad Reza; Mohebbi, Mohebbat; Khodaiyan, Faramarz

    2014-01-01

The objectives of this study were to use image analysis and artificial neural networks (ANNs) to predict mass transfer kinetics as well as color changes and shrinkage of deep-fat fried ostrich meat cubes. Two generalized feedforward networks were separately developed using the operating conditions as inputs. Results, based on the high correlation coefficients between experimental and predicted values, showed proper fitting. Sensitivity analysis of the selected ANNs showed that, among the input variables, moisture content (MC) and fat content (FC) were the most sensitive to frying temperature. Similarly, for the second ANN architecture, microwave power density was the most influential variable, having the maximum influence on both shrinkage percentage and color changes. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. A two-step sensitivity analysis for hydrological signatures in Jinhua River Basin, East China

    NASA Astrophysics Data System (ADS)

    Pan, S.; Fu, G.; Chiang, Y. M.; Xu, Y. P.

    2016-12-01

Owing to model complexity and the large number of parameters, calibration and sensitivity analysis are difficult processes for distributed hydrological models. In this study, a two-step sensitivity analysis approach is proposed for analyzing the hydrological signatures in Jinhua River Basin, East China, using the Distributed Hydrology-Soil-Vegetation Model (DHSVM). A rough sensitivity analysis is first conducted via Analysis of Variance to obtain a preliminary set of influential parameters, greatly reducing the number of parameters from eighty-three to sixteen. Afterwards, the sixteen parameters are further analyzed with a variance-based global sensitivity analysis, i.e., Sobol's method, to achieve robust sensitivity rankings and parameter contributions. Parallel computing is applied to reduce the computational burden of the variance-based sensitivity analysis. The results reveal that only a small number of model parameters are significantly sensitive, including the rain LAI multiplier, lateral conductivity, porosity, field capacity, wilting point of clay loam, understory monthly LAI, understory minimum resistance and root zone depths of croplands. Finally, several hydrological signatures are used to investigate the performance of DHSVM. Results show that high values of the efficiency criteria did not necessarily indicate excellent performance of the hydrological signatures. For most samples from Sobol's sensitivity analysis, water yield was simulated very well; however, the lowest and maximum annual daily runoffs were underestimated, and most seven-day minimum runoffs were overestimated. Nevertheless, good performance on these three signatures still exists in a number of samples. Analysis of peak flow shows that small and medium floods are simulated well, while large floods are slightly underestimated. The work in this study supports further multi-objective calibration of the DHSVM model and indicates where to improve the reliability and credibility of the model simulation.

  18. High-sensitivity ESCA instrument

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davies, R.D.; Herglotz, H.K.; Lee, J.D.

    1973-01-01

A new electron spectroscopy for chemical analysis (ESCA) instrument has been developed to provide high sensitivity and efficient operation for laboratory analysis of composition and chemical bonding in very thin surface layers of solid samples. High sensitivity is achieved by means of the high-intensity, efficient x-ray source described by Davies and Herglotz at the 1968 Denver X-Ray Conference, in combination with the new electron energy analyzer described by Lee at the 1972 Pittsburgh Conference on Analytical Chemistry and Applied Spectroscopy. A sample chamber designed for rapid introduction and replacement of samples has adequate facilities for various sample treatments and conditioning, followed immediately by ESCA analysis of the sample. Examples of application are presented, demonstrating the sensitivity and resolution achievable with this instrument. Its usefulness in trace surface analysis is shown, and some chemical shifts measured by the instrument are compared with those obtained by x-ray spectroscopy. (auth)

  19. Applying Monte-Carlo simulations to optimize an inelastic neutron scattering system for soil carbon analysis

    USDA-ARS?s Scientific Manuscript database

Computer Monte-Carlo (MC) simulations (Geant4) of neutron propagation and acquisition of the gamma response from soil samples were applied to evaluate INS system performance characteristics [sensitivity, minimal detectable level (MDL)] for soil carbon measurement. The INS system model with best performanc...

  20. Development and application of optimum sensitivity analysis of structures

    NASA Technical Reports Server (NTRS)

    Barthelemy, J. F. M.; Hallauer, W. L., Jr.

    1984-01-01

    The research focused on developing an algorithm applying optimum sensitivity analysis for multilevel optimization. The research efforts have been devoted to assisting NASA Langley's Interdisciplinary Research Office (IRO) in the development of a mature methodology for a multilevel approach to the design of complex (large and multidisciplinary) engineering systems. An effort was undertaken to identify promising multilevel optimization algorithms. In the current reporting period, the computer program generating baseline single level solutions was completed and tested out.

  1. Hangmen and Associations: The Final Analysis.

    ERIC Educational Resources Information Center

    Garner, Mark; Newsome, Bernard

    1979-01-01

    Applies Ferdinand de Saussure's linguistic theories on the construction of a text to the literary analysis of texts. Recounts the use of this derivation in a literature class, showing that sensitivity to student experiences facilitates their understanding and appreciation of literary works. (RL)

  2. Applying Recursive Sensitivity Analysis to Multi-Criteria Decision Models to Reduce Bias in Defense Cyber Engineering Analysis

    DTIC Science & Technology

    2015-10-28

techniques such as regression analysis, correlation, and multicollinearity assessment to identify the change and error on the input to the model ... between many of the independent or predictor variables, the issue of multicollinearity may arise [18]. VII. SUMMARY: Accurate decisions concerning

  3. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1992-01-01

    Fundamental equations of aerodynamic sensitivity analysis and approximate analysis for the two dimensional thin layer Navier-Stokes equations are reviewed, and special boundary condition considerations necessary to apply these equations to isolated lifting airfoils on 'C' and 'O' meshes are discussed in detail. An efficient strategy which is based on the finite element method and an elastic membrane representation of the computational domain is successfully tested, which circumvents the costly 'brute force' method of obtaining grid sensitivity derivatives, and is also useful in mesh regeneration. The issue of turbulence modeling is addressed in a preliminary study. Aerodynamic shape sensitivity derivatives are efficiently calculated, and their accuracy is validated on two viscous test problems, including: (1) internal flow through a double throat nozzle, and (2) external flow over a NACA 4-digit airfoil. An automated aerodynamic design optimization strategy is outlined which includes the use of a design optimization program, an aerodynamic flow analysis code, an aerodynamic sensitivity and approximate analysis code, and a mesh regeneration and grid sensitivity analysis code. Application of the optimization methodology to the two test problems in each case resulted in a new design having a significantly improved performance in the aerodynamic response of interest.

  4. Diagnostic value of highly-sensitive chimerism analysis after allogeneic stem cell transplantation.

    PubMed

    Sellmann, Lea; Rabe, Kim; Bünting, Ivonne; Dammann, Elke; Göhring, Gudrun; Ganser, Arnold; Stadler, Michael; Weissinger, Eva M; Hambach, Lothar

    2018-05-02

Conventional analysis of host chimerism (HC) frequently fails to detect relapse before its clinical manifestation in patients with hematological malignancies after allogeneic stem cell transplantation (allo-SCT). Quantitative PCR (qPCR)-based highly-sensitive chimerism analysis extends the detection limit of conventional (short tandem repeat-based) chimerism analysis from 1 to 0.01% host cells in whole blood. To date, the diagnostic value of highly-sensitive chimerism analysis is hardly defined. Here, we applied qPCR-based chimerism analysis to 901 blood samples of 71 out-patients with hematological malignancies after allo-SCT. Receiver operating characteristic (ROC) curves were calculated for absolute HC values and for the increments of HC before relapse. Using the best cut-offs, relapse was detected with sensitivities of 74 or 85% and specificities of 69 or 75%, respectively. Positive predictive values (PPVs) were only 12 or 18%, but the respective negative predictive values were 98 or 99%. Relapse was detected a median of 38 or 45 days prior to clinical diagnosis, respectively. Additionally considering durations of steadily increasing HC of more than 28 days improved the PPVs to more than 28 or 59%, respectively. Overall, highly-sensitive chimerism analysis excludes relapses with high certainty and predicts relapses with high sensitivity and specificity more than a month prior to clinical diagnosis.

  5. Application of flowerlike MgO for highly sensitive determination of lead via matrix-assisted laser desorption/ionization mass spectrometry.

    PubMed

    Hou, Jian; Chen, Suming; Cao, Changyan; Liu, Huihui; Xiong, Caiqiao; Zhang, Ning; He, Qing; Song, Weiguo; Nie, Zongxiu

    2016-08-01

Matrix-assisted laser desorption/ionization mass spectrometry (MALDI MS) is a high-throughput method to achieve fast and accurate identification of lead (Pb) exposure, but is seldom used because of low ionization efficiency and insufficient sensitivity. Nanomaterials applied in MS are a promising route to overcome the obstacles of MALDI. Flowerlike MgO nanostructures are applied here for highly sensitive lead profiling in real samples. They can be used in two ways: (a) MgO is mixed with N-naphthylethylenediamine dihydrochloride (NEDC) as a novel matrix, MgO/NEDC; (b) MgO is applied as an adsorbent to enrich Pb ions in very dilute solution. The signal intensities of lead with MgO/NEDC were ten times higher than with the NEDC matrix alone. It also shows superior anti-interference ability when analyzing 10 μmol/L Pb ions in the presence of organic substances or interfering metal ions. By applying MgO as an adsorbent, the LOD of lead before enrichment is 1 nmol/L. Blood lead testing can be achieved using this enrichment process. Besides, MgO can play the role of an internal standard to achieve quantitative analysis. Flowerlike MgO nanostructures were applied for highly sensitive lead profiling in real samples; the method is helpful for preventing Pb contamination in a wide range of settings. Further, the combination of MgO with MALDI MS could inspire more nanomaterials being applied in highly sensitive profiling of pollutants. Copyright © 2016 John Wiley & Sons, Ltd.

  6. Sensitivity-Based Guided Model Calibration

    NASA Astrophysics Data System (ADS)

    Semnani, M.; Asadzadeh, M.

    2017-12-01

    A common practice in the automatic calibration of hydrologic models is to apply sensitivity analysis prior to the global optimization in order to reduce the number of decision variables (DVs) by identifying the most sensitive ones. This two-stage process aims to improve optimization efficiency. However, parameter sensitivity information can also be used to enhance the ability of optimization algorithms to find good-quality solutions in fewer solution evaluations. This improvement can be achieved by focusing the optimization on sampling the most sensitive parameters in each iteration. In this study, the selection process of the dynamically dimensioned search (DDS) optimization algorithm is enhanced by utilizing a sensitivity analysis method to put more emphasis on the most sensitive decision variables for perturbation. The performance of DDS with sensitivity information is compared to that of the original DDS on different mathematical test functions and a model calibration case study. Overall, the results show that DDS with sensitivity information finds solutions of nearly the same quality as the original DDS, but in significantly fewer solution evaluations.
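
    A minimal sketch of the idea follows: standard DDS selects each decision variable for perturbation with a probability that decays over iterations, and the variant below additionally biases that selection by pre-computed sensitivity weights. The schedule, weighting rule, and test function are illustrative assumptions, not the authors' exact algorithm.

      import numpy as np

      def sensitivity_weighted_dds(f, lo, hi, sens, max_evals=500, r=0.2, seed=1):
          rng = np.random.default_rng(seed)
          lo, hi, sens = map(np.asarray, (lo, hi, sens))
          x = lo + rng.random(lo.size) * (hi - lo)        # random initial solution
          fx = f(x)
          w = sens / sens.sum()                           # normalized sensitivity weights
          for i in range(1, max_evals):
              p = 1.0 - np.log(i) / np.log(max_evals)     # standard DDS inclusion schedule
              sel = rng.random(lo.size) < p * w * lo.size # sensitivity-biased selection
              if not sel.any():                           # always perturb at least one DV
                  sel[rng.choice(lo.size, p=w)] = True
              cand = np.clip(x + sel * rng.normal(0.0, r, lo.size) * (hi - lo), lo, hi)
              fc = f(cand)
              if fc < fx:                                 # greedy acceptance
                  x, fx = cand, fc
          return x, fx

      # Usage: minimize a quadratic whose per-variable sensitivities are known.
      best_x, best_f = sensitivity_weighted_dds(
          lambda v: np.sum(np.array([10.0, 1.0, 0.1]) * v**2),
          lo=[-5.0, -5.0, -5.0], hi=[5.0, 5.0, 5.0], sens=[10.0, 1.0, 0.1])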

  7. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process.

    PubMed

    Deng, Bo; Shi, Yaoyao; Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-31

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performance of the winding products. In this article, two different objective values of winding products, namely mechanical performance (tensile strength) and a physical property (void content), were calculated. Thereafter, the paper presents an integrated methodology that combines multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding process. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for the manufacturing of winding products.

  8. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process

    PubMed Central

    Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-01

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performance of the winding products. In this article, two different objective values of winding products, namely mechanical performance (tensile strength) and a physical property (void content), were calculated. Thereafter, the paper presents an integrated methodology that combines multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding process. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for the manufacturing of winding products. PMID:29385048

  9. A comparison of analysis methods to estimate contingency strength.

    PubMed

    Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T

    2018-05-09

    To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified. © 2018 Society for the Experimental Analysis of Behavior.
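
    To make the interval-based notion concrete, the sketch below bins a session into fixed intervals, scores each bin for response and reinforcer occurrence, and estimates contingency strength as P(reinforcer | response) minus P(reinforcer | no response). The bin width, the data, and this particular operant-contingency statistic are illustrative assumptions; the event-based methods compared in the article differ in detail.

      import numpy as np

      def interval_contingency(resp_times, sr_times, session_len, bin_s=10.0):
          edges = np.arange(0.0, session_len + bin_s, bin_s)
          r = np.histogram(resp_times, bins=edges)[0] > 0   # response in bin?
          s = np.histogram(sr_times, bins=edges)[0] > 0     # reinforcer in bin?
          p_sr_given_r = s[r].mean() if r.any() else 0.0
          p_sr_given_not_r = s[~r].mean() if (~r).any() else 0.0
          return p_sr_given_r - p_sr_given_not_r

      rng = np.random.default_rng(3)
      resp = np.sort(rng.uniform(0, 600, 80))            # response times (s)
      sr = resp[::10] + rng.uniform(0.5, 2.0, 8)         # reinforcers follow some responses
      print(interval_contingency(resp, sr, session_len=600.0))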

  10. Adjoint sensitivity analysis of a tumor growth model and its application to spatiotemporal radiotherapy optimization.

    PubMed

    Fujarewicz, Krzysztof; Lakomiec, Krzysztof

    2016-12-01

    We investigate a spatial model of growth of a tumor and its sensitivity to radiotherapy. It is assumed that the radiation dose may vary in time and space, like in intensity modulated radiotherapy (IMRT). The change of the final state of the tumor depends on local differences in the radiation dose and varies with the time and the place of these local changes. This leads to the concept of a tumor's spatiotemporal sensitivity to radiation, which is a function of time and space. We show how adjoint sensitivity analysis may be applied to calculate the spatiotemporal sensitivity of the finite difference scheme resulting from the partial differential equation describing the tumor growth. We demonstrate results of this approach to the tumor proliferation, invasion and response to radiotherapy (PIRT) model and we compare the accuracy and the computational effort of the method to the simple forward finite difference sensitivity analysis. Furthermore, we use the spatiotemporal sensitivity during the gradient-based optimization of the spatiotemporal radiation protocol and present results for different parameters of the model.

  11. Efficient sensitivity analysis method for chaotic dynamical systems

    NASA Astrophysics Data System (ADS)

    Liao, Haitao

    2016-05-01

    The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time averaged quantities for chaotic dynamical systems. The key idea is to recast the time averaged integration term in the form of differential equation before applying the sensitivity analysis method. An additional constraint-based equation which forms the augmented equations of motion is proposed to calculate the time averaged integration variable and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which is dependent on the final state of the Lagrange multipliers. The LU factorization technique to calculate the Lagrange multipliers leads to a better performance for the convergence problem and the computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.

  12. Sensitivity Analysis for Coupled Aero-structural Systems

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.

    1999-01-01

    A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.

  13. A practical guide to propensity score analysis for applied clinical research.

    PubMed

    Lee, Jaehoon; Little, Todd D

    2017-11-01

    Observational studies are often the only viable options in many clinical settings, especially when it is unethical or infeasible to randomly assign participants to different treatment regimes. In such cases, propensity score (PS) analysis can be applied to account for possible selection bias and thereby address questions of causal inference. Many PS methods exist, yet few guidelines are available to aid applied researchers in their conduct and evaluation of a PS analysis. In this article we give an overview of available techniques for PS estimation and application, balance diagnostics, treatment effect estimation, and sensitivity assessment, as well as recent advances. We also offer a tutorial that can be used to emulate the steps of a PS analysis. Our goal is to provide information that will bring PS analysis within the reach of applied clinical researchers and practitioners. Copyright © 2017 Elsevier Ltd. All rights reserved.
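
    The sketch below walks through the core PS steps named above on simulated data: estimate the PS with logistic regression, form inverse-probability-of-treatment weights (IPTW), estimate the treatment effect, and check covariate balance. The data-generating model and diagnostics are illustrative assumptions, not a prescription from the article.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(7)
      n = 1000
      X = rng.normal(size=(n, 3))                          # observed confounders
      t = rng.binomial(1, 1 / (1 + np.exp(-(X @ np.array([0.8, -0.5, 0.3])))))
      y = 1.5 * t + X @ np.array([1.0, 1.0, -0.5]) + rng.normal(size=n)

      ps = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]
      w = np.where(t == 1, 1 / ps, 1 / (1 - ps))           # IPTW weights

      # Weighted difference in means estimates the average treatment effect.
      ate = (np.sum(w * t * y) / np.sum(w * t)
             - np.sum(w * (1 - t) * y) / np.sum(w * (1 - t)))
      print(f"IPTW ATE estimate: {ate:.2f} (true effect 1.5)")

      # Balance diagnostic: weighted standardized mean difference per covariate.
      for j in range(X.shape[1]):
          m1 = np.average(X[t == 1, j], weights=w[t == 1])
          m0 = np.average(X[t == 0, j], weights=w[t == 0])
          print(f"covariate {j}: weighted SMD = {(m1 - m0) / X[:, j].std():.3f}")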

  14. Sensitivity analysis of hydrodynamic stability operators

    NASA Technical Reports Server (NTRS)

    Schmid, Peter J.; Henningson, Dan S.; Khorrami, Mehdi R.; Malik, Mujeeb R.

    1992-01-01

    The eigenvalue sensitivity of hydrodynamic stability operators is investigated. Classical matrix perturbation techniques as well as the concept of epsilon-pseudoeigenvalues are applied to show that parts of the spectrum are highly sensitive to small perturbations. Applications are drawn from incompressible plane Couette flow, trailing line vortex flow and compressible Blasius boundary layer flow. Parametric studies indicate a monotonically increasing effect of the Reynolds number on the sensitivity. The phenomenon of eigenvalue sensitivity is due to the non-normality of the operators and their discrete matrix analogs and may be associated with large transient growth of the corresponding initial value problem.
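
    A compact way to see this sensitivity phenomenon is to map the epsilon-pseudospectrum: a complex number z belongs to it when the smallest singular value of (zI - A) falls below epsilon, and for non-normal matrices these sets extend far beyond the eigenvalues. The 2x2 matrix below is a toy stand-in for the discretized stability operators in the paper.

      import numpy as np

      A = np.array([[-1.0, 50.0],
                    [ 0.0, -2.0]])        # strongly non-normal test matrix

      xs = np.linspace(-6.0, 4.0, 200)
      ys = np.linspace(-5.0, 5.0, 200)
      smin = np.empty((ys.size, xs.size))
      for i, y in enumerate(ys):
          for j, x in enumerate(xs):
              z = complex(x, y)
              smin[i, j] = np.linalg.svd(z * np.eye(2) - A, compute_uv=False)[-1]

      # Level sets of smin at eps = 1e-1, 1e-2, ... outline the pseudospectra;
      # here they balloon far from the eigenvalues -1 and -2.
      print("fraction of grid inside the 0.1-pseudospectrum:", (smin < 1e-1).mean())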

  15. A hydrogeologic framework for characterizing summer streamflow sensitivity to climate warming in the Pacific Northwest, USA

    NASA Astrophysics Data System (ADS)

    Safeeq, M.; Grant, G. E.; Lewis, S. L.; Kramer, M. G.; Staab, B.

    2014-09-01

    Summer streamflows in the Pacific Northwest are largely derived from melting snow and groundwater discharge. As the climate warms, diminishing snowpack and earlier snowmelt will cause reductions in summer streamflow. Most regional-scale assessments of climate change impacts on streamflow use downscaled temperature and precipitation projections from general circulation models (GCMs) coupled with large-scale hydrologic models. Here we develop and apply an analytical hydrogeologic framework for characterizing summer streamflow sensitivity to a change in the timing and magnitude of recharge in a spatially explicit fashion. In particular, we incorporate the role of deep groundwater, which large-scale hydrologic models generally fail to capture, into streamflow sensitivity assessments. We validate our analytical streamflow sensitivities against two empirical measures of sensitivity derived using historical observations of temperature, precipitation, and streamflow from 217 watersheds. In general, empirically and analytically derived streamflow sensitivity values correspond. Although the selected watersheds cover a range of hydrologic regimes (e.g., rain-dominated, mixture of rain and snow, and snow-dominated), sensitivity validation was primarily driven by the snow-dominated watersheds, which are subjected to a wider range of change in recharge timing and magnitude as a result of increased temperature. Overall, two patterns emerge from this analysis: first, areas with high streamflow sensitivity also have higher summer streamflows as compared to low-sensitivity areas. Second, the level of sensitivity and spatial extent of highly sensitive areas diminishes over time as the summer progresses. Results of this analysis point to a robust, practical, and scalable approach that can help assess risk at the landscape scale, complement the downscaling approach, be applied to any climate scenario of interest, and provide a framework to assist land and water managers in adapting to an uncertain and potentially challenging future.

  16. Novel Multidimensional Cross-Correlation Data Comparison Techniques for Spectroscopic Discernment in a Volumetrically Sensitive, Moderating Type Neutron Spectrometer

    NASA Astrophysics Data System (ADS)

    Hoshor, Cory; Young, Stephan; Rogers, Brent; Currie, James; Oakes, Thomas; Scott, Paul; Miller, William; Caruso, Anthony

    2014-03-01

    A novel application of the Pearson Cross-Correlation to neutron spectral discernment in a moderating type neutron spectrometer is introduced. This cross-correlation analysis will be applied to spectral response data collected through both MCNP simulation and empirical measurement by the volumetrically sensitive spectrometer for comparison in 1, 2, and 3 spatial dimensions. The spectroscopic analysis methods discussed will be demonstrated to discern various common spectral and monoenergetic neutron sources.
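
    A minimal version of the Pearson-correlation comparison is sketched below: a measured response vector is scored against a library of reference spectra and the highest-correlation entry is reported as the match. The spectra and names are fabricated for illustration only.

      import numpy as np

      def best_match(measured, library):
          scores = {name: np.corrcoef(measured, ref)[0, 1]
                    for name, ref in library.items()}
          return max(scores, key=scores.get), scores

      e = np.linspace(0.1, 10.0, 50)                     # energy bins (MeV)
      library = {
          "fission-like": np.exp(-e) * np.sqrt(e),
          "monoenergetic-2MeV-like": np.exp(-0.5 * (e - 2.0) ** 2),
      }
      measured = library["fission-like"] + 0.05 * np.random.default_rng(5).normal(size=e.size)
      name, scores = best_match(measured, library)
      print(name, {k: round(v, 3) for k, v in scores.items()})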

  17. Optimal frequency-response sensitivity of compressible flow over roughness elements

    NASA Astrophysics Data System (ADS)

    Fosas de Pando, Miguel; Schmid, Peter J.

    2017-04-01

    Compressible flow over a flat plate with two localised and well-separated roughness elements is analysed by global frequency-response analysis. This analysis reveals a sustained feedback loop consisting of a convectively unstable shear-layer instability, triggered at the upstream roughness, and an upstream-propagating acoustic wave, originating at the downstream roughness and regenerating the shear-layer instability at the upstream protrusion. A typical multi-peaked frequency response is recovered from the numerical simulations. In addition, the optimal forcing and response clearly extract the components of this feedback loop and isolate flow regions of pronounced sensitivity and amplification. An efficient parametric-sensitivity framework is introduced and applied to the reference case which shows that first-order increases in Reynolds number and roughness height act destabilising on the flow, while changes in Mach number or roughness separation cause corresponding shifts in the peak frequencies. This information is gained with negligible effort beyond the reference case and can easily be applied to more complex flows.

  18. 40 CFR 459.11 - Specialized definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... definitions, abbreviations and methods of analysis set forth in part 401 of this chapter shall apply to this... as paper prints, slides, negatives, enlargements, movie film and other sensitized materials. ...

  19. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks

    PubMed Central

    Arampatzis, Georgios; Katsoulakis, Markos A.; Pantazis, Yannis

    2015-01-01

    Existing sensitivity analysis approaches are not able to efficiently handle stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis network with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and to accurately estimate the sensitivities of the remaining, potentially sensitive ones. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in “sloppy” systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters and the number of sensitive parameters. PMID:26161544

  20. Sensitivity analysis of static resistance of slender beam under bending

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valeš, Jan

    2016-06-08

    The paper deals with statistical and sensitivity analyses of the resistance of simply supported I-beams under bending. The resistance was solved by the geometrically nonlinear finite element method in the program Ansys. The beams are modelled with initial geometrical imperfections following the first eigenmode of buckling. The imperfections, together with the geometrical characteristics of the cross section and the material characteristics of the steel, were considered as random quantities. The Latin Hypercube Sampling method was applied to carry out the statistical and sensitivity analyses of resistance.
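
    The sketch below shows the Latin Hypercube Sampling step with SciPy and a rank-correlation sensitivity measure on a toy resistance response; the variable names, ranges, and response function are illustrative assumptions, not the Ansys model from the paper.

      import numpy as np
      from scipy.stats import qmc, spearmanr

      sampler = qmc.LatinHypercube(d=3, seed=11)
      u = sampler.random(n=200)                     # LHS design on [0,1)^3
      # Assumed inputs: flange width (m), yield stress (Pa), imperfection (m)
      lo = np.array([0.09, 200e6, 0.000])
      hi = np.array([0.11, 300e6, 0.005])
      x = qmc.scale(u, lo, hi)

      resistance = x[:, 0] ** 2 * x[:, 1] * (1.0 - 40.0 * x[:, 2])  # toy response

      for name, col in zip(["width", "yield stress", "imperfection"], x.T):
          rho = spearmanr(col, resistance).correlation
          print(f"{name}: Spearman rho = {rho:+.2f}")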

  1. Statistical sensitivity analysis of a simple nuclear waste repository model

    NASA Astrophysics Data System (ADS)

    Ronen, Y.; Lucius, J. L.; Blow, E. M.

    1980-06-01

    A preliminary step in a comprehensive sensitivity analysis of the modeling of a nuclear waste repository is presented. The purpose of the complete analysis is to determine which modeling parameters and physical data are most important in determining key design performance criteria and then to obtain the uncertainty in the design for safety considerations. The theory for a statistical screening design methodology is developed for later use in the overall program. The theory was applied to the test case of determining the relative importance of the sensitivity of the near-field temperature distribution in a single-level salt repository to modeling parameters. The exact values of the sensitivities to these physical and modeling parameters were then obtained using direct methods of recalculation. The sensitivity coefficients found to be important for the sample problem were the thermal loading, the distance between the spent fuel canisters, and their radius. Other important parameters were those related to salt properties at a point of interest in the repository.

  2. Currency arbitrage detection using a binary integer programming model

    NASA Astrophysics Data System (ADS)

    Soon, Wanmei; Ye, Heng-Qing

    2011-04-01

    In this article, we examine the use of a new binary integer programming (BIP) model to detect arbitrage opportunities in currency exchanges. This model showcases an excellent application of mathematics to the real world. The concepts involved are easily accessible to undergraduate students with basic knowledge in Operations Research. Through this work, students can learn to link several types of basic optimization models, namely linear programming, integer programming and network models, and apply the well-known sensitivity analysis procedure to accommodate realistic changes in the exchange rates. Beginning with a BIP model, we discuss how it can be reduced to an equivalent but considerably simpler model, where an efficient algorithm can be applied to find the arbitrages and incorporate the sensitivity analysis procedure. A simple comparison is then made with a different arbitrage detection model. This exercise helps students learn to apply basic Operations Research concepts to a practical real-life example, and provides insights into the processes involved in Operations Research model formulations.
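
    For readers who want to experiment, the sketch below uses the classical graph reformulation of the problem rather than the article's BIP model: a cycle of exchange rates whose product exceeds 1 is exactly a negative cycle under the weights w = -log(rate), which Bellman-Ford relaxation can detect. The rates are fabricated.

      import math

      rates = {                                  # illustrative exchange rates
          ("USD", "EUR"): 0.90, ("EUR", "GBP"): 0.90,
          ("GBP", "USD"): 1.30, ("USD", "GBP"): 0.75,
      }
      nodes = {c for pair in rates for c in pair}
      dist = {c: 0.0 for c in nodes}             # zero sources: find any cycle

      edges = [(a, b, -math.log(r)) for (a, b), r in rates.items()]
      for _ in range(len(nodes) - 1):            # standard Bellman-Ford passes
          for a, b, w in edges:
              if dist[a] + w < dist[b]:
                  dist[b] = dist[a] + w

      # One more pass: any further relaxation implies a negative cycle,
      # i.e. an arbitrage such as USD -> EUR -> GBP -> USD (0.9*0.9*1.3 > 1).
      arbitrage = any(dist[a] + w < dist[b] - 1e-12 for a, b, w in edges)
      print("arbitrage opportunity:", arbitrage)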

  3. Addressing Curse of Dimensionality in Sensitivity Analysis: How Can We Handle High-Dimensional Problems?

    NASA Astrophysics Data System (ADS)

    Safaei, S.; Haghnegahdar, A.; Razavi, S.

    2016-12-01

    Complex environmental models are now the primary tool to inform decision makers for the current or future management of environmental resources under climate and environmental changes. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding the model behavior, but also helps in reducing the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding when generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in saving computational burden when applied to systems with an extra-large number of input factors (~100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method in reducing problem dimensionality by identifying important vs. unimportant input factors.

  4. Global Sensitivity Analysis for Identifying Important Parameters of Nitrogen Nitrification and Denitrification under Model and Scenario Uncertainties

    NASA Astrophysics Data System (ADS)

    Ye, M.; Chen, Z.; Shi, L.; Zhu, Y.; Yang, J.

    2017-12-01

    Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. While global sensitivity analysis is a vital tool for identifying the parameters important to nitrogen reactive transport, conventional global sensitivity analysis only considers parametric uncertainty. This may result in inaccurate selection of important parameters, because parameter importance may vary under different models and modeling scenarios. By using a recently developed variance-based global sensitivity analysis method, this paper identifies important parameters with simultaneous consideration of parametric uncertainty, model uncertainty, and scenario uncertainty. In a numerical example of nitrogen reactive transport modeling, a combination of three scenarios of soil temperature and two scenarios of soil moisture leads to a total of six scenarios. Four alternative models are used to evaluate reduction functions used for calculating actual rates of nitrification and denitrification. The model uncertainty is tangled with scenario uncertainty, as the reduction functions depend on soil temperature and moisture content. The results of sensitivity analysis show that parameter importance varies substantially between different models and modeling scenarios, which may lead to inaccurate selection of important parameters if model and scenario uncertainties are not considered. This problem is avoided by using the new method of sensitivity analysis in the context of model averaging and scenario averaging. The new method of sensitivity analysis can be applied to other problems of contaminant transport modeling when model uncertainty and/or scenario uncertainty are present.

  5. Quantitative performance targets by using balanced scorecard system: application to waste management and public administration.

    PubMed

    Mendes, Paula; Nunes, Luis Miguel; Teixeira, Margarida Ribau

    2014-09-01

    This article demonstrates how decision-makers can be guided in the process of defining performance target values in the balanced scorecard system. We apply a method based on sensitivity analysis with Monte Carlo simulation to the municipal solid waste management system in Loulé Municipality (Portugal). The method includes two steps: sensitivity analysis of performance indicators to identify those performance indicators with the highest impact on the balanced scorecard model outcomes; and sensitivity analysis of the target values for the previously identified performance indicators. Sensitivity analysis shows that four strategic objectives (IPP1: Comply with the national waste strategy; IPP4: Reduce nonrenewable resources and greenhouse gases; IPP5: Optimize the life-cycle of waste; and FP1: Meet and optimize the budget) alone contribute 99.7% of the variability in overall balanced scorecard value. Thus, these strategic objectives had a much stronger impact on the estimated balanced scorecard outcome than did others, with the IPP1 and the IPP4 accounting for over 55% and 22% of the variance in overall balanced scorecard value, respectively. The remaining performance indicators contribute only marginally. In addition, a change in the value of a single indicator's target value made the overall balanced scorecard value change by as much as 18%. This may lead to involuntarily biased decisions by organizations regarding performance target-setting, if not prevented with the help of methods such as that proposed and applied in this study. © The Author(s) 2014.
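
    The sketch below reproduces the flavor of this two-step analysis on a simplified model: the overall balanced scorecard value is assumed (purely for illustration) to be a weighted sum of uncertain indicator scores, and each indicator's first-order share of the output variance identifies where target-setting matters most.

      import numpy as np

      rng = np.random.default_rng(42)
      weights = np.array([0.35, 0.25, 0.20, 0.10, 0.10])       # assumed indicator weights
      n = 10_000
      scores = rng.uniform(0.0, 1.0, size=(n, weights.size))   # uncertain indicator scores
      overall = scores @ weights                               # Monte Carlo BSC values

      # For this additive model, each indicator's variance contribution is exact.
      var_share = (weights**2 * scores.var(axis=0)) / overall.var()
      for k, share in enumerate(var_share, start=1):
          print(f"indicator {k}: {100 * share:.1f}% of overall BSC variance")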

  6. A wideband FMBEM for 2D acoustic design sensitivity analysis based on direct differentiation method

    NASA Astrophysics Data System (ADS)

    Chen, Leilei; Zheng, Changjun; Chen, Haibo

    2013-09-01

    This paper presents a wideband fast multipole boundary element method (FMBEM) for two dimensional acoustic design sensitivity analysis based on the direct differentiation method. The wideband fast multipole method (FMM) formed by combining the original FMM and the diagonal form FMM is used to accelerate the matrix-vector products in the boundary element analysis. The Burton-Miller formulation is used to overcome the fictitious frequency problem when using a single Helmholtz boundary integral equation for exterior boundary-value problems. The strongly singular and hypersingular integrals in the sensitivity equations can be evaluated explicitly and directly by using the piecewise constant discretization. The iterative solver GMRES is applied to accelerate the solution of the linear system of equations. A set of optimal parameters for the wideband FMBEM design sensitivity analysis are obtained by observing the performances of the wideband FMM algorithm in terms of computing time and memory usage. Numerical examples are presented to demonstrate the efficiency and validity of the proposed algorithm.

  7. SUBSURFACE RESIDENCE TIMES AS AN ALGORITHM FOR AQUIFER SENSITIVITY MAPPING: TESTING THE CONCEPT WITH GROUND WATER MODELS IN THE CONTENTNEA CREEK BASIN, NORTH CAROLINA, USA

    EPA Science Inventory

    This poster will present a modeling and mapping assessment of landscape sensitivity to non-point source pollution as applied to a hierarchy of catchment drainages in the Coastal Plain of the state of North Carolina. Analysis of the subsurface residence time of water in shallow a...

  8. Reliability Analysis and Reliability-Based Design Optimization of Circular Composite Cylinders Under Axial Compression

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2001-01-01

    This report describes the preliminary results of an investigation on component reliability analysis and reliability-based design optimization of thin-walled circular composite cylinders with average diameter and average length of 15 inches. Structural reliability is based on axial buckling strength of the cylinder. Both Monte Carlo simulation and First Order Reliability Method are considered for reliability analysis with the latter incorporated into the reliability-based structural optimization problem. To improve the efficiency of reliability sensitivity analysis and design optimization solution, the buckling strength of the cylinder is estimated using a second-order response surface model. The sensitivity of the reliability index with respect to the mean and standard deviation of each random variable is calculated and compared. The reliability index is found to be extremely sensitive to the applied load and elastic modulus of the material in the fiber direction. The cylinder diameter was found to have the third highest impact on the reliability index. Also the uncertainty in the applied load, captured by examining different values for its coefficient of variation, is found to have a large influence on cylinder reliability. The optimization problem for minimum weight is solved subject to a design constraint on element reliability index. The methodology, solution procedure and optimization results are included in this report.
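
    The sketch below illustrates the Monte Carlo half of such an analysis on an assumed limit state g = strength - load: the failure probability is the fraction of samples with g < 0, and the reliability index follows as beta = -Phi^{-1}(Pf). The distributions are stand-ins, not the report's response-surface model.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(6)
      n = 200_000
      strength = rng.normal(100.0, 10.0, n)   # assumed buckling strength (kips)
      load = rng.normal(70.0, 14.0, n)        # assumed applied axial load (kips)

      pf = np.mean(strength - load < 0.0)     # Monte Carlo failure probability
      beta = -norm.ppf(pf)                    # corresponding reliability index
      print(f"Pf = {pf:.4f}, beta = {beta:.2f}")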

  9. A theoretical-experimental methodology for assessing the sensitivity of biomedical spectral imaging platforms, assays, and analysis methods.

    PubMed

    Leavesley, Silas J; Sweat, Brenner; Abbott, Caitlyn; Favreau, Peter; Rich, Thomas C

    2018-01-01

    Spectral imaging technologies have been used for many years by the remote sensing community. More recently, these approaches have been applied to biomedical problems, where they have shown great promise. However, biomedical spectral imaging has been complicated by the high variance of biological data and the reduced ability to construct test scenarios with fixed ground truths. Hence, it has been difficult to objectively assess and compare biomedical spectral imaging assays and technologies. Here, we present a standardized methodology that allows assessment of the performance of biomedical spectral imaging equipment, assays, and analysis algorithms. This methodology incorporates real experimental data and a theoretical sensitivity analysis, preserving the variability present in biomedical image data. We demonstrate that this approach can be applied in several ways: to compare the effectiveness of spectral analysis algorithms, to compare the response of different imaging platforms, and to assess the level of target signature required to achieve a desired performance. Results indicate that it is possible to compare even very different hardware platforms using this methodology. Future applications could include a range of optimization tasks, such as maximizing detection sensitivity or acquisition speed, providing high utility for investigators ranging from design engineers to biomedical scientists. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Sensitivity Analysis for Probabilistic Neural Network Structure Reduction.

    PubMed

    Kowalski, Piotr A; Kusy, Maciej

    2018-05-01

    In this paper, we propose the use of local sensitivity analysis (LSA) for the structure simplification of the probabilistic neural network (PNN). Three algorithms are introduced. The first algorithm applies LSA to the PNN input layer reduction by selecting significant features of input patterns. The second algorithm utilizes LSA to remove redundant pattern neurons of the network. The third algorithm combines the proposed two and constitutes the solution of how they can work together. PNN with a product kernel estimator is used, where each multiplicand computes a one-dimensional Cauchy function. Therefore, the smoothing parameter is separately calculated for each dimension by means of the plug-in method. The classification qualities of the reduced and full structure PNN are compared. Furthermore, we evaluate the performance of PNN, for which global sensitivity analysis (GSA) and the common reduction methods are applied, both in the input layer and the pattern layer. The models are tested on the classification problems of eight repository data sets. A 10-fold cross validation procedure is used to determine the prediction ability of the networks. Based on the obtained results, it is shown that the LSA can be used as an alternative PNN reduction approach.
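
    A generic version of the input-layer idea is sketched below: each feature is perturbed slightly, the mean absolute change in the classifier's predicted probabilities serves as a local sensitivity score, and low-scoring features become candidates for removal. A k-NN classifier on a standard data set stands in for the paper's PNN with Cauchy product kernels.

      import numpy as np
      from sklearn.datasets import load_iris
      from sklearn.neighbors import KNeighborsClassifier

      X, y = load_iris(return_X_y=True)
      clf = KNeighborsClassifier().fit(X, y)

      eps = 0.05 * X.std(axis=0)              # per-feature perturbation size
      sens = np.zeros(X.shape[1])
      for j in range(X.shape[1]):
          Xp = X.copy()
          Xp[:, j] += eps[j]
          sens[j] = np.abs(clf.predict_proba(Xp) - clf.predict_proba(X)).mean()

      order = np.argsort(sens)[::-1]          # most to least sensitive feature
      print("features ranked by sensitivity:", order, sens[order].round(4))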

  11. ASPASIA: A toolkit for evaluating the effects of biological interventions on SBML model behaviour.

    PubMed

    Evans, Stephanie; Alden, Kieran; Cucurull-Sanchez, Lourdes; Larminie, Christopher; Coles, Mark C; Kullberg, Marika C; Timmis, Jon

    2017-02-01

    A calibrated computational model reflects behaviours that are expected or observed in a complex system, providing a baseline upon which sensitivity analysis techniques can be used to analyse pathways that may impact model responses. However, calibration of a model where a behaviour depends on an intervention introduced after a defined time point is difficult, as model responses may be dependent on the conditions at the time the intervention is applied. We present ASPASIA (Automated Simulation Parameter Alteration and SensItivity Analysis), a cross-platform, open-source Java toolkit that addresses a key deficiency in software tools for understanding the impact an intervention has on system behaviour for models specified in Systems Biology Markup Language (SBML). ASPASIA can generate and modify models using SBML solver output as an initial parameter set, allowing interventions to be applied once a steady state has been reached. Additionally, multiple SBML models can be generated where a subset of parameter values are perturbed using local and global sensitivity analysis techniques, revealing the model's sensitivity to the intervention. To illustrate the capabilities of ASPASIA, we demonstrate how this tool has generated novel hypotheses regarding the mechanisms by which Th17-cell plasticity may be controlled in vivo. By using ASPASIA in conjunction with an SBML model of Th17-cell polarisation, we predict that promotion of the Th1-associated transcription factor T-bet, rather than inhibition of the Th17-associated transcription factor RORγt, is sufficient to drive switching of Th17 cells towards an IFN-γ-producing phenotype. Our approach can be applied to all SBML-encoded models to predict the effect that intervention strategies have on system behaviour. ASPASIA, released under the Artistic License (2.0), can be downloaded from http://www.york.ac.uk/ycil/software.

  12. Scale/TSUNAMI Sensitivity Data for ICSBEP Evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T; Reed, Davis Allan; Lefebvre, Robert A

    2011-01-01

    The Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) software developed at Oak Ridge National Laboratory (ORNL) as part of the Scale code system provides unique methods for code validation, gap analysis, and experiment design. For TSUNAMI analysis, sensitivity data are generated for each application and each existing or proposed experiment used in the assessment. The validation of diverse sets of applications requires potentially thousands of data files to be maintained and organized by the user, and a growing number of these files are available through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE) distributed through the International Criticality Safety Benchmark Evaluation Program (ICSBEP). To facilitate the use of the IHECSBE benchmarks in rigorous TSUNAMI validation and gap analysis techniques, ORNL generated SCALE/TSUNAMI sensitivity data files (SDFs) for several hundred benchmarks for distribution with the IHECSBE. For the 2010 edition of the IHECSBE, the sensitivity data were generated using 238-group cross-section data based on ENDF/B-VII.0 for 494 benchmark experiments. Additionally, ORNL has developed a quality assurance procedure to guide the generation of Scale inputs and sensitivity data, as well as a graphical user interface to facilitate the use of sensitivity data in identifying experiments and applying them in validation studies.

  13. Highly sensitive protein detection by biospecific AFM-based fishing with pulsed electrical stimulation.

    PubMed

    Pleshakova, Tatyana O; Malsagova, Kristina A; Kaysheva, Anna L; Kopylov, Arthur T; Tatur, Vadim Yu; Ziborov, Vadim S; Kanashenko, Sergey L; Galiullin, Rafael A; Ivanov, Yuri D

    2017-08-01

    We report here the highly sensitive detection of protein in solution at concentrations from 10^-15 to 10^-18 M using the combination of atomic force microscopy (AFM) and mass spectrometry. Biospecific detection of biotinylated bovine serum albumin was carried out by fishing the protein out onto the surface of AFM chips with immobilized avidin, which determined the specificity of the analysis. Electrical stimulation was applied to enhance the fishing efficiency. A high sensitivity of detection was achieved by application of nanosecond electric pulses to highly oriented pyrolytic graphite placed under the AFM chip. A peristaltic pump-based flow system, which is widely used in routine bioanalytical assays, was employed throughout the analysis. These results hold promise for the development of highly sensitive protein detection methods using nanosensor devices.

  14. A Resampling Analysis of Federal Family Assistance Program Quality Control Data: An Application of the Bootstrap.

    ERIC Educational Resources Information Center

    Hand, Michael L.

    1990-01-01

    Use of the bootstrap resampling technique (BRT) is assessed in its application to resampling analysis associated with measurement of payment allocation errors by federally funded Family Assistance Programs. The BRT is applied to a food stamp quality control database in Oregon. This analysis highlights the outlier-sensitivity of the…
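
    The bootstrap idea the record refers to can be stated in a few lines: resample the observed cases with replacement many times and read confidence limits off the resampled statistics. The error amounts below are hypothetical, not Oregon's quality control data.

      import numpy as np

      rng = np.random.default_rng(10)
      errors = rng.exponential(scale=40.0, size=150)   # hypothetical payment errors ($)

      boot_means = np.array([
          rng.choice(errors, size=errors.size, replace=True).mean()
          for _ in range(5000)
      ])
      lo, hi = np.percentile(boot_means, [2.5, 97.5])  # percentile 95% CI
      print(f"mean ${errors.mean():.2f}, bootstrap 95% CI (${lo:.2f}, ${hi:.2f})")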

  15. Flows of dioxins and furans in coastal food webs: inverse modeling, sensitivity analysis, and applications of linear system theory.

    PubMed

    Saloranta, Tuomo M; Andersen, Tom; Naes, Kristoffer

    2006-01-01

    Rate constant bioaccumulation models are applied to simulate the flow of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) in the coastal marine food web of Frierfjorden, a contaminated fjord in southern Norway. We apply two different ways to parameterize the rate constants in the model, together with global sensitivity analysis of the models using the Extended Fourier Amplitude Sensitivity Test (Extended FAST) method and results from general linear system theory, in order to obtain a more thorough insight into the system's behavior and the flow pathways of the PCDD/Fs. We calibrate our models against observed body concentrations of PCDD/Fs in the food web of Frierfjorden. Differences between the predictions from the two models (using the same forcing and parameter values) are of the same magnitude as their individual deviations from observations, and the models can be said to perform about equally well in our case. Sensitivity analysis indicates that the success or failure of the models in predicting the PCDD/F concentrations in the food web organisms depends highly on the adequate estimation of the truly dissolved concentrations in water and sediment pore water. We discuss the pros and cons of such models in understanding and estimating the present and future concentrations and bioaccumulation of persistent organic pollutants in aquatic food webs.
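
    For readers unfamiliar with Extended FAST, the sketch below runs a variance-based analysis with the FAST implementation in the SALib package on a stand-in bioaccumulation-style response; the three rate constants, their ranges, and the toy steady-state model are assumptions for illustration.

      import numpy as np
      from SALib.sample.fast_sampler import sample
      from SALib.analyze.fast import analyze

      problem = {
          "num_vars": 3,
          "names": ["k_uptake", "k_elim", "k_diet"],
          "bounds": [[0.01, 1.0], [0.001, 0.1], [0.01, 0.5]],
      }
      X = sample(problem, N=1000)             # FAST sampling design
      # Toy steady-state body burden: (uptake + dietary intake) / elimination
      Y = (X[:, 0] + X[:, 2]) / X[:, 1]

      Si = analyze(problem, Y)                # first-order (S1) and total (ST) indices
      print(dict(zip(problem["names"], np.round(Si["S1"], 3))))
      print(dict(zip(problem["names"], np.round(Si["ST"], 3))))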

  16. A methodology to estimate uncertainty for emission projections through sensitivity analysis.

    PubMed

    Lumbreras, Julio; de Andrés, Juan Manuel; Pérez, Javier; Borge, Rafael; de la Paz, David; Rodríguez, María Encarnación

    2015-04-01

    Air pollution abatement policies must be based on quantitative information on current and future emissions of pollutants. As uncertainties in emission projections are inevitable and traditional statistical treatments of uncertainty are highly time- and resource-consuming, a simplified methodology for nonstatistical uncertainty estimation based on sensitivity analysis is presented in this work. The methodology was applied to the "with measures" scenario for Spain, specifically to the 12 highest-emitting sectors for greenhouse gas and air pollutant emissions. Examples of the methodology's application to two important sectors (power plants, and agriculture and livestock) are shown and explained in depth. Uncertainty bands were obtained up to 2020 by modifying the driving factors of the 12 selected sectors, and the methodology was tested against a recomputed emission trend in a low economic-growth perspective and official figures for 2010, showing a very good performance. A solid understanding and quantification of uncertainties related to atmospheric emission inventories and projections provide useful information for policy negotiations. However, as many of those uncertainties are irreducible, there is interest in how they could be managed in order to derive robust policy conclusions. Taking this into account, a method developed to use sensitivity analysis as a source of information to derive nonstatistical uncertainty bands for emission projections is presented and applied to Spain. This method simplifies uncertainty assessment and allows other countries to take advantage of their sensitivity analyses.

  17. Detecting long-term growth trends using tree rings: a critical evaluation of methods.

    PubMed

    Peters, Richard L; Groenendijk, Peter; Vlam, Mart; Zuidema, Pieter A

    2015-05-01

    Tree-ring analysis is often used to assess long-term trends in tree growth. A variety of growth-trend detection methods (GDMs) exist to disentangle age/size trends in growth from long-term growth changes. However, these detrending methods strongly differ in approach, with possible implications for their output. Here, we critically evaluate the consistency, sensitivity, reliability and accuracy of the four most widely used GDMs: conservative detrending (CD) applies mathematical functions to correct for decreasing ring widths with age; basal area correction (BAC) transforms diameter into basal area growth; regional curve standardization (RCS) detrends individual tree-ring series using average age/size trends; and size class isolation (SCI) calculates growth trends within separate size classes. First, we evaluated whether these GDMs produce consistent results when applied to an empirical tree-ring data set of Melia azedarach, a tropical tree species from Thailand. Three GDMs yielded similar results - a growth decline over time - but the widely used CD method did not detect any change. Second, we assessed the sensitivity (probability of correct growth-trend detection), reliability (100% minus the probability of detecting false trends) and accuracy (whether the strength of imposed trends is correctly detected) of these GDMs by applying them to simulated growth trajectories with different imposed trends: no trend, strong trends (-6% and +6% change per decade) and weak trends (-2%, +2%). All methods except CD showed high sensitivity, reliability and accuracy in detecting strong imposed trends. However, these were considerably lower in the weak and no-trend scenarios. BAC showed good sensitivity and accuracy, but low reliability, indicating uncertainty of trend detection using this method. Our study reveals that the choice of GDM influences the results of growth-trend studies. We recommend applying multiple methods when analysing trends and encourage performing sensitivity and reliability analyses. Finally, we recommend SCI and RCS, as these methods showed the highest reliability to detect long-term growth trends. © 2014 John Wiley & Sons Ltd.

  18. Label-enhanced surface plasmon resonance applied to label-free interaction analysis of small molecules and fragments.

    PubMed

    Eng, Lars; Nygren-Babol, Linnéa; Hanning, Anders

    2016-10-01

    Surface plasmon resonance (SPR) is a well-established method for studying interactions between small molecules and biomolecules. In particular, SPR is being increasingly applied within fragment-based drug discovery; however, within this application area, the limited sensitivity of SPR may constitute a problem. This problem can be circumvented by the use of label-enhanced SPR that shows a 100-fold higher sensitivity as compared with conventional SPR. Truly label-free interaction data for small molecules can be obtained by applying label-enhanced SPR in a surface competition assay format. The enhanced sensitivity is accompanied by an increased specificity and inertness toward disturbances (e.g., bulk refractive index disturbances). Label-enhanced SPR can be used for fragment screening in a competitive assay format; the competitive format has the added advantage of confirming the specificity of the molecular interaction. In addition, label-enhanced SPR extends the accessible kinetic regime of SPR to the analysis of very fast fragment binding kinetics. In this article, we demonstrate the working principles and benchmark the performance of label-enhanced SPR in a model system: the interaction between carbonic anhydrase II and a number of small-molecule sulfonamide-based inhibitors. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. An incremental strategy for calculating consistent discrete CFD sensitivity derivatives

    NASA Technical Reports Server (NTRS)

    Korivi, Vamshi Mohan; Taylor, Arthur C., III; Newman, Perry A.; Hou, Gene W.; Jones, Henry E.

    1992-01-01

    In this preliminary study involving advanced computational fluid dynamics (CFD) codes, an incremental formulation, also known as the 'delta' or 'correction' form, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For typical problems in 2D, a direct solution method can be applied to these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods appear to be needed for future 3D applications, however, because direct solver methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form result in certain difficulties, such as ill-conditioning of the coefficient matrix, which can be overcome when these equations are cast in the incremental form; these and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two laminar sample problems: (1) transonic flow through a double-throat nozzle; and (2) flow over an isolated airfoil.
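
    The incremental form is easy to demonstrate on a generic linear system: rather than solving A x = b outright, each pass solves an approximate correction equation M dx = r against the current residual r = b - A x and applies x <- x + dx. The Jacobi-style choice M = diag(A) below is purely illustrative, not the paper's CFD operator.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 50
      A = 4.0 * np.eye(n) + rng.normal(scale=0.05, size=(n, n))  # diagonally dominant
      b = rng.normal(size=n)

      x = np.zeros(n)
      M = np.diag(A)                          # cheap approximate operator
      for k in range(200):
          r = b - A @ x                       # residual drives the correction
          x += r / M                          # incremental ('delta' form) update
          if np.linalg.norm(r) < 1e-10:
              break
      print(k, np.linalg.norm(b - A @ x))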

  20. Non-parametric correlative uncertainty quantification and sensitivity analysis: Application to a Langmuir bimolecular adsorption model

    NASA Astrophysics Data System (ADS)

    Feng, Jinchao; Lansford, Joshua; Mironenko, Alexander; Pourkargar, Davood Babaei; Vlachos, Dionisios G.; Katsoulakis, Markos A.

    2018-03-01

    We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that ranking influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.

  1. Risk-sensitive reinforcement learning.

    PubMed

    Shen, Yun; Tobia, Michael J; Sommer, Tobias; Obermayer, Klaus

    2014-07-01

    We derive a family of risk-sensitive reinforcement learning methods for agents who face sequential decision-making tasks in uncertain environments. By applying a utility function to the temporal difference (TD) error, nonlinear transformations are effectively applied not only to the received rewards but also to the true transition probabilities of the underlying Markov decision process. When appropriate utility functions are chosen, the agents' behaviors express key features of human behavior as predicted by prospect theory (Kahneman & Tversky, 1979), for example, different risk preferences for gains and losses, as well as the shape of subjective probability curves. We derive a risk-sensitive Q-learning algorithm, which is necessary for modeling human behavior when transition probabilities are unknown, and prove its convergence. As a proof of principle for the applicability of the new framework, we apply it to quantify human behavior in a sequential investment task. We find that the risk-sensitive variant provides a significantly better fit to the behavioral data and that it leads to an interpretation of the subjects' responses that is indeed consistent with prospect theory. The analysis of simultaneously measured fMRI signals shows a significant correlation of the risk-sensitive TD error with BOLD signal change in the ventral striatum. In addition, we find a significant correlation of the risk-sensitive Q-values with neural activity in the striatum, cingulate cortex, and insula that is not present if standard Q-values are used.
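
    A stripped-down version of the update is sketched below: a piecewise-linear utility is applied to the TD error so that negative surprises are weighted more heavily (kappa > 0 gives risk aversion), and the agent learns to prefer a safe arm over a risky one of equal expected value. The bandit, utility shape, and parameters are illustrative assumptions, not the paper's task or exact algorithm.

      import numpy as np

      def utility(delta, kappa=0.4):          # prospect-theory-flavored weighting
          return (1 - kappa) * delta if delta > 0 else (1 + kappa) * delta

      rng = np.random.default_rng(8)
      Q = np.zeros(2)                         # one state, two actions
      alpha, eps = 0.1, 0.1
      for _ in range(5000):
          a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q))
          # Arm 0: certain +0.5; arm 1: +2 or -2 with equal probability
          r = 0.5 if a == 0 else float(rng.choice([2.0, -2.0]))
          delta = r - Q[a]                    # TD error (single step, no bootstrap)
          Q[a] += alpha * utility(delta)      # risk-sensitive update
      print(Q)  # the risk-averse agent values the safe arm above the risky one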

  2. Enhanced electrochemical nanoring electrode for analysis of cytosol in single cells.

    PubMed

    Zhuang, Lihong; Zuo, Huanzhen; Wu, Zengqiang; Wang, Yu; Fang, Danjun; Jiang, Dechen

    2014-12-02

    A microelectrode array has been applied for single-cell analysis with relatively high throughput; however, the cells were typically cultured on the microelectrodes under cell-size microwell traps, which makes it difficult to functionalize the electrode surface for higher detection sensitivity. Here, nanoring electrodes embedded under the microwell traps were fabricated to isolate the electrode surface from the cell support, so that the electrode surface can be modified to obtain enhanced electrochemical sensitivity for single-cell analysis. Moreover, the nanometer-sized electrode permitted faster diffusion of analyte to the surface for an additional improvement in sensitivity, which was evidenced by electrochemical characterization and simulation. To demonstrate the concept of the functionalized nanoring electrode for single-cell analysis, the electrode surface was deposited with Prussian blue to detect intracellular hydrogen peroxide at a single cell. Currents of hundreds of picoamperes were observed on our functionalized nanoring electrode, exhibiting the enhanced electrochemical sensitivity. The successful realization of a functionalized nanoring electrode will benefit the development of high-throughput single-cell electrochemical analysis.

  3. Sensitivity Analysis and Uncertainty Quantification for the LAMMPS Molecular Dynamics Simulation Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Picard, Richard Roy; Bhat, Kabekode Ghanasham

    2017-07-18

    We examine sensitivity analysis and uncertainty quantification for molecular dynamics simulation. Extreme (large or small) output values for the LAMMPS code often occur at the boundaries of input regions, and uncertainties in those boundary values are overlooked by common SA methods. Similarly, input values for which code outputs are consistent with calibration data can also occur near boundaries. Upon applying approaches in the literature for imprecise probabilities (IPs), much more realistic results are obtained than for the complacent application of standard SA and code calibration.

  4. Ion-Sensitive Field-Effect Transistor for Biological Sensing

    PubMed Central

    Lee, Chang-Soo; Kim, Sang Kyu; Kim, Moonil

    2009-01-01

    In recent years there has been great progress in applying FET-type biosensors for highly sensitive biological detection. Among them, the ISFET (ion-sensitive field-effect transistor) is one of the most intriguing approaches in electrical biosensing technology. Here, we review some of the main advances in this field over the past few years, explore its application prospects, and discuss the main issues, approaches, and challenges, with the aim of stimulating a broader interest in developing ISFET-based biosensors and extending their applications for reliable and sensitive analysis of various biomolecules such as DNA, proteins, enzymes, and cells. PMID:22423205

  5. Highly sensitive molecular diagnosis of prostate cancer using surplus material washed off from biopsy needles

    PubMed Central

    Bermudo, R; Abia, D; Mozos, A; García-Cruz, E; Alcaraz, A; Ortiz, Á R; Thomson, T M; Fernández, P L

    2011-01-01

    Introduction: Currently, final diagnosis of prostate cancer (PCa) is based on histopathological analysis of needle biopsies, but this process often bears uncertainties due to small sample size, tumour focality and pathologist's subjective assessment. Methods: Prostate cancer diagnostic signatures were generated by applying linear discriminant analysis to microarray and real-time RT–PCR (qRT–PCR) data from normal and tumoural prostate tissue samples. Additionally, after removal of biopsy tissues, material washed off from transrectal biopsy needles was used for molecular profiling and discriminant analysis. Results: Linear discriminant analysis applied to microarray data for a set of 318 genes differentially expressed between non-tumoural and tumoural prostate samples produced 26 gene signatures, which classified the 84 samples used with 100% accuracy. To identify signatures potentially useful for the diagnosis of prostate biopsies, surplus material washed off from routine biopsy needles from 53 patients was used to generate qRT–PCR data for a subset of 11 genes. This analysis identified a six-gene signature that correctly assigned the biopsies as benign or tumoural in 92.6% of the cases, with 88.8% sensitivity and 96.1% specificity. Conclusion: Surplus material from prostate needle biopsies can be used for minimal-size gene signature analysis for sensitive and accurate discrimination between non-tumoural and tumoural prostates, without interference with current diagnostic procedures. This approach could be a useful adjunct to current procedures in PCa diagnosis. PMID:22009027
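
    The discriminant step translates directly into a few lines of scikit-learn; the sketch below trains LDA on simulated expression values for a six-gene panel and reports cross-validated accuracy. The simulated data and effect size are assumptions, not the study's qRT-PCR measurements.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(4)
      n_benign, n_tumour, n_genes = 40, 40, 6
      X = np.vstack([rng.normal(0.0, 1.0, (n_benign, n_genes)),
                     rng.normal(1.2, 1.0, (n_tumour, n_genes))])  # shifted tumours
      y = np.r_[np.zeros(n_benign), np.ones(n_tumour)]

      lda = LinearDiscriminantAnalysis()
      acc = cross_val_score(lda, X, y, cv=5).mean()
      print(f"cross-validated accuracy of the simulated 6-gene signature: {acc:.2f}")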

  6. Optical skin friction measurement technique in hypersonic wind tunnel

    NASA Astrophysics Data System (ADS)

    Chen, Xing; Yao, Dapeng; Wen, Shuai; Pan, Junjie

    2016-10-01

    Shear-sensitive liquid-crystal coatings (SSLCCs) have the optical characteristic of being sensitive to applied shear stress. Based on this, a novel technique is developed to measure both the magnitude and the direction of the shear stress on a model surface in hypersonic flow. An optical skin-friction measurement system was built at the China Academy of Aerospace Aerodynamics (CAAA), and a series of experiments on a hypersonic vehicle was performed in a CAAA wind tunnel. The global skin-friction distribution of the model, which reveals complicated flow structures, is discussed, and a brief mechanism analysis and an evaluation of the optical measurement technique are given.

  7. Ecological Sensitivity Evaluation of Tourist Region Based on Remote Sensing Image - Taking Chaohu Lake Area as a Case Study

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Li, W. J.; Yu, J.; Wu, C. Z.

    2018-04-01

    Remote sensing technology offers significant advantages for monitoring and analysing the ecological environment. Using automatic extraction algorithms, various environmental resource information for a tourist region can be obtained from remote sensing imagery; combined with GIS spatial analysis and landscape pattern analysis, this information can be quantitatively analysed and interpreted. In this study, taking the Chaohu Lake Basin as an example, a Landsat-8 multi-spectral satellite image from October 2015 was used. Integrating the automatic ELM (Extreme Learning Machine) classification results with digital elevation model and slope data, the following factors were used to construct an eco-sensitivity evaluation index based on AHP and overlay analysis: human disturbance degree, land use degree, primary productivity, landscape evenness, vegetation coverage, DEM, slope and normalized water body index. According to the value of the eco-sensitivity evaluation index, using equal-interval reclassification in GIS, the Chaohu Lake area was divided into four grades: very sensitive, sensitive, sub-sensitive and insensitive. The eco-sensitivity analysis shows the following: the very sensitive area covered 4577.4378 km2, accounting for about 33.12%; the sensitive area covered 5130.0522 km2, accounting for about 37.12%; the sub-sensitive area covered 3729.9312 km2, accounting for about 26.99%; and the insensitive area covered 382.4399 km2, accounting for about 2.77%. At the same time, spatial differences in the ecological sensitivity of the Chaohu Lake basin were found. The most sensitive areas were mainly located in areas with high elevation and steep terrain; insensitive areas were mainly distributed over gently sloping platform areas; and the sensitive and sub-sensitive areas were mainly agricultural land and woodland. Through the eco-sensitivity analysis of the study area, automatic recognition and analysis techniques for remote sensing imagery are integrated into ecological analysis and ecological regional planning, which can provide a reliable scientific basis for rational planning and sustainable development of the Chaohu Lake tourist area.
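
    The AHP-plus-overlay step lends itself to a compact sketch. The Python fragment below assumes factor rasters already normalized so that larger values mean higher sensitivity, and uses invented AHP weights (not the paper's); it reproduces the equal-interval reclassification into four grades:

        import numpy as np

        rng = np.random.default_rng(0)
        names = ["disturbance", "land_use", "productivity", "evenness",
                 "vegetation", "dem", "slope", "ndwi"]
        factors = {k: rng.random((200, 200)) for k in names}   # normalized rasters
        weights = dict(zip(names, [0.20, 0.15, 0.10, 0.05,
                                   0.15, 0.10, 0.15, 0.10]))  # assumed AHP weights

        index = sum(weights[k] * factors[k] for k in names)    # overlay analysis

        # equal-interval reclassification into four sensitivity grades
        edges = np.linspace(index.min(), index.max(), 5)
        grades = np.digitize(index, edges[1:-1])   # 0..3: insensitive..very sensitive
        shares = np.bincount(grades.ravel(), minlength=4) / grades.size
        for label, share in zip(["insensitive", "sub-sensitive",
                                 "sensitive", "very sensitive"], shares):
            print(f"{label}: {share:.1%}")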

  8. A blocking primer increases specificity in environmental DNA detection of bull trout (Salvelinus confluentus)

    Treesearch

    Taylor M. Wilcox; Michael K. Schwartz; Kevin S. McKelvey; Michael K. Young; Winsor H. Lowe

    2014-01-01

    Environmental DNA (eDNA) is increasingly applied as a highly sensitive way to detect aquatic animals non-invasively. However, distinguishing closely related taxa can be particularly challenging. Previous studies of ancient DNA and genetic diet analysis have used blocking primers to enrich target template in the presence of abundant, non-target DNA. Here we apply a...

  9. Convergence Estimates for Multidisciplinary Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    Arian, Eyal

    1997-01-01

    A quantitative analysis of coupling between systems of equations is introduced. This analysis is then applied to problems in multidisciplinary analysis, sensitivity, and optimization. For the sensitivity and optimization problems both multidisciplinary and single discipline feasibility schemes are considered. In all these cases a "convergence factor" is estimated in terms of the Jacobians and Hessians of the system, thus it can also be approximated by existing disciplinary analysis and optimization codes. The convergence factor is identified with the measure for the "coupling" between the disciplines in the system. Applications to algorithm development are discussed. Demonstration of the convergence estimates and numerical results are given for a system composed of two non-linear algebraic equations, and for a system composed of two PDEs modeling aeroelasticity.
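
    The "convergence factor" idea can be made concrete with a toy two-discipline fixed point (the functions below are invented stand-ins, not the paper's aeroelastic system): the factor is estimated from the product of the cross-coupling Jacobians at the coupled solution, and the iteration converges when it is below one.

        import numpy as np

        f1 = lambda x2: 0.5 * np.tanh(x2) + 1.0   # "discipline 1": x1 = f1(x2)
        f2 = lambda x1: 0.3 * x1**2 - 0.2         # "discipline 2": x2 = f2(x1)

        def deriv(f, x, h=1e-6):                  # scalar finite-difference Jacobian
            return (f(x + h) - f(x - h)) / (2 * h)

        x1, x2 = 0.0, 0.0                         # block Gauss-Seidel iteration
        for _ in range(50):
            x1, x2 = f1(x2), f2(x1)

        rho = abs(deriv(f1, x2) * deriv(f2, x1))  # coupling / convergence factor
        print(f"estimated convergence factor: {rho:.3f} (converges if < 1)")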

  10. Probabilistic Sensitivity Analysis for Launch Vehicles with Varying Payloads and Adapters for Structural Dynamics and Loads

    NASA Technical Reports Server (NTRS)

    McGhee, David S.; Peck, Jeff A.; McDonald, Emmett J.

    2012-01-01

    This paper examines Probabilistic Sensitivity Analysis (PSA) methods and tools in an effort to understand their utility in vehicle loads and dynamic analysis. Specifically, this study addresses how these methods may be used to establish limits on payload mass and cg location and requirements on adapter stiffnesses while maintaining vehicle loads and frequencies within established bounds. To this end, PSA methods and tools are applied to a realistic, but manageable, integrated launch vehicle analysis where payload and payload adapter parameters are modeled as random variables. This analysis is used to study both Regional Response PSA (RRPSA) and Global Response PSA (GRPSA) methods, with a primary focus on sampling based techniques. For contrast, some MPP (most probable point) based approaches are also examined.

  11. Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Eleshaky, Mohamed E.

    1991-01-01

    A new and efficient method is presented for aerodynamic design optimization, which is based on a computational fluid dynamics (CFD)-sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with the optimization procedure, which uses a constrained minimization method. The sensitivity coefficients, i.e., gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute-force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analysis results for the demonstrative example are compared with experimental data. It is shown that the method is more efficient than the traditional methods.
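
    The quasi-analytical sensitivity idea — differentiate through the converged residual instead of re-running the solver — can be sketched in a few lines. The linear residual and quadratic objective below are toy stand-ins for the Euler solve and the thrust objective:

        import numpy as np

        # For R(u, b) = 0 and objective F(u, b), the quasi-analytical gradient is
        #   dF/db = dF/db|_u - (dF/du) inv(dR/du) (dR/db)
        A = np.array([[2.0, -1.0], [0.0, 3.0]])
        F = lambda u, b: u @ u + 0.1 * b                  # objective ("thrust")

        b = 1.5
        u = np.linalg.solve(A, np.array([b, b**2]))       # converged "flow" state

        dRdu, dRdb = A, -np.array([1.0, 2 * b])
        dFdu, dFdb = 2 * u, 0.1
        grad_qa = dFdb - dFdu @ np.linalg.solve(dRdu, dRdb)

        h = 1e-6                                          # brute-force comparison
        u_h = np.linalg.solve(A, np.array([b + h, (b + h)**2]))
        grad_fd = (F(u_h, b + h) - F(u, b)) / h
        print(grad_qa, grad_fd)                           # agree to ~1e-5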

  12. Design sensitivity analysis of rotorcraft airframe structures for vibration reduction

    NASA Technical Reports Server (NTRS)

    Murthy, T. Sreekanta

    1987-01-01

    Optimization of rotorcraft structures for vibration reduction was studied. The objective of this study is to develop practical computational procedures for structural optimization of airframes subject to steady-state vibration response constraints. One of the key elements of any such computational procedure is design sensitivity analysis. A method for design sensitivity analysis of airframes under vibration response constraints is presented. The mathematical formulation of the method and its implementation as a new solution sequence in MSC/NASTRAN are described. The results of the application of the method to a simple finite element stick model of the AH-1G helicopter airframe are presented and discussed. Selection of design variables that are most likely to bring about changes in the response at specified locations in the airframe is based on consideration of forced response strain energy. Sensitivity coefficients are determined for the selected design variable set. Constraints on the natural frequencies are also included in addition to the constraints on the steady-state response. Sensitivity coefficients for these constraints are determined. Results of the analysis and insights gained in applying the method to the airframe model are discussed. The general nature of future work to be conducted is described.

  13. Application of a sensitivity analysis technique to high-order digital flight control systems

    NASA Technical Reports Server (NTRS)

    Paduano, James D.; Downing, David R.

    1987-01-01

    A sensitivity analysis technique for multiloop flight control systems is studied. This technique uses the scaled singular values of the return difference matrix as a measure of the relative stability of a control system. It then uses the gradients of these singular values with respect to system and controller parameters to judge sensitivity. The sensitivity analysis technique is first reviewed; then it is extended to include digital systems, through the derivation of singular-value gradient equations. Gradients with respect to parameters which do not appear explicitly as control-system matrix elements are also derived, so that high-order systems can be studied. A complete review of the integrated technique is given by way of a simple example: the inverted pendulum problem. The technique is then demonstrated on the X-29 control laws. Results show linear models of real systems can be analyzed by this sensitivity technique, if it is applied with care. A computer program called SVA was written to accomplish the singular-value sensitivity analysis techniques. Thus computational methods and considerations form an integral part of many of the discussions. A user's guide to the program is included. The SVA is a fully public domain program, running on the NASA/Dryden Elxsi computer.
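
    A minimal sketch of the singular-value stability measure is given below; the two-by-two open-loop transfer matrix is invented, not the X-29 loop, and the gradient with respect to a gain is taken by central finite difference rather than the report's analytic gradient equations:

        import numpy as np

        def L(w, k):                         # toy open-loop transfer matrix L(jw)
            s = 1j * w
            return k * np.array([[1/(s + 1), 0.2/(s + 2)],
                                 [0.1/(s + 3), 1/(s + 0.5)]])

        def min_sv(w, k):                    # smallest singular value of I + L(jw)
            return np.linalg.svd(np.eye(2) + L(w, k), compute_uv=False)[-1]

        k, w_grid = 2.0, np.logspace(-2, 2, 400)
        sigma = np.array([min_sv(w, k) for w in w_grid])
        i = sigma.argmin()                   # frequency of least relative stability

        h = 1e-6
        grad = (min_sv(w_grid[i], k + h) - min_sv(w_grid[i], k - h)) / (2 * h)
        print(f"min sigma(I+L) = {sigma[i]:.3f} at w = {w_grid[i]:.2f} rad/s, "
              f"d sigma / dk = {grad:+.3f}")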

  14. Global sensitivity analysis of multiscale properties of porous materials

    NASA Astrophysics Data System (ADS)

    Um, Kimoon; Zhang, Xuan; Katsoulakis, Markos; Plechac, Petr; Tartakovsky, Daniel M.

    2018-02-01

    Ubiquitous uncertainty about pore geometry inevitably undermines the veracity of pore- and multi-scale simulations of transport phenomena in porous media. It raises two fundamental issues: sensitivity of effective material properties to pore-scale parameters and statistical parameterization of Darcy-scale models that accounts for pore-scale uncertainty. Homogenization-based maps of pore-scale parameters onto their Darcy-scale counterparts facilitate both sensitivity analysis (SA) and uncertainty quantification. We treat uncertain geometric characteristics of a hierarchical porous medium as random variables to conduct global SA and to derive probabilistic descriptors of effective diffusion coefficients and effective sorption rate. Our analysis is formulated in terms of solute transport diffusing through a fluid-filled pore space, while sorbing to the solid matrix. Yet it is sufficiently general to be applied to other multiscale porous media phenomena that are amenable to homogenization.

  15. Skeletal Mechanism Generation of Surrogate Jet Fuels for Aeropropulsion Modeling

    NASA Astrophysics Data System (ADS)

    Sung, Chih-Jen; Niemeyer, Kyle E.

    2010-05-01

    A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with skeletal reductions of two important hydrocarbon components, n-heptane and n-decane, relevant to surrogate jet fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species and then using sensitivity analysis to remove further unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each previous method, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal.

  16. Sensitive and comprehensive analysis of O-glycosylation in biotherapeutics: a case study of novel erythropoiesis stimulating protein.

    PubMed

    Kim, Unyong; Oh, Myung Jin; Seo, Youngsuk; Jeon, Yinae; Eom, Joon-Ho; An, Hyun Joo

    2017-09-01

    Glycosylation of recombinant human erythropoietins (rhEPOs) is significantly associated with the drug's quality and potency. Thus, comprehensive characterization of glycosylation is vital to assess biotherapeutic quality and establish the equivalency of biosimilar rhEPOs. However, current glycan analysis mainly focuses on the N-glycans, due to the absence of analytical tools to liberate O-glycans with high sensitivity. We developed a selective and sensitive method to profile native O-glycans on rhEPOs. O-glycosylation on rhEPO, including O-acetylation on a sialic acid, was comprehensively characterized. Details such as the O-glycan structure and the O-acetyl modification site were obtained from tandem MS. This method may be applied to QC and batch analysis not only of rhEPOs but also of other biotherapeutics bearing multiple O-glycosylations.

  17. Micropollutants throughout an integrated urban drainage model: Sensitivity and uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Mannina, Giorgio; Cosenza, Alida; Viviani, Gaspare

    2017-11-01

    The paper presents the sensitivity and uncertainty analysis of an integrated urban drainage model which includes micropollutants. Specifically, a bespoke integrated model developed in previous studies has been modified in order to include the micropollutant assessment (namely, sulfamethoxazole - SMX). The model also takes into account the interactions between the three components of the system: sewer system (SS), wastewater treatment plant (WWTP) and receiving water body (RWB). The analysis has been applied to an experimental catchment near Palermo (Italy): the Nocella catchment. Overall, five scenarios, each characterized by a different combination of sub-system uncertainties (i.e., SS, WWTP and RWB), have been considered, applying the Extended-FAST method for the sensitivity analysis in order to select the key factors affecting the RWB quality and to design a reliable/useful experimental campaign. Results have demonstrated that sensitivity analysis is a powerful tool for increasing operator confidence in the modelling results. The approach adopted here can be used for fixing (blocking) non-identifiable factors, thus wisely modifying the structure of the model and reducing the related uncertainty. The model factors related to the SS have been found to be the most relevant factors affecting SMX modelling in the RWB when all model factors (scenario 1) or the model factors of the SS (scenarios 2 and 3) are varied. If only the factors related to the WWTP are changed (scenarios 4 and 5), the SMX concentration in the RWB is mainly influenced (up to 95% of the total variance for S_SMX,max) by the aerobic sorption coefficient. A progressive uncertainty reduction from upstream to downstream was found for the soluble fraction of SMX in the RWB.
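
    For readers wanting to reproduce the variance-based screening step, the Extended-FAST analysis maps naturally onto the SALib package (assumed available); the three-factor toy model and factor names below are invented stand-ins for the integrated drainage model:

        import numpy as np
        from SALib.sample import fast_sampler
        from SALib.analyze import fast

        problem = {"num_vars": 3,
                   "names": ["sorption_coeff", "decay_rate", "runoff_coeff"],
                   "bounds": [[0.1, 1.0], [0.01, 0.5], [0.2, 0.9]]}

        def model(X):        # hypothetical stand-in for the integrated model
            return X[:, 0] * np.exp(-X[:, 1]) + 0.3 * X[:, 2]

        X = fast_sampler.sample(problem, 1000)   # eFAST design
        Si = fast.analyze(problem, model(X))     # first-order and total indices
        print(Si["S1"], Si["ST"])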

  18. Examining the accuracy of the infinite order sudden approximation using sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Eno, Larry; Rabitz, Herschel

    1981-08-01

    A method is developed for assessing the accuracy of scattering observables calculated within the framework of the infinite order sudden (IOS) approximation. In particular, we focus on the energy sudden assumption of the IOS method and our approach involves the determination of the sensitivity of the IOS scattering matrix S^IOS with respect to a parameter which reintroduces the internal energy operator h_0 into the IOS Hamiltonian. This procedure is an example of sensitivity analysis of missing model components (h_0 in this case) in the reference Hamiltonian. In contrast to simple first-order perturbation theory a finite result is obtained for the effect of h_0 on S^IOS. As an illustration, our method of analysis is applied to integral state-to-state cross sections for the scattering of an atom and rigid rotor. Results are generated within the He+H_2 system and a comparison is made between IOS and coupled states cross sections and the corresponding IOS sensitivities. It is found that the sensitivity coefficients are very useful indicators of the accuracy of the IOS results. Finally, further developments and applications are discussed.

  19. Sensitivity field distributions for segmental bioelectrical impedance analysis based on real human anatomy

    NASA Astrophysics Data System (ADS)

    Danilov, A. A.; Kramarenko, V. K.; Nikolaev, D. V.; Rudnev, S. G.; Salamatova, V. Yu; Smirnov, A. V.; Vassilevski, Yu V.

    2013-04-01

    In this work, an adaptive unstructured tetrahedral mesh generation technology is applied to the simulation of segmental bioimpedance measurements using a high-resolution whole-body model of the Visible Human Project man. Sensitivity field distributions are obtained for a conventional tetrapolar configuration as well as for eight- and ten-electrode measurement configurations. Based on the ten-electrode configuration, we suggest an algorithm for monitoring changes in the upper lung area.

  20. Civil and mechanical engineering applications of sensitivity analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Komkov, V.

    1985-07-01

    In this largely tutorial presentation, the historical development of optimization theories is outlined as applied to mechanical and civil engineering design, and the development of modern sensitivity techniques during the last 20 years is traced. Some of the difficulties encountered, and the progress made in overcoming them, are outlined. Several recently developed theoretical methods are stressed to indicate their importance to computer-aided design technology.

  1. Identification of stochastic interactions in nonlinear models of structural mechanics

    NASA Astrophysics Data System (ADS)

    Kala, Zdeněk

    2017-07-01

    In this paper, a polynomial approximation is presented by which Sobol sensitivity analysis can be evaluated with all sensitivity indices. The nonlinear FEM model is approximated, and the input domain is mapped using simulation runs of the Latin Hypercube Sampling method. The domain of the approximation polynomial is chosen so that a large number of simulation runs of the Latin Hypercube Sampling method can be applied. The method presented also makes it possible to evaluate higher-order sensitivity indices, which could not be identified from the nonlinear FEM model directly.
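
    A minimal sketch of the surrogate idea — fit a polynomial to Latin Hypercube runs of the expensive model, then estimate Sobol indices on the cheap polynomial — is given below; the smooth test function is an invented stand-in for the nonlinear FEM model:

        import numpy as np
        from scipy.stats import qmc

        def fem_model(X):    # invented stand-in for the expensive nonlinear FEM
            return X[:, 0] + 2 * X[:, 1]**2 + 0.5 * X[:, 0] * X[:, 2]

        d = 3
        X = qmc.scale(qmc.LatinHypercube(d=d, seed=0).random(200), [-1]*d, [1]*d)

        def features(X):     # quadratic polynomial with pairwise interactions
            cols = [np.ones(len(X))]
            cols += [X[:, i] for i in range(d)] + [X[:, i]**2 for i in range(d)]
            cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i + 1, d)]
            return np.column_stack(cols)

        coef, *_ = np.linalg.lstsq(features(X), fem_model(X), rcond=None)
        surrogate = lambda X: features(X) @ coef

        # Saltelli-style first-order Sobol estimates on the cheap surrogate
        rng = np.random.default_rng(1)
        N = 20000
        A, B = rng.uniform(-1, 1, (N, d)), rng.uniform(-1, 1, (N, d))
        fA, fB = surrogate(A), surrogate(B)
        V = np.var(np.concatenate([fA, fB]))
        for i in range(d):
            ABi = A.copy(); ABi[:, i] = B[:, i]
            print(f"S{i+1} = {np.mean(fB * (surrogate(ABi) - fA)) / V:.2f}")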

  2. Phase quantification by X-ray photoemission valence band analysis applied to mixed phase TiO2 powders

    NASA Astrophysics Data System (ADS)

    Breeson, Andrew C.; Sankar, Gopinathan; Goh, Gregory K. L.; Palgrave, Robert G.

    2017-11-01

    A method of quantitative phase analysis using valence band X-ray photoelectron spectra is presented and applied to the analysis of TiO2 anatase-rutile mixtures. The valence band spectra of pure TiO2 polymorphs were measured, and these spectral shapes used to fit valence band spectra from mixed phase samples. Given the surface sensitive nature of the technique, this yields a surface phase fraction. Mixed phase samples were prepared from high and low surface area anatase and rutile powders. In the samples studied here, the surface phase fraction of anatase was found to be linearly correlated with photocatalytic activity of the mixed phase samples, even for samples with very different anatase and rutile surface areas. We apply this method to determine the surface phase fraction of P25 powder. This method may be applied to other systems where a surface phase fraction is an important characteristic.
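
    The valence-band fitting step is essentially constrained linear unmixing, which can be sketched with non-negative least squares. The Gaussian "reference spectra" below are invented stand-ins for measured pure anatase and rutile valence bands; a real workflow would add background subtraction and intensity normalization:

        import numpy as np
        from scipy.optimize import nnls

        E = np.linspace(0, 12, 600)                  # binding-energy grid (eV)
        g = lambda mu, s: np.exp(-(E - mu)**2 / (2 * s**2))
        anatase = g(5.0, 1.2) + 0.6 * g(7.5, 1.0)    # stand-ins for measured
        rutile  = g(4.5, 1.0) + 0.9 * g(8.0, 1.3)    # pure-phase spectra

        mixed = 0.7 * anatase + 0.3 * rutile         # "unknown" sample spectrum
        mixed += np.random.default_rng(0).normal(0, 0.01, E.size)

        R = np.column_stack([anatase, rutile])
        w, _ = nnls(R, mixed)                        # non-negative phase weights
        print(f"anatase surface fraction = {w[0] / w.sum():.2f}")   # ~0.70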

  3. Estimating causal contrasts involving intermediate variables in the presence of selection bias.

    PubMed

    Valeri, Linda; Coull, Brent A

    2016-11-20

    An important goal across the biomedical and social sciences is the quantification of the role of intermediate factors in explaining how an exposure exerts an effect on an outcome. Selection bias has the potential to severely undermine the validity of inferences on direct and indirect causal effects in observational as well as in randomized studies. The phenomenon of selection may arise through several mechanisms, and we here focus on instances of missing data. We study the sign and magnitude of selection bias in the estimates of direct and indirect effects when data on any of the factors involved in the analysis is either missing at random or not missing at random. Under some simplifying assumptions, the bias formulae can lead to nonparametric sensitivity analyses. These sensitivity analyses can be applied to causal effects on the risk difference and risk-ratio scales irrespectively of the estimation approach employed. To incorporate parametric assumptions, we also develop a sensitivity analysis for selection bias in mediation analysis in the spirit of the expectation-maximization algorithm. The approaches are applied to data from a health disparities study investigating the role of stage at diagnosis on racial disparities in colorectal cancer survival. Copyright © 2016 John Wiley & Sons, Ltd.

  4. Four dimensional data assimilation (FDDA) impacts on WRF performance in simulating inversion layer structure and distributions of CMAQ-simulated winter ozone concentrations in Uintah Basin

    NASA Astrophysics Data System (ADS)

    Tran, Trang; Tran, Huy; Mansfield, Marc; Lyman, Seth; Crosman, Erik

    2018-03-01

    Four-dimensional data assimilation (FDDA) was applied in WRF-CMAQ model sensitivity tests to study the impact of observational and analysis nudging on model performance in simulating inversion layers and O3 concentration distributions within the Uintah Basin, Utah, U.S.A. in winter 2013. Observational nudging substantially improved WRF model performance in simulating surface wind fields, correcting a 10 °C warm surface temperature bias, correcting overestimation of the planetary boundary layer height (PBLH) and correcting underestimation of inversion strengths produced by regular WRF model physics without nudging. However, the combined effects of poor performance of WRF meteorological model physical parameterization schemes in simulating low clouds, and warm and moist biases in the temperature and moisture initialization and subsequent simulation fields, likely amplified the overestimation of warm clouds during inversion days when observational nudging was applied, impacting the resulting O3 photochemical formation in the chemistry model. To reduce the impact of a moist bias in the simulations on warm cloud formation, nudging with the analysis water mixing ratio above the planetary boundary layer (PBL) was applied. However, due to poor analysis vertical temperature profiles, applying analysis nudging also increased the errors in the modeled inversion layer vertical structure compared to observational nudging. Combining both observational and analysis nudging methods resulted in unrealistically extreme stratified stability that trapped pollutants at the lowest elevations at the center of the Uintah Basin and yielded the worst WRF performance in simulating inversion layer structure among the four sensitivity tests. The results of this study illustrate the importance of carefully considering the representativeness and quality of the observational and model analysis data sets when applying nudging techniques within stable PBLs, and the need to evaluate model results on a basin-wide scale.

  5. Application of a cholesterol stationary phase in the analysis of phosphorothioate oligonucleotides by means of ion pair chromatography coupled with tandem mass spectrometry.

    PubMed

    Studzińska, Sylwia; Krzemińska, Katarzyna; Szumski, Michał; Buszewski, Bogusław

    2016-07-01

    The main aim of this study was to investigate the influence of several ion pair reagents on both the retention and the mass spectrometry sensitivity of phosphorothioate oligonucleotides. A cholesterol stationary phase was applied for the first time to the analysis of this group of compounds. The mobile phase composition was modified by changing the concentration and the type of amines and acetates or 1,1,1,3,3,3-hexafluoroisopropanol. It has been shown that increasing the amine concentration increases the retention factor for each oligonucleotide on each adsorbent. The only exception was the mobile phase composed of triethylamine and 1,1,1,3,3,3-hexafluoroisopropanol, a consequence of interactions taking place between the cholesterol molecule and the alcohol. This effect was advantageous when mass spectrometry detection was applied, since it increased the sensitivity. Moreover, the optimization of the mobile phase composition and its impact on the efficiency of the ionization process and on the sensitivity in mass spectrometry are also presented. The optimized method, based on a cholesterol stationary phase coupled with mass spectrometry detection, was finally applied to the determination of phosphorothioate oligonucleotide impurities in a real sample. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Application of support vector machine method for the analysis of absorption spectra of exhaled air of patients with broncho-pulmonary diseases

    NASA Astrophysics Data System (ADS)

    Bukreeva, Ekaterina B.; Bulanova, Anna A.; Kistenev, Yury V.; Kuzmin, Dmitry A.; Tuzikov, Sergei A.; Yumov, Evgeny L.

    2014-11-01

    The results of the joint use of laser photoacoustic spectroscopy and chemometrics methods in the gas analysis of exhaled air of patients with respiratory diseases (chronic obstructive pulmonary disease, pneumonia and lung cancer) are presented. The absorption spectra of the exhaled breath of all volunteers were measured, classification methods were applied to the absorption spectrum scans, and the sensitivity/specificity of the classification results was determined. Pairwise nosological classification results were obtained for all investigated volunteers, together with sensitivity and specificity indices.

  7. Elemental Analysis in Biological Matrices Using ICP-MS.

    PubMed

    Hansen, Matthew N; Clogston, Jeffrey D

    2018-01-01

    The increasing exploration of metallic nanoparticles for use as cancer therapeutic agents necessitates a sensitive technique to track the clearance and distribution of the material once introduced into a living system. Inductively coupled plasma mass spectrometry (ICP-MS) provides a sensitive and selective tool for tracking the distribution of metal components from these nanotherapeutics. This chapter presents a standardized method for processing biological matrices, ensuring complete homogenization of tissues, and outlines the preparation of appropriate standards and controls. The method described herein utilized gold nanoparticle-treated samples; however, the method can easily be applied to the analysis of other metals.

  8. Fluorescence-labeled methylation-sensitive amplified fragment length polymorphism (FL-MS-AFLP) analysis for quantitative determination of DNA methylation and demethylation status.

    PubMed

    Kageyama, Shinji; Shinmura, Kazuya; Yamamoto, Hiroko; Goto, Masanori; Suzuki, Koichi; Tanioka, Fumihiko; Tsuneyoshi, Toshihiro; Sugimura, Haruhiko

    2008-04-01

    The PCR-based DNA fingerprinting method called the methylation-sensitive amplified fragment length polymorphism (MS-AFLP) analysis is used for genome-wide scanning of methylation status. In this study, we developed a method of fluorescence-labeled MS-AFLP (FL-MS-AFLP) analysis by applying a fluorescence-labeled primer and fluorescence-detecting electrophoresis apparatus to the existing method of MS-AFLP analysis. The FL-MS-AFLP analysis enables quantitative evaluation of more than 350 random CpG loci per run. It was shown to allow evaluation of the differences in methylation level of blood DNA of gastric cancer patients and evaluation of hypermethylation and hypomethylation in DNA from gastric cancer tissue in comparison with adjacent non-cancerous tissue.

  9. Study of water based nanofluid flows in annular tubes using numerical simulation and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Siadaty, Moein; Kazazi, Mohsen

    2018-04-01

    Convective heat transfer, entropy generation and pressure drop of two water-based nanofluids (Cu-water and Al2O3-water) in horizontal annular tubes are scrutinized by means of computational fluid dynamics (CFD), response surface methodology (RSM) and sensitivity analysis. First, central composite design is used to set up a series of numerical experiments over the diameter ratio, length-to-diameter ratio, Reynolds number and solid volume fraction. Then, CFD is used to calculate the Nusselt number, Euler number and entropy generation. After that, RSM is applied to fit second-order polynomials to the responses. Finally, sensitivity analysis is conducted to assess the influence of the above-mentioned parameters inside the tube. In total, 62 different cases are examined. CFD results show that Cu-water and Al2O3-water have the highest and lowest heat transfer rates, respectively. In addition, analysis of variance indicates that an increase in solid volume fraction increases the dimensionless pressure drop for Al2O3-water; moreover, it has a significant negative effect on the Cu-water Nusselt number and an insignificant effect on the Cu-water Euler number. Analysis of the Bejan number indicates that frictional and thermal entropy generation are the dominant irreversibilities in Al2O3-water and Cu-water flows, respectively. Sensitivity analysis indicates that the sensitivity of the dimensionless pressure drop to tube length for Cu-water is independent of the diameter ratio at different Reynolds numbers.

  10. Instructional Psychology 1976 - 1981,

    DTIC Science & Technology

    1982-06-01

    business it is to carry out applied work in the design of instructional content and delivery. These organizations include specialized divisions of... "learning disabilities" label: An experimental analysis. Contemporary Educational Psychology, 1977, 2, 292-297. Allington, R. L. Sensitivity to

  11. Global Sensitivity Applied to Dynamic Combined Finite Discrete Element Methods for Fracture Simulation

    NASA Astrophysics Data System (ADS)

    Godinez, H. C.; Rougier, E.; Osthus, D.; Srinivasan, G.

    2017-12-01

    Fracture propagation plays a key role in a number of applications of interest to the scientific community. From dynamic fracture processes such as spall and fragmentation in metals to the detection of gas flow in static fractures in rock and the subsurface, the dynamics of fracture propagation is important to various engineering and scientific disciplines. In this work we apply a global sensitivity analysis to the Hybrid Optimization Software Suite (HOSS), a multi-physics software tool based on the combined finite-discrete element method that is used to describe material deformation and failure (i.e., fracture and fragmentation) under a number of user-prescribed boundary conditions. We explore the sensitivity of HOSS to various model parameters that influence how fractures are propagated through a material of interest. The parameters control the softening curve that the model relies on to determine fractures within each element in the mesh, as well as other internal parameters that influence fracture behavior. The sensitivity method we apply is the Fourier Amplitude Sensitivity Test (FAST), a global sensitivity method used to explore how each parameter influences the model fractures and to determine the key model parameters with the most impact on the model. We present several sensitivity experiments for different combinations of model parameters and compare against experimental data for verification.

  12. Renewable Energy Deployment in Colorado and the West: A Modeling Sensitivity and GIS Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrows, Clayton; Mai, Trieu; Haase, Scott

    2016-03-01

    The Resource Planning Model is a capacity expansion model designed for a regional power system, such as a utility service territory, state, or balancing authority. We apply a geospatial analysis to Resource Planning Model renewable energy capacity expansion results to understand the likelihood of renewable development on various lands within Colorado.

  13. Sobol' sensitivity analysis of NAPL-contaminated aquifer remediation process based on multiple surrogates

    NASA Astrophysics Data System (ADS)

    Luo, Jiannan; Lu, Wenxi

    2014-06-01

    Sobol' sensitivity analyses based on different surrogates were performed on a trichloroethylene (TCE)-contaminated aquifer to assess the sensitivity of the design variables of remediation duration, surfactant concentration and injection rates at four wells to remediation efficiency. First, the surrogate models of a multi-phase flow simulation model were constructed by applying radial basis function artificial neural network (RBFANN) and Kriging methods, and the two models were then compared. Based on the developed surrogate models, the Sobol' method was used to calculate the sensitivity indices of the design variables which affect the remediation efficiency. The coefficient of determination (R2) and the mean square error (MSE) of these two surrogate models demonstrated that both models had acceptable approximation accuracy; furthermore, the approximation accuracy of the Kriging model was slightly better than that of the RBFANN model. Sobol' sensitivity analysis results demonstrated that the remediation duration was the most important variable influencing remediation efficiency, followed by the rates of injection at wells 1 and 3, while the rates of injection at wells 2 and 4 and the surfactant concentration had negligible influence on remediation efficiency. In addition, high-order sensitivity indices were all smaller than 0.01, which indicates that interaction effects of these six factors were practically insignificant. The proposed surrogate-based Sobol' sensitivity analysis is an effective tool for calculating sensitivity indices, because it shows the relative contribution of the design variables (individually and in interaction) to the output performance variability with a limited number of runs of a computationally expensive simulation model. The sensitivity analysis results lay a foundation for optimizing the groundwater remediation process.
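
    The surrogate-then-Sobol' pipeline is easy to prototype with scikit-learn and SALib (both assumed available); the quadratic toy function below stands in for the multi-phase flow simulator, and a Gaussian-process regressor plays the role of the Kriging surrogate:

        import numpy as np
        from scipy.stats import qmc
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        def simulator(X):    # hypothetical stand-in for the multi-phase flow model
            return X[:, 0]**2 + X[:, 0] * X[:, 1] + 0.1 * X[:, 2]

        problem = {"num_vars": 3,
                   "names": ["duration", "concentration", "rate"],
                   "bounds": [[0, 1]] * 3}

        # train the Kriging (Gaussian-process) surrogate on a small LHS design
        Xt = qmc.LatinHypercube(d=3, seed=0).random(80)
        gp = GaussianProcessRegressor(kernel=RBF(0.5)).fit(Xt, simulator(Xt))

        # run the Sobol' analysis on the cheap surrogate, not the simulator
        X = saltelli.sample(problem, 1024)
        Si = sobol.analyze(problem, gp.predict(X))
        print(dict(zip(problem["names"], Si["S1"].round(2))))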

  14. Anisotropic analysis for seismic sensitivity of groundwater monitoring wells

    NASA Astrophysics Data System (ADS)

    Pan, Y.; Hsu, K.

    2011-12-01

    Taiwan is located at the boundary of the Eurasian Plate and the Philippine Sea Plate. Plate movement causes crustal uplift and lateral deformation, leading to frequent earthquakes in the vicinity of Taiwan. Changes in groundwater level triggered by earthquakes have been observed and studied in Taiwan for many years. The changes may appear as oscillations or step changes: the former are caused by seismic waves, while the latter are caused by volumetric strain and reflect the strain status. Since setting up a groundwater monitoring well is easier and cheaper than setting up a strain gauge, groundwater measurements may be used as an indication of stress. This research proposes the concept of seismic sensitivity of a groundwater monitoring well and applies it to the DonHer station in Taiwan. A geostatistical method is used to analyze the anisotropy of the seismic sensitivity, and GIS is used to map the sensitive area of the existing groundwater monitoring well.

  15. SENSITIVITY OF BLIND PULSAR SEARCHES WITH THE FERMI LARGE AREA TELESCOPE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dormody, M.; Johnson, R. P.; Atwood, W. B.

    2011-12-01

    We quantitatively establish the sensitivity to the detection of young to middle-aged, isolated, gamma-ray pulsars through blind searches of Fermi Large Area Telescope (LAT) data using a Monte Carlo simulation. We detail a sensitivity study of the time-differencing blind search code used to discover gamma-ray pulsars in the first year of observations. We simulate 10,000 pulsars across a broad parameter space and distribute them across the sky. We replicate the analysis in the Fermi LAT First Source Catalog to localize the sources, and the blind search analysis to find the pulsars. We analyze the results and discuss the effect of positional error and spin frequency on gamma-ray pulsar detections. Finally, we construct a formula to determine the sensitivity of the blind search and present a sensitivity map assuming a standard set of pulsar parameters. The results of this study can be applied to population studies and are useful in characterizing unidentified LAT sources.

  16. Time-Distance Analysis of Deep Solar Convection

    NASA Technical Reports Server (NTRS)

    Duvall, T. L., Jr.; Hanasoge, S. M.

    2011-01-01

    Recently it was shown by Hanasoge, Duvall, and DeRosa (2010) that the upper limit to convective flows for spherical harmonic degrees l

  17. Negative electrospray ionization on porous supporting tips for mass spectrometric analysis: electrostatic charging effect on detection sensitivity and its application to explosive detection.

    PubMed

    Wong, Melody Yee-Man; Man, Sin-Heng; Che, Chi-Ming; Lau, Kai-Chung; Ng, Kwan-Ming

    2014-03-21

    Owing to its simplicity and easy manipulation, the porous substrate-based ESI-MS technique has been widely applied to the direct analysis of different types of samples in positive ion mode. However, studies and applications of this technique in negative ion mode are sparse. A key challenge could be the ease of electrical discharge on supporting tips upon the application of negative voltage. The aim of this study is to investigate the effect of supporting materials, including polyester, polyethylene and wood, on the detection sensitivity of a porous substrate-based negative ESI-MS technique. Using nitrobenzene derivatives and nitrophenol derivatives as the target analytes, it was found that the hydrophobic materials (i.e., polyethylene and polyester), with a higher tendency to accumulate negative charge, could enhance the detection sensitivity towards nitrobenzene derivatives via electron-capture ionization, whereas compounds with electron affinities lower than the cut-off value (1.13 eV) were not detected. Nitrophenol derivatives with pKa smaller than 9.0 could be detected in the form of deprotonated ions, whereas polar materials (i.e., wood), which might undergo competitive deprotonation with the analytes, could suppress the detection sensitivity. With the investigation of the material effects on the detection sensitivity, the porous substrate-based negative ESI-MS method was developed and applied to the direct detection of two commonly encountered explosives in complex samples.

  18. MEMS-based Force-clamp Analysis of the Role of Body Stiffness in C. elegans Touch Sensation

    PubMed Central

    Petzold, Bryan C.; Park, Sung-Jin; Mazzochette, Eileen A.; Goodman, Miriam B.; Pruitt, Beth L.

    2013-01-01

    Touch is enabled by mechanoreceptor neurons in the skin and plays an essential role in our everyday lives, but is among the least understood of our five basic senses. Force applied to the skin deforms these neurons and activates ion channels within them. Despite the importance of the mechanics of the skin in determining mechanoreceptor neuron deformation and ultimately touch sensation, the role of mechanics in touch sensitivity is poorly understood. Here, we use the model organism Caenorhabditis elegans to directly test the hypothesis that body mechanics modulate touch sensitivity. We demonstrate a microelectromechanical system (MEMS)-based force clamp that can apply calibrated forces to freely crawling C. elegans worms and measure touch-evoked avoidance responses. This approach reveals that wild-type animals sense forces < 1 μN and indentation depths < 1 μm. We use both genetic manipulation of the skin and optogenetic modulation of body wall muscles to alter body mechanics. We find that small changes in body stiffness dramatically affect force sensitivity, while having only modest effects on indentation sensitivity. We investigate the theoretical body deformation predicted under applied force and conclude that local mechanical loads induce inward bending deformation of the skin to drive touch sensation in C. elegans. PMID:23598612

  19. A framework for sensitivity analysis of decision trees.

    PubMed

    Kamiński, Bogumił; Jakubczyk, Michał; Szufel, Przemysław

    2018-01-01

    In the paper, we consider sequential decision problems with uncertainty, represented as decision trees. Sensitivity analysis is always a crucial element of decision making and in decision trees it often focuses on probabilities. In the stochastic model considered, the user often has only limited information about the true values of probabilities. We develop a framework for performing sensitivity analysis of optimal strategies accounting for this distributional uncertainty. We design this robust optimization approach in an intuitive and not overly technical way, to make it simple to apply in daily managerial practice. The proposed framework allows for (1) analysis of the stability of the expected-value-maximizing strategy and (2) identification of strategies which are robust with respect to pessimistic/optimistic/mode-favoring perturbations of probabilities. We verify the properties of our approach in two cases: (a) probabilities in a tree are the primitives of the model and can be modified independently; (b) probabilities in a tree reflect some underlying, structural probabilities, and are interrelated. We provide a free software tool implementing the methods described.
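
    The flavor of the proposed robustness check can be illustrated on a two-strategy toy tree: perturb the outcome probabilities around the baseline and count how often the expected-value-maximizing strategy flips. The payoffs and probabilities below are invented, and the clip-and-renormalize perturbation is only one of many reasonable schemes:

        import numpy as np

        payoffs = {"A": np.array([100.0, 40.0, -20.0]),   # payoff per chance outcome
                   "B": np.array([60.0, 55.0, 10.0])}
        p0 = np.array([0.5, 0.3, 0.2])                    # baseline probabilities

        best = lambda p: max(payoffs, key=lambda s: payoffs[s] @ p)

        rng = np.random.default_rng(0)
        for eps in (0.02, 0.05, 0.10):
            flips = 0
            for _ in range(2000):
                p = np.clip(p0 + rng.uniform(-eps, eps, 3), 1e-9, None)
                p /= p.sum()                              # back onto the simplex
                flips += best(p) != best(p0)
            print(f"eps = {eps:.2f}: strategy flips in {flips/2000:.1%} of draws")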

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Haitao, E-mail: liaoht@cae.ac.cn

    The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time averaged quantities for chaotic dynamical systems. The key idea is to recast the time averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation which forms the augmented equations of motion is proposed to calculate the time averaged integration variable, and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which is dependent on the final state of the Lagrange multipliers. The LU factorization technique used to calculate the Lagrange multipliers leads to better performance in terms of both convergence and computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed when using the direct differentiation sensitivity analysis method.

  1. A sensitivity analysis method for the body segment inertial parameters based on ground reaction and joint moment regressor matrices.

    PubMed

    Futamure, Sumire; Bonnet, Vincent; Dumas, Raphael; Venture, Gentiane

    2017-11-07

    This paper presents a method allowing a simple and efficient sensitivity analysis of the dynamic parameters of a complex whole-body human model. The proposed method is based on the ground reaction and joint moment regressor matrices, developed initially in robotics system identification theory and involved in the equations of motion of the human body. The regressor matrices are linear with respect to the segment inertial parameters, allowing the use of simple sensitivity analysis methods. The sensitivity analysis method was applied to gait dynamics and kinematics data from nine subjects, using a 15-segment 3D model of the locomotor apparatus. According to the proposed sensitivity indices, 76 of the 150 segment inertial parameters of the mechanical model were considered not influential for gait. The main findings were that the segment masses were influential and that, with the exception of the trunk, the moments of inertia were not influential for the computation of the ground reaction forces and moments and the joint moments. The same method also shows numerically that at least 90% of the lower-limb joint moments during the stance phase can be estimated from force-plate and kinematics data alone, without knowing any of the segment inertial parameters. Copyright © 2017 Elsevier Ltd. All rights reserved.
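
    Because the equations of motion are linear in the inertial parameters (tau = Y(q, qd, qdd) phi), a simple sensitivity index is the share of the output norm carried by each parameter's regressor column. The sketch below uses a random matrix as a stand-in for a real stacked gait regressor, so only the mechanics of such an index are illustrated, not the paper's exact indices:

        import numpy as np

        rng = np.random.default_rng(0)
        n_samples, n_params = 500, 10                 # stand-ins for trial data
        Y = rng.normal(size=(n_samples, n_params))    # stacked regressor matrix
        phi = rng.uniform(0.1, 5.0, n_params)         # segment inertial parameters

        tau = Y @ phi                                 # ground reactions / moments
        s = np.array([np.linalg.norm(Y[:, i] * phi[i]) for i in range(n_params)])
        s /= np.linalg.norm(tau)                      # relative sensitivity index
        print("relative sensitivity indices:", s.round(2))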

  2. Margin and sensitivity methods for security analysis of electric power systems

    NASA Astrophysics Data System (ADS)

    Greene, Scott L.

    Reliable operation of large scale electric power networks requires that system voltages and currents stay within design limits. Operation beyond those limits can lead to equipment failures and blackouts. Security margins measure the amount by which system loads or power transfers can change before a security violation, such as an overloaded transmission line, is encountered. This thesis shows how to efficiently compute security margins defined by limiting events and instabilities, and the sensitivity of those margins with respect to assumptions, system parameters, operating policy, and transactions. Security margins to voltage collapse blackouts, oscillatory instability, generator limits, voltage constraints and line overloads are considered. The usefulness of computing the sensitivities of these margins with respect to interarea transfers, loading parameters, generator dispatch, transmission line parameters, and VAR support is established for networks as large as 1500 buses. The sensitivity formulas presented apply to a range of power system models. Conventional sensitivity formulas such as line distribution factors, outage distribution factors, participation factors and penalty factors are shown to be special cases of the general sensitivity formulas derived in this thesis. The sensitivity formulas readily accommodate sparse matrix techniques. Margin sensitivity methods are shown to work effectively for avoiding voltage collapse blackouts caused by either saddle node bifurcation of equilibria or immediate instability due to generator reactive power limits. Extremely fast contingency analysis for voltage collapse can be implemented with margin sensitivity based rankings. Interarea transfer can be limited by voltage limits, line limits, or voltage stability. The sensitivity formulas presented in this thesis apply to security margins defined by any limit criteria. A method to compute transfer margins by directly locating intermediate events reduces the total number of loadflow iterations required by each margin computation and provides sensitivity information at minimal additional cost. Estimates of the effect of simultaneous transfers on the transfer margins agree well with the exact computations for a network model derived from a portion of the U.S. grid. The accuracy of the estimates over a useful range of conditions and the ease of obtaining the estimates suggest that the sensitivity computations will be of practical value.

  3. Examining the accuracy of the infinite order sudden approximation using sensitivity analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eno, L.; Rabitz, H.

    1981-08-15

    A method is developed for assessing the accuracy of scattering observables calculated within the framework of the infinite order sudden (IOS) approximation. In particular, we focus on the energy sudden assumption of the IOS method and our approach involves the determination of the sensitivity of the IOS scattering matrix S^IOS with respect to a parameter which reintroduces the internal energy operator h_0 into the IOS Hamiltonian. This procedure is an example of sensitivity analysis of missing model components (h_0 in this case) in the reference Hamiltonian. In contrast to simple first-order perturbation theory a finite result is obtained for the effect of h_0 on S^IOS. As an illustration, our method of analysis is applied to integral state-to-state cross sections for the scattering of an atom and rigid rotor. Results are generated within the He+H_2 system and a comparison is made between IOS and coupled states cross sections and the corresponding IOS sensitivities. It is found that the sensitivity coefficients are very useful indicators of the accuracy of the IOS results. Finally, further developments and applications are discussed.

  4. Case-Deletion Diagnostics for Maximum Likelihood Multipoint Quantitative Trait Locus Linkage Analysis

    PubMed Central

    Mendoza, Maria C.B.; Burns, Trudy L.; Jones, Michael P.

    2009-01-01

    Objectives Case-deletion diagnostic methods are tools that allow identification of influential observations that may affect parameter estimates and model fitting conclusions. The goal of this paper was to develop two case-deletion diagnostics, the exact case deletion (ECD) and the empirical influence function (EIF), for detecting outliers that can affect results of sib-pair maximum likelihood quantitative trait locus (QTL) linkage analysis. Methods Subroutines to compute the ECD and EIF were incorporated into the maximum likelihood QTL variance estimation components of the linkage analysis program MAPMAKER/SIBS. Performance of the diagnostics was compared in simulation studies that evaluated the proportion of outliers correctly identified (sensitivity), and the proportion of non-outliers correctly identified (specificity). Results Simulations involving nuclear family data sets with one outlier showed EIF sensitivities approximated ECD sensitivities well for outlier-affected parameters. Sensitivities were high, indicating the outlier was identified a high proportion of the time. Simulations also showed the enormous computational time advantage of the EIF. Diagnostics applied to body mass index in nuclear families detected observations influential on the lod score and model parameter estimates. Conclusions The EIF is a practical diagnostic tool that has the advantages of high sensitivity and quick computation. PMID:19172086

  5. Single-cell transcriptomics uncovers distinct molecular signatures of stem cells in chronic myeloid leukemia.

    PubMed

    Giustacchini, Alice; Thongjuea, Supat; Barkas, Nikolaos; Woll, Petter S; Povinelli, Benjamin J; Booth, Christopher A G; Sopp, Paul; Norfo, Ruggiero; Rodriguez-Meira, Alba; Ashley, Neil; Jamieson, Lauren; Vyas, Paresh; Anderson, Kristina; Segerstolpe, Åsa; Qian, Hong; Olsson-Strömberg, Ulla; Mustjoki, Satu; Sandberg, Rickard; Jacobsen, Sten Eirik W; Mead, Adam J

    2017-06-01

    Recent advances in single-cell transcriptomics are ideally placed to unravel intratumoral heterogeneity and selective resistance of cancer stem cell (SC) subpopulations to molecularly targeted cancer therapies. However, current single-cell RNA-sequencing approaches lack the sensitivity required to reliably detect somatic mutations. We developed a method that combines high-sensitivity mutation detection with whole-transcriptome analysis of the same single cell. We applied this technique to analyze more than 2,000 SCs from patients with chronic myeloid leukemia (CML) throughout the disease course, revealing heterogeneity of CML-SCs, including the identification of a subgroup of CML-SCs with a distinct molecular signature that selectively persisted during prolonged therapy. Analysis of nonleukemic SCs from patients with CML also provided new insights into cell-extrinsic disruption of hematopoiesis in CML associated with clinical outcome. Furthermore, we used this single-cell approach to identify a blast-crisis-specific SC population, which was also present in a subclone of CML-SCs during the chronic phase in a patient who subsequently developed blast crisis. This approach, which might be broadly applied to any malignancy, illustrates how single-cell analysis can identify subpopulations of therapy-resistant SCs that are not apparent through cell-population analysis.

  6. Development, sensitivity and uncertainty analysis of LASH model

    USDA-ARS?s Scientific Manuscript database

    Many hydrologic models have been developed to help manage natural resources all over the world. Nevertheless, most models present high complexity regarding database requirements, as well as many calibration parameters. This has brought serious difficulties for applying them in watersheds ...

  7. Sensitivity analysis of infectious disease models: methods, advances and their application

    PubMed Central

    Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V.

    2013-01-01

    Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, yet infectious disease modelling has yet to adopt advanced SA techniques that are capable of providing considerable insights over traditional methods. We investigate five global SA methods—scatter plots, the Morris and Sobol’ methods, Latin hypercube sampling-partial rank correlation coefficient and the sensitivity heat map method—and detail their relative merits and pitfalls when applied to a microparasite (cholera) and a macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that vary by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
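
    Of the surveyed methods, LHS-PRCC is the quickest to hand-roll: draw a Latin hypercube design, rank-transform inputs and output, and compute partial correlations by residualizing each input against the others. The three-parameter toy model below is an invented stand-in for a transmission-model output:

        import numpy as np
        from scipy.stats import qmc, rankdata

        def model(X):      # hypothetical stand-in for, e.g., an epidemic's R0
            return X[:, 0] * X[:, 1] / (X[:, 2] + 0.5)

        n, d = 500, 3
        X = qmc.scale(qmc.LatinHypercube(d=d, seed=0).random(n), [0.1]*d, [2.0]*d)
        ry = rankdata(model(X))
        R = np.column_stack([rankdata(X[:, j]) for j in range(d)])

        for i in range(d):     # PRCC: correlate residuals after removing others
            Z = np.column_stack([np.ones(n), np.delete(R, i, axis=1)])
            rx_res = R[:, i] - Z @ np.linalg.lstsq(Z, R[:, i], rcond=None)[0]
            ry_res = ry - Z @ np.linalg.lstsq(Z, ry, rcond=None)[0]
            print(f"PRCC for x{i+1}: {np.corrcoef(rx_res, ry_res)[0, 1]:+.2f}")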

  8. Pseudotargeted MS Method for the Sensitive Analysis of Protein Phosphorylation in Protein Complexes.

    PubMed

    Lyu, Jiawen; Wang, Yan; Mao, Jiawei; Yao, Yating; Wang, Shujuan; Zheng, Yong; Ye, Mingliang

    2018-05-15

    In this study, we presented an enrichment-free approach for the sensitive analysis of protein phosphorylation in minute amounts of samples, such as purified protein complexes. This method takes advantage of the high sensitivity of parallel reaction monitoring (PRM). Specifically, low confident phosphopeptides identified from the data-dependent acquisition (DDA) data set were used to build a pseudotargeted list for PRM analysis to allow the identification of additional phosphopeptides with high confidence. The development of this targeted approach is very easy as the same sample and the same LC-system were used for the discovery and the targeted analysis phases. No sample fractionation or enrichment was required for the discovery phase which allowed this method to analyze minute amount of sample. We applied this pseudotargeted MS method to quantitatively examine phosphopeptides in affinity purified endogenous Shc1 protein complexes at four temporal stages of EGF signaling and identified 82 phospho-sites. To our knowledge, this is the highest number of phospho-sites identified from the protein complexes. This pseudotargeted MS method is highly sensitive in the identification of low abundance phosphopeptides and could be a powerful tool to study phosphorylation-regulated assembly of protein complex.

  9. Quantile regression in the presence of monotone missingness with sensitivity analysis

    PubMed Central

    Liu, Minzhao; Daniels, Michael J.; Perri, Michael G.

    2016-01-01

    In this paper, we develop methods for longitudinal quantile regression when there is monotone missingness. In particular, we propose pattern mixture models with a constraint that provides a straightforward interpretation of the marginal quantile regression parameters. Our approach allows sensitivity analysis, which is an essential component of inference for incomplete data. To facilitate computation of the likelihood, we propose a novel way to obtain analytic forms for the required integrals. We conduct simulations to examine the robustness of our approach to modeling assumptions and compare its performance to competing approaches. The model is applied to data from a recent clinical trial on weight management. PMID:26041008

  10. Comparison between two methodologies for urban drainage decision aid.

    PubMed

    Moura, P M; Baptista, M B; Barraud, S

    2006-01-01

    The objective of the present work is to compare two methodologies based on multicriteria analysis for the evaluation of stormwater systems. The first methodology was developed in Brazil and is based on performance-cost analysis, the second one is ELECTRE III. Both methodologies were applied to a case study. Sensitivity and robustness analyses were then carried out. These analyses demonstrate that both methodologies have equivalent results, and present low sensitivity and high robustness. These results prove that the Brazilian methodology is consistent and can be used safely in order to select a good solution or a small set of good solutions that could be compared with more detailed methods afterwards.

  11. Reconciling uncertain costs and benefits in Bayes nets for invasive species management

    USGS Publications Warehouse

    Burgman, M.A.; Wintle, B.A.; Thompson, C.A.; Moilanen, A.; Runge, M.C.; Ben-Haim, Y.

    2010-01-01

    Bayes nets are used increasingly to characterize environmental systems and formalize probabilistic reasoning to support decision making. These networks treat probabilities as exact quantities. Sensitivity analysis can be used to evaluate the importance of assumptions and parameter estimates. Here, we outline an application of info-gap theory to Bayes nets that evaluates the sensitivity of decisions to possibly large errors in the underlying probability estimates and utilities. We apply it to an example of management and eradication of Red Imported Fire Ants in Southern Queensland, Australia and show how changes in management decisions can be justified when uncertainty is considered. © 2009 Society for Risk Analysis.

  12. Single photon detection and signal analysis for high sensitivity dosimetry based on optically stimulated luminescence with beryllium oxide

    NASA Astrophysics Data System (ADS)

    Radtke, J.; Sponner, J.; Jakobi, C.; Schneider, J.; Sommer, M.; Teichmann, T.; Ullrich, W.; Henniger, J.; Kormoll, T.

    2018-01-01

    Single photon detection applied to optically stimulated luminescence (OSL) dosimetry is a promising approach, owing to the low level of luminescence light and the known statistical behavior of single photon events. Time-resolved detection allows a variety of different and independent data analysis methods to be applied. Furthermore, amplitude-modulated stimulation impresses time and frequency information onto the OSL light and therefore allows additional means of analysis. Given this impressed frequency information, Fourier transform algorithms or other digital filters can be used to separate the OSL signal from unwanted light or from events generated by other phenomena. This potentially lowers the detection limits of low dose measurements and might improve the reproducibility and stability of the obtained data. In this work, an OSL system based on a single photon detector, a fast and accurate stimulation unit and an FPGA is presented. Different analysis algorithms applied to the single photon data are discussed.

  13. miR-25 modulates NSCLC cell radio-sensitivity through directly inhibiting BTG2 expression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Zhiwei, E-mail: carlhe@126.com; Liu, Yi, E-mail: cassieliu@126.com; Xiao, Bing, E-mail: rockg714@aliyun.com

    2015-02-13

    A large proportion of NSCLC patients are insensitive to radiotherapy, but the exact mechanism is still unclear. This study explored the role of miR-25 in regulating the sensitivity of NSCLC cells to ionizing radiation (IR) and its downstream targets. Based on measurements in tumor samples from NSCLC patients, this study found that miR-25 expression is upregulated in both NSCLC and radio-resistant NSCLC patients compared with healthy and radio-sensitive controls. In addition, BTG2 expression was found to be negatively correlated with miR-25 expression in both tissues and cells. Applying a luciferase reporter assay, we verified two putative binding sites between miR-25 and BTG2. Therefore, BTG2 is a direct target of miR-25 in NSCLC cancer. Applying loss- and gain-of-function analysis in NSCLC cell lines, we demonstrated that the miR-25-BTG2 axis directly regulates BTG2 expression and affects the radiotherapy sensitivity of NSCLC cells. - Highlights: • miR-25 is upregulated, while BTG2 is downregulated, in radioresistant NSCLC patients. • miR-25 modulates sensitivity to radiation-induced apoptosis. • miR-25 directly targets BTG2 and suppresses its expression. • miR-25 modulates sensitivity to radiotherapy through inhibiting BTG2 expression.

  14. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE PAGES

    Dai, Heng; Ye, Ming; Walker, Anthony P.; ...

    2017-03-28

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
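
    As a rough illustration of the idea, the sketch below estimates a first-order process sensitivity index by nested Monte Carlo for a toy system with two alternative models per process; the model forms, parameter ranges and sample sizes are invented for demonstration and do not reproduce the authors' groundwater study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Two alternative models per process, each with its own random
    # parameter (all functional forms and constants are hypothetical).
    recharge_models = [lambda p, t=t: t * p for t in (0.2, 0.4)]
    geology_models = [lambda k, s=s: s + k for s in (1.0, 2.0)]

    def sample_process(models, rng):
        m = rng.integers(len(models))      # random model choice
        theta = rng.uniform(0.5, 1.5)      # random parameter
        return m, theta

    def output(rm, rp, gm, gp, precip=100.0):
        recharge = recharge_models[rm](rp) * precip
        conductivity = geology_models[gm](gp)
        return recharge / conductivity     # toy head-like response

    # First-order process index: variance over the recharge process
    # (model choice AND parameter) of the mean over the geology process,
    # normalized by the total output variance.
    def process_index(n_outer=500, n_inner=100):
        cond_means, all_y = [], []
        for _ in range(n_outer):
            rm, rp = sample_process(recharge_models, rng)
            ys = [output(rm, rp, *sample_process(geology_models, rng))
                  for _ in range(n_inner)]
            cond_means.append(np.mean(ys))
            all_y.extend(ys)
        return np.var(cond_means) / np.var(all_y)

    print(f"PS(recharge) ~= {process_index():.2f}")
    ```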

  16. Recent advances in chemiluminescence detection coupled with capillary electrophoresis and microchip capillary electrophoresis.

    PubMed

    Liu, Yuxuan; Huang, Xiangyi; Ren, Jicun

    2016-01-01

    CE is an ideal analytical method for extremely volume-limited biological microenvironments. However, the small injection volume makes highly sensitive detection a challenge. Chemiluminescence (CL) detection provides low background with excellent sensitivity because no light source is required. The coupling of CL with CE and MCE has become a powerful analytical method. So far, this method has been widely applied to chemical analysis, bioassays, drug analysis, and environmental analysis. In this review, we first introduce some developments of CE-CL and MCE-CL systems, then place the emphasis on applications from the last 10 years. Finally, we discuss future prospects. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. High-resolution melting (HRM) re-analysis of a polyposis patient cohort reveals previously undetected heterozygous and mosaic APC gene mutations.

    PubMed

    Out, Astrid A; van Minderhout, Ivonne J H M; van der Stoep, Nienke; van Bommel, Lysette S R; Kluijt, Irma; Aalfs, Cora; Voorendt, Marsha; Vossen, Rolf H A M; Nielsen, Maartje; Vasen, Hans F A; Morreau, Hans; Devilee, Peter; Tops, Carli M J; Hes, Frederik J

    2015-06-01

    Familial adenomatous polyposis is most frequently caused by pathogenic variants in either the APC gene or the MUTYH gene. The detection rate of pathogenic variants depends on the severity of the phenotype and the sensitivity of the screening method, including its sensitivity for mosaic variants. For 171 patients with multiple colorectal polyps and no previously detectable pathogenic variant, APC was reanalyzed in leukocyte DNA by one uniform technique: high-resolution melting (HRM) analysis. Serial dilution of heterozygous DNA resulted in a lowest detectable allelic fraction of 6% for the majority of variants. HRM analysis and subsequent sequencing detected fully heterozygous pathogenic APC variants in 10 (6%) of the patients and pathogenic mosaic variants in 2 (1%). All these variants had previously been missed by various conventional scanning methods. In parallel, HRM APC scanning was applied to DNA isolated from polyp tissue of two additional patients with apparently sporadic polyposis and no detectable pathogenic APC variant in leukocyte DNA. In both patients a pathogenic mosaic APC variant was present in multiple polyps. The detection of pathogenic APC variants in 7% of the patients, including mosaics, illustrates the usefulness of a complete APC gene reanalysis of previously tested patients by a supplementary scanning method. HRM is a sensitive and fast pre-screening method for the reliable detection of heterozygous and mosaic variants, and it can be applied to leukocyte- and polyp-derived DNA.

  18. Methodology for Sensitivity Analysis, Approximate Analysis, and Design Optimization in CFD for Multidisciplinary Applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1996-01-01

    An incremental iterative formulation, together with the well-known spatially split approximate-factorization algorithm, is presented for solving the large, sparse systems of linear equations associated with aerodynamic sensitivity analysis. This formulation is also known as the 'delta' or 'correction' form. For smaller two-dimensional problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. However, iterative methods are needed for larger two-dimensional and three-dimensional applications because direct methods require more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioned coefficient matrix; this problem is overcome when the equations are cast in the incremental form. The methodology is successfully implemented and tested using an upwind cell-centered finite-volume formulation applied in two dimensions to the thin-layer Navier-Stokes equations for external flow over an airfoil. In three dimensions this methodology is demonstrated with a marching-solution algorithm for the Euler equations to calculate supersonic flow over the High-Speed Civil Transport configuration (HSCT 24E). The sensitivity derivatives obtained with the incremental iterative method from a marching Euler code are used in a design-improvement study of the HSCT configuration that involves thickness, camber, and planform design variables.

  19. A GIS based spatially-explicit sensitivity and uncertainty analysis approach for multi-criteria decision analysis.

    PubMed

    Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas

    2014-03-01

    GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster-Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA) implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility is analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparison of the obtained landslide susceptibility maps of both MCDA techniques with known landslides shows that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty-sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights.

  1. A novel bi-level meta-analysis approach: applied to biological pathway analysis.

    PubMed

    Nguyen, Tin; Tagett, Rebecca; Donato, Michele; Mitrea, Cristina; Draghici, Sorin

    2016-02-01

    The accumulation of high-throughput data in public repositories creates a pressing need for integrative analysis of multiple datasets from independent experiments. However, study heterogeneity, study bias, outliers and the lack of power of available methods present real challenges in integrating genomic data. One practical drawback of many P-value-based meta-analysis methods, including Fisher's, Stouffer's, minP and maxP, is that they are sensitive to outliers. Another drawback is that, because they perform just one statistical test for each individual experiment, they may not fully exploit the potentially large number of samples within each study. We propose a novel bi-level meta-analysis approach that employs the additive method and the Central Limit Theorem within each individual experiment and also across multiple experiments. We prove that the bi-level framework is robust against bias, less sensitive to outliers than other methods, and more sensitive to small changes in signal. For comparative analysis, we demonstrate that the intra-experiment analysis has more power than the equivalent statistical test performed on a single large experiment. For pathway analysis, we compare the proposed framework with classical meta-analysis approaches (Fisher's, Stouffer's and the additive method) as well as with a dedicated pathway meta-analysis package (MetaPath), using 1252 samples from 21 datasets related to three human diseases: acute myeloid leukemia (9 datasets), type II diabetes (5 datasets) and Alzheimer's disease (7 datasets). Our framework outperforms its competitors in correctly identifying pathways relevant to the phenotypes. The framework is sufficiently general to be applied to any type of statistical meta-analysis. The R scripts are available on demand from the authors. sorin@wayne.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  2. Analysis of Information Content in High-Spectral Resolution Sounders using Subset Selection Analysis

    NASA Technical Reports Server (NTRS)

    Velez-Reyes, Miguel; Joiner, Joanna

    1998-01-01

    In this paper, we summarize the results of the sensitivity analysis and data reduction carried out to determine the information content of AIRS and IASI channels. The analysis and data reduction were based on subset selection techniques developed in the linear algebra and statistics communities to study linear dependencies in high-dimensional data sets. We applied the subset selection method to study dependency among channels by studying the dependency among their weighting functions. We also applied the technique to study the information provided by the different levels at which the atmosphere is discretized for retrievals and analysis. Results from the method correlate well with intuition in many respects and point to possible modifications for band selection in sensor design and for the number and location of levels in the analysis process.

  3. Clinical usefulness of the clock drawing test applying Rasch analysis in predicting cognitive impairment.

    PubMed

    Yoo, Doo Han; Lee, Jae Shin

    2016-07-01

    [Purpose] This study examined the clinical usefulness of the clock drawing test applying Rasch analysis for predicting the level of cognitive impairment. [Subjects and Methods] A total of 187 stroke patients with cognitive impairment were enrolled in this study. The 187 patients were evaluated with the clock drawing test developed through Rasch analysis along with the mini-mental state examination as the cognitive evaluation tool. An analysis of variance was performed to examine the significance of the mini-mental state examination and the clock drawing test according to the general characteristics of the subjects. Receiver operating characteristic analysis was performed to determine the cutoff point for cognitive impairment and to calculate the sensitivity and specificity values. [Results] Comparison of the clock drawing test with the mini-mental state examination showed significant differences according to gender, age, education, and affected side. A total clock drawing test score of 10.5, which was selected as the cutoff point to identify cognitive impairment, showed sensitivity, specificity, Youden index, positive predictive, and negative predictive values of 86.4%, 91.5%, 0.8, 95%, and 88.2%, respectively. [Conclusion] The clock drawing test is believed to be useful in assessments and interventions based on its excellent ability to identify cognitive disorders.

  4. A spectral power analysis of driving behavior changes during the transition from nondistraction to distraction.

    PubMed

    Wang, Yuan; Bao, Shan; Du, Wenjun; Ye, Zhirui; Sayer, James R

    2017-11-17

    This article investigated and compared frequency domain and time domain characteristics of drivers' behavior before and after the start of distracted driving. Data from an existing naturalistic driving study were used. The fast Fourier transform (FFT) was applied for the frequency domain analysis to explore changes in drivers' behavior patterns between nondistracted (before the start of a visual-manual task) and distracted (after the start of a visual-manual task) driving periods. Average relative spectral power in a low frequency range (0-0.5 Hz) and the standard deviation in a 10-s time window of vehicle control variables (i.e., lane offset, yaw rate, and acceleration) were calculated and compared. Sensitivity analyses were also applied to examine the reliability of the time and frequency domain analyses. Results of the mixed model analyses from both domains showed significant degradation in lateral control performance after engaging in visual-manual tasks while driving. Results of the sensitivity analyses suggested that the frequency domain analysis was less sensitive to the frequency bandwidth, whereas the time domain analysis was more sensitive to the time intervals selected for variation calculations. Different time interval selections can result in significantly different standard deviation values, whereas average spectral power analysis of yaw rate in both low and high frequency bandwidths showed consistent results: higher variation values were observed during distracted driving than during nondistracted driving. This study suggests that driver state detection needs to consider behavior changes during the prestarting periods, instead of only focusing on periods with the physical presence of distraction, such as cell phone use. Lateral control measures can be a better indicator for distraction detection than longitudinal controls. In addition, frequency domain analyses proved to be a more robust and consistent method for assessing driving performance than time domain analyses.
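
    The two summary statistics contrasted above are simple to compute. The sketch below, using a synthetic yaw-rate trace with an assumed 10 Hz sampling rate, computes the relative spectral power below 0.5 Hz alongside per-window standard deviations; all signal parameters are invented for illustration.

    ```python
    import numpy as np

    fs = 10.0                      # assumed sampling rate, Hz
    t = np.arange(0, 60, 1 / fs)
    rng = np.random.default_rng(3)
    # Synthetic yaw-rate trace: slow lane drift plus broadband noise.
    yaw = 0.3 * np.sin(2 * np.pi * 0.2 * t) + 0.05 * rng.standard_normal(t.size)

    # Frequency domain: share of spectral power below 0.5 Hz.
    spec = np.abs(np.fft.rfft(yaw - yaw.mean())) ** 2
    freqs = np.fft.rfftfreq(yaw.size, d=1 / fs)
    low_band_share = spec[freqs <= 0.5].sum() / spec.sum()

    # Time domain: standard deviation in consecutive 10-s windows.
    win = int(10 * fs)
    sds = [yaw[i:i + win].std() for i in range(0, yaw.size - win + 1, win)]

    print(f"relative power in 0-0.5 Hz: {low_band_share:.2f}")
    print("per-window SDs:", np.round(sds, 3))
    ```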

  5. Sensitivity analysis of consumption cycles

    NASA Astrophysics Data System (ADS)

    Jungeilges, Jochen; Ryazanova, Tatyana; Mitrofanova, Anastasia; Popova, Irina

    2018-05-01

    We study the special case of a nonlinear stochastic consumption model taking the form of a 2-dimensional, non-invertible map with an additive stochastic component. Applying the concept of the stochastic sensitivity function and the related technique of confidence domains, we establish the conditions under which the system's complex consumption attractor is likely to become observable. It is shown that the level of noise intensities beyond which the complex consumption attractor is likely to be observed depends on the weight given to past consumption in an individual's preference adjustment.

  6. Genomic Methods for Clinical and Translational Pain Research

    PubMed Central

    Wang, Dan; Kim, Hyungsuk; Wang, Xiao-Min; Dionne, Raymond

    2012-01-01

    Pain is a complex sensory experience for which the molecular mechanisms are yet to be fully elucidated. Individual differences in pain sensitivity are mediated by a complex network of multiple gene polymorphisms, physiological and psychological processes, and environmental factors. Here, we present the methods for applying unbiased molecular-genetic approaches, genome-wide association study (GWAS), and global gene expression analysis, to help better understand the molecular basis of pain sensitivity in humans and variable responses to analgesic drugs. PMID:22351080

  7. Dynamic sensitivity analysis of biological systems

    PubMed Central

    Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang

    2008-01-01

    Background A mathematical model to understand, predict, control, or even design a real biological system is a central theme in systems biology. A dynamic biological system is always modeled as a nonlinear ordinary differential equation (ODE) system. How to simulate the dynamic behavior and dynamic parameter sensitivities of systems described by ODEs efficiently and accurately is a critical task. In many practical applications, e.g., fed-batch fermentation systems, the admissible system input (corresponding to independent variables of the system) can be time-dependent. The main difficulty in investigating the dynamic log gains of these systems is the infinite dimension caused by the time-dependent input. Classical dynamic sensitivity analysis does not take this case into account for the dynamic log gains. Results We present an algorithm with adaptive step size control that can be used for computing the solution and dynamic sensitivities of an autonomous ODE system simultaneously. Although our algorithm is one of the decoupled direct methods for computing dynamic sensitivities of an ODE system, the step size determined by the model equations can be used to compute the time profile and dynamic sensitivities with moderate accuracy even when the sensitivity equations are stiffer than the model equations. To show that this algorithm can perform dynamic sensitivity analysis on very stiff ODE systems with moderate accuracy, it is implemented and applied to two sets of chemical reactions: pyrolysis of ethane and oxidation of formaldehyde. The accuracy of the algorithm is demonstrated by comparing the dynamic parameter sensitivities obtained from this new algorithm and from the direct method with a Rosenbrock stiff integrator based on the indirect method. The same dynamic sensitivity analysis was performed on an ethanol fed-batch fermentation system with a time-varying feed rate to evaluate the applicability of the algorithm to realistic models with time-dependent admissible input. Conclusion By combining the accuracy we show with the efficiency of being a decoupled direct method, our algorithm is an excellent method for computing dynamic parameter sensitivities in stiff problems. We extend the scope of classical dynamic sensitivity analysis to the investigation of dynamic log gains of models with time-dependent admissible input. PMID:19091016
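
    The forward (direct) approach to dynamic parameter sensitivities can be illustrated on a one-parameter ODE by integrating the state and its sensitivity together. The snippet below is a minimal sketch of that idea on a toy decay model, not the authors' adaptive decoupled algorithm.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Model: dy/dt = -k*y. The sensitivity s = dy/dk satisfies the
    # forward sensitivity equation ds/dt = -y - k*s.
    def augmented(t, z, k):
        y, s = z
        return [-k * y, -y - k * s]

    k = 0.7
    sol = solve_ivp(augmented, (0.0, 5.0), [1.0, 0.0], args=(k,),
                    rtol=1e-8, atol=1e-10)
    y5, s5 = sol.y[:, -1]

    # Analytic check: y = exp(-k t), so dy/dk = -t exp(-k t).
    print(f"s(5) numeric  = {s5:.6f}")
    print(f"s(5) analytic = {-5 * np.exp(-k * 5):.6f}")
    ```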

  8. Exhaled breath condensate – from an analytical point of view

    PubMed Central

    Dodig, Slavica; Čepelak, Ivana

    2013-01-01

    Over the past three decades, the goal of many researchers has been the analysis of exhaled breath condensate (EBC) as a noninvasively obtained sample. Total quality in the laboratory diagnostic process of EBC analysis was investigated across the pre-analytical (formation, collection, storage of EBC), analytical (sensitivity of applied methods, standardization) and post-analytical (interpretation of results) phases. EBC analysis is still used as a research tool. Limitations in the pre-analytical, analytical, and post-analytical phases of EBC analysis are numerous: for example, concentrations of EBC constituents are low, single-analyte methods lack sensitivity, multi-analyte methods have not been fully explored, and reference values are not established. When all pre-analytical, analytical and post-analytical requirements are met, EBC biomarkers as well as biomarker patterns can be selected, and EBC analysis can hopefully be used in clinical practice, in both the diagnosis and the longitudinal follow-up of patients, resulting in better disease outcomes. PMID:24266297

  9. Influence of enrichment broths on multiplex PCR detection of total coliform bacteria, Escherichia coli and Clostridium perfringens, in spiked water samples.

    PubMed

    Worakhunpiset, S; Tharnpoophasiam, P

    2009-07-01

    Although multiplex PCR amplification conditions for the simultaneous detection of total coliform bacteria, Escherichia coli and Clostridium perfringens in water samples have been developed, high sensitivity is obtained when amplifying purified DNA, whereas sensitivity is low when the assay is applied to spiked water samples. Enrichment broth culture prior to PCR analysis increases the sensitivity of the test, but the specific nature of the enrichment broth can affect the PCR results. Three enrichment broths (lactose broth, reinforced clostridial medium and fluid thioglycollate broth) were compared for their influence on the sensitivity of, and the time required for, the multiplex PCR assay. Fluid thioglycollate broth was the most effective, with the shortest enrichment time and the lowest detection limit.

  10. Application of positron annihilation lineshape analysis to fatigue damage and thermal embrittlement for nuclear plant materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uchida, M.; Ohta, Y.; Nakamura, N.

    1995-08-01

    Positron annihilation (PA) lineshape analysis is sensitive to microstructural defects such as vacancies and dislocations. The authors are developing a portable system and applying this technique to nuclear power plant material evaluations: fatigue damage in type 316 stainless steel and SA508 low alloy steel, and thermal embrittlement in duplex stainless steel. The PA technique was found to be sensitive in early fatigue life (up to 10%), but showed little sensitivity in the later stages of fatigue life in both type 316 stainless steel and SA508 ferritic steel. Type 316 steel showed a higher PA sensitivity than SA508, since the initial SA508 microstructure already contained a high dislocation density in the as-received state. The PA parameter increased with aging time in CF8M samples aged at 350 °C and 400 °C, but did not change much in CF8 samples.

  11. Sensitivity analysis of reactive ecological dynamics.

    PubMed

    Verdy, Ariane; Caswell, Hal

    2008-08-01

    Ecological systems with asymptotically stable equilibria may exhibit significant transient dynamics following perturbations. In some cases, these transient dynamics include the possibility of excursions away from the equilibrium before the eventual return; systems that exhibit such amplification of perturbations are called reactive. Reactivity is a common property of ecological systems, and the amplification can be large and long-lasting. The transient response of a reactive ecosystem depends on the parameters of the underlying model. To investigate this dependence, we develop sensitivity analyses for indices of transient dynamics (reactivity, the amplification envelope, and the optimal perturbation) in both continuous- and discrete-time models written in matrix form. The sensitivity calculations require expressions, some of them new, for the derivatives of equilibria, eigenvalues, singular values, and singular vectors, obtained using matrix calculus. Sensitivity analysis provides a quantitative framework for investigating the mechanisms leading to transient growth. We apply the methodology to a predator-prey model and a size-structured food web model. The results suggest predator-driven and prey-driven mechanisms for transient amplification resulting from multispecies interactions.
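
    A common way to quantify the reactivity described above is the largest eigenvalue of the symmetric part of the community matrix: a positive value means some perturbations grow transiently even though the equilibrium is asymptotically stable. The sketch below demonstrates this on an invented 2x2 Jacobian, not one taken from the paper's models.

    ```python
    import numpy as np

    # Hypothetical community Jacobian at an equilibrium: stable
    # (all eigenvalues have negative real part) but reactive.
    A = np.array([[-1.0, 5.0],
                  [0.0, -2.0]])
    print("eigenvalues of A:", np.linalg.eigvals(A))

    # Reactivity: largest eigenvalue of the symmetric part (A + A.T)/2.
    H = (A + A.T) / 2
    reactivity = np.linalg.eigvalsh(H).max()
    print(f"reactivity = {reactivity:.3f}  (> 0: perturbations can grow)")
    ```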

  12. Parameterization of the InVEST Crop Pollination Model to spatially predict abundance of wild blueberry (Vaccinium angustifolium Aiton) native bee pollinators in Maine, USA

    USGS Publications Warehouse

    Groff, Shannon C.; Loftin, Cynthia S.; Drummond, Frank; Bushmann, Sara; McGill, Brian J.

    2016-01-01

    Non-native honeybees have historically been managed for crop pollination, but recent population declines draw attention to the pollination services provided by native bees. We applied the InVEST Crop Pollination model, developed to predict native bee abundance from habitat resources, in Maine's wild blueberry crop landscape. We evaluated model performance with parameters informed by four approaches: 1) expert opinion; 2) sensitivity analysis; 3) sensitivity-analysis-informed model optimization; and 4) simulated annealing (uninformed) model optimization. Uninformed optimization improved model performance by 29% compared to the expert-opinion-informed model, while sensitivity-analysis-informed optimization improved model performance by 54%. This suggests that expert opinion may not yield the best parameter values for the InVEST model. The proportion of deciduous/mixed forest within 2000 m of a blueberry field also reliably predicted native bee abundance in blueberry fields; nevertheless, the InVEST model provides an efficient tool to estimate bee abundance beyond the field perimeter.

  13. A sensitive continuum analysis method for gamma ray spectra

    NASA Technical Reports Server (NTRS)

    Thakur, Alakh N.; Arnold, James R.

    1993-01-01

    In this work we examine ways to improve the sensitivity of the analysis procedure for gamma ray spectra with respect to small differences in the continuum (Compton) spectra. The method developed is applied to analyze gamma ray spectra obtained from planetary mapping by the Mars Observer spacecraft, launched in September 1992. Calculated Mars simulation spectra and actual thick-target bombardment spectra have been taken as test cases. The principle of the method rests on the extraction of continuum information from Fourier transforms of the spectra. We study how a better estimate of the spectrum from larger regions of the Mars surface will improve the analysis for smaller regions with poorer statistics. Estimation of the signal within the continuum is done in the frequency domain, which enables efficient and sensitive discrimination of subtle differences between two spectra. The process is compared to other methods for the extraction of information from the continuum. Finally, we briefly explore the possible uses of this technique in other applications involving continuum spectra.

  15. Global Sensitivity Analysis for Process Identification under Model Uncertainty

    NASA Astrophysics Data System (ADS)

    Ye, M.; Dai, H.; Walker, A. P.; Shi, L.; Yang, J.

    2015-12-01

    An environmental system consists of various physical, chemical, and biological processes, and environmental models are built to simulate these processes and their interactions. For model building, improvement, and validation, it is necessary to identify important processes so that limited resources can be used to better characterize them. While global sensitivity analysis has been widely used to identify important processes, process identification has typically been based on deterministic process conceptualization, using a single model to represent each process. However, environmental systems are complex, and it often happens that a single process may be simulated by multiple alternative models. Ignoring model uncertainty in process identification may bias the identification, in that processes identified as important may not be so in the real world. This study addresses this problem by developing a new method of global sensitivity analysis for process identification. The new method is based on the concepts of Sobol sensitivity analysis and model averaging. Similar to Sobol sensitivity analysis for identifying important parameters, our new method evaluates the change in variance when a process is fixed at each of its alternative conceptualizations. The variance accounts for both parametric and model uncertainty through model averaging. The method is demonstrated using a synthetic groundwater modeling study that considers a recharge process and a parameterization process, each with two alternative models. Important processes of groundwater flow and transport are evaluated using the new method. The method is mathematically general and can be applied to a wide range of environmental problems.

  16. Analysis of pressure distortion testing

    NASA Technical Reports Server (NTRS)

    Koch, K. E.; Rees, R. L.

    1976-01-01

    The development of a distortion methodology, method D, was documented, and its application to steady state and unsteady data was demonstrated. Three methodologies based upon DIDENT, a NASA-LeRC distortion methodology based upon the parallel compressor model, were investigated by applying them to a set of steady state data. The best formulation was then applied to an independent data set. The good correlation achieved with this data set showed that method E, one of the above methodologies, is a viable concept. Unsteady data were analyzed by using the method E methodology. This analysis pointed out that the method E sensitivities are functions of pressure defect level as well as corrected speed and pattern.

  17. Statistical analysis of flight times for space shuttle ferry flights

    NASA Technical Reports Server (NTRS)

    Graves, M. E.; Perlmutter, M.

    1974-01-01

    Markov chain and Monte Carlo analysis techniques are applied to the simulated Space Shuttle Orbiter Ferry flights to obtain statistical distributions of flight time duration between Edwards Air Force Base and Kennedy Space Center. The two methods are compared, and are found to be in excellent agreement. The flights are subjected to certain operational and meteorological requirements, or constraints, which cause eastbound and westbound trips to yield different results. Persistence of events theory is applied to the occurrence of inclement conditions to find their effect upon the statistical flight time distribution. In a sensitivity test, some of the constraints are varied to observe the corresponding changes in the results.

  18. Advanced imaging techniques in brain tumors

    PubMed Central

    2009-01-01

    Perfusion, permeability and magnetic resonance spectroscopy (MRS) are now widely used in research and clinical settings. In the clinical setting, qualitative, semi-quantitative and quantitative approaches, ranging from review of color-coded maps to region-of-interest analysis and analysis of signal intensity curves, are being applied in practice. There are several pitfalls with all of these approaches. Some of these shortcomings are reviewed, such as the relatively low sensitivity of metabolite ratios from MRS, the effect of leakage on the appearance of color-coded maps from dynamic susceptibility contrast (DSC) magnetic resonance (MR) perfusion imaging, and the correction and normalization methods that can be applied. Combining and applying these different imaging techniques in a multi-parametric, algorithmic fashion in the clinical setting can be shown to increase diagnostic specificity and confidence. PMID:19965287

  19. Modeling and simulation of deformation of hydrogels responding to electric stimulus.

    PubMed

    Li, Hua; Luo, Rongmo; Lam, K Y

    2007-01-01

    A model for the simulation of pH-sensitive hydrogels is refined in this paper to extend its application to electric-sensitive hydrogels; the result is termed the refined multi-effect-coupling electric-stimulus (rMECe) model. Through reformulation of the fixed-charge density and consideration of finite deformation, the rMECe model is able to predict the responsive deformations of hydrogels immersed in a bath solution subject to an externally applied electric field. The rMECe model consists of nonlinear partial differential governing equations with chemo-electro-mechanical coupling effects and a fixed-charge density that incorporates the electric-field effect. Comparison between simulations and experiments extracted from the literature verifies that the model is accurate and stable. The rMECe model supports quantitative deformation analysis of electric-sensitive hydrogels. The influences of several physical parameters, including the externally applied electric voltage, initial fixed-charge density, hydrogel strip thickness, and the ionic strength and valence of the surrounding solution, on the displacement and average curvature of the hydrogels are discussed in detail.

  20. Application of Sal classification to parotid gland fine-needle aspiration cytology: 10-year retrospective analysis of 312 patients.

    PubMed

    Kilavuz, Ahmet Erdem; Songu, Murat; İmre, Abdulkadir; Arslanoğlu, Secil; Özkul, Yilmaz; Pinar, Ercan; Ateş, Düzgün

    2018-05-01

    The accuracy of fine-needle aspiration biopsy (FNAB) is controversial in parotid tumors. We aimed to compare FNAB results with the final histopathological diagnosis, to apply the "Sal classification" to our data, and to discuss its results and its place in parotid gland cytology. The FNAB cytological findings and final histological diagnoses were assessed retrospectively under 2 different scenarios based on the distribution of nondefinitive cytology, and we applied the Sal classification and determined the malignancy rate, sensitivity, and specificity for each category. In the 2 scenarios, FNAB sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were found to be 81%, 87%, 54.7%, and 96.1%, and 65.3%, 100%, 100%, and 96.1%, respectively. The malignancy rates, sensitivities, and specificities were also calculated and discussed for each Sal category. We believe that the Sal classification has great potential to be a useful tool in the classification of parotid gland cytology. © 2018 Wiley Periodicals, Inc.

  1. Enhancing the sensitivity of mid-IR quantum cascade laser-based cavity-enhanced absorption spectroscopy using RF current perturbation.

    PubMed

    Manfred, Katherine M; Kirkbride, James M R; Ciaffoni, Luca; Peverall, Robert; Ritchie, Grant A D

    2014-12-15

    The sensitivity of mid-IR quantum cascade laser (QCL) off-axis cavity-enhanced absorption spectroscopy (CEAS), often limited by cavity mode structure and diffraction losses, was enhanced by applying broadband RF noise to the laser current. A pump-probe measurement demonstrated that the addition of bandwidth-limited white noise effectively increased the laser linewidth, thereby reducing the mode structure associated with CEAS. The broadband noise source offers a more sensitive, more robust alternative to applying single-frequency noise to the laser. Analysis of CEAS measurements of a CO₂ absorption feature at 1890 cm⁻¹ averaged over 100 ms yielded a minimum detectable absorption of 5.5×10⁻³ Hz⁻¹/² in the presence of broadband RF perturbation, nearly a tenfold improvement over the unperturbed regime. The short acquisition time makes this technique suitable for breath applications requiring breath-by-breath gas concentration information.

  2. Constrained reduced-order models based on proper orthogonal decomposition

    DOE PAGES

    Reddy, Sohail R.; Freno, Brian Andrew; Cizmas, Paul G. A.; ...

    2017-04-09

    A novel approach is presented to constrain reduced-order models (ROM) based on proper orthogonal decomposition (POD). The Karush–Kuhn–Tucker (KKT) conditions were applied to the traditional reduced-order model to constrain the solution to user-defined bounds. The constrained reduced-order model (C-ROM) was applied and validated against the analytical solution to the first-order wave equation. C-ROM was also applied to the analysis of fluidized beds. Lastly, it was shown that the ROM and C-ROM produced accurate results and that C-ROM was less sensitive to error propagation through time than the ROM.
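
    For readers unfamiliar with POD, the reduced basis is typically obtained from the SVD of a snapshot matrix. The sketch below builds such a basis for an invented 1-D field and reports the truncation error; it illustrates plain POD only, not the paper's KKT-constrained formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    # Snapshot matrix: each column is the solution field at one time step
    # (a slowly deforming sine wave plus noise, purely synthetic).
    n_dof, n_snap = 200, 40
    x = np.linspace(0, 1, n_dof)
    snapshots = np.column_stack([np.sin(np.pi * x * (1 + 0.02 * j))
                                 + 0.01 * rng.standard_normal(n_dof)
                                 for j in range(n_snap)])

    # POD: left singular vectors, ordered by energy content.
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.9999)) + 1   # modes for 99.99% energy
    basis = U[:, :r]

    # Reduced coordinates and reconstruction error.
    a = basis.T @ snapshots
    err = np.linalg.norm(snapshots - basis @ a) / np.linalg.norm(snapshots)
    print(f"retained modes: {r}, relative error: {err:.2e}")
    ```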

  3. Supersonic molecular beam-hyperthermal surface ionisation coupled with time-of-flight mass spectrometry applied to trace level detection of polynuclear aromatic hydrocarbons in drinking water for reduced sample preparation and analysis time.

    PubMed

    Davis, S C; Makarov, A A; Hughes, J D

    1999-01-01

    Analysis of sub-ppb levels of polynuclear aromatic hydrocarbons (PAHs) in drinking water by high-performance liquid chromatography (HPLC) with fluorescence detection typically requires large water samples and lengthy extraction procedures. The detection itself, although selective, does not provide confirmation of compound identity. Benchtop gas chromatography/mass spectrometry (GC/MS) systems operating in the more sensitive selected ion monitoring (SIM) acquisition mode discard spectral information and, when operating in scanning mode, are less sensitive and scan too slowly. The selectivity of hyperthermal surface ionisation (HSI), the high column flow rate capacity of the supersonic molecular beam (SMB) GC/MS interface, and the high acquisition rate of time-of-flight (TOF) mass analysis are combined here to provide a rapid, specific and sensitive technique for the analysis of trace levels of PAHs in water. This work reports the advantages gained by using the GC/HSI-TOF system over the HPLC fluorescence method, and discusses in some detail the nature of the instrumentation used.

  4. Use of social network analysis and global sensitivity and uncertainty analyses to better understand an influenza outbreak.

    PubMed

    Liu, Jianhua; Jiang, Hongbo; Zhang, Hao; Guo, Chun; Wang, Lei; Yang, Jing; Nie, Shaofa

    2017-06-27

    In the summer of 2014, an influenza A(H3N2) outbreak occurred in Yichang city, Hubei province, China. A retrospective study was conducted to collect and interpret hospital and epidemiological data on it using social network analysis together with global sensitivity and uncertainty analyses. Results for degree (χ2=17.6619, P<0.0001) and betweenness (χ2=21.4186, P<0.0001) centrality suggested that the selection of sampling objects differed between traditional epidemiological methods and the newer statistical approaches. Clique and network diagrams demonstrated that the outbreak actually consisted of two independent transmission networks. Sensitivity analysis showed that the contact coefficient (k) was the most important factor in the dynamic model. Using uncertainty analysis, we were able to better understand the properties of the outbreak and its variation over space and time. We conclude that the newer approaches are significantly more efficient for managing and controlling infectious disease outbreaks, as well as for saving time and public health resources, and could be widely applied to similar local outbreaks.
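
    Degree and betweenness centrality of the kind reported above are straightforward to compute with networkx; the contact edges below are invented placeholders, not the study's data.

    ```python
    import networkx as nx

    # Hypothetical contact-tracing edges between cases (labels invented).
    edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"),
             ("D", "E"), ("E", "F"), ("E", "G"), ("D", "H")]
    G = nx.Graph(edges)

    degree = nx.degree_centrality(G)
    betweenness = nx.betweenness_centrality(G, normalized=True)

    # Cases ranked by betweenness: likely "bridges" between clusters.
    for node in sorted(G, key=betweenness.get, reverse=True):
        print(f"{node}: degree={degree[node]:.2f} "
              f"betweenness={betweenness[node]:.2f}")
    ```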

  5. Analysis of glycosaminoglycan-derived disaccharides by capillary electrophoresis using laser-induced fluorescence detection

    PubMed Central

    Chang, Yuqing; Yang, Bo; Zhao, Xue; Linhardt, Robert J.

    2012-01-01

    A quantitative and highly sensitive method for the analysis of glycosaminoglycan (GAG)-derived disaccharides is presented that relies on capillary electrophoresis (CE) with laser-induced fluorescence (LIF) detection. This method enables complete separation of seventeen GAG-derived disaccharides in a single run. Unsaturated disaccharides were derivatized with 2-aminoacridone (AMAC) to improve sensitivity. The limit of detection was at the attomole level, about 100-fold more sensitive than traditional CE-ultraviolet detection. A CE separation timetable was developed to achieve complete resolution and shorten analysis time. The RSDs of migration time and peak area at both low and high concentrations of unsaturated disaccharides were all less than 2.7% and 3.2%, respectively, demonstrating that this is a reproducible method. The analysis was successfully applied to cultured Chinese hamster ovary cell samples for the determination of GAG disaccharides. The current method simplifies GAG extraction steps and reduces the inaccuracy in calculating ratios of heparin/heparan sulfate to chondroitin sulfate/dermatan sulfate that results from separate analyses of a single sample. PMID:22609076

  6. A fully battery-powered inexpensive spectrophotometric system for high-sensitivity point-of-care analysis on a microfluidic chip

    PubMed Central

    Dou, Maowei; Lopez, Juan; Rios, Misael; Garcia, Oscar; Xiao, Chuan; Eastman, Michael

    2016-01-01

    A cost-effective battery-powered spectrophotometric system (BASS) was developed for quantitative point-of-care (POC) analysis on a microfluidic chip. Using methylene blue as a model analyte, we first compared the performance of the BASS with a commercial spectrophotometric system, and then applied the BASS to loop-mediated isothermal amplification (LAMP) detection and subsequent quantitative nucleic acid analysis, which exhibited a limit of detection comparable to that of the Nanodrop. Compared to the commercial spectrophotometric system, our spectrophotometric system is lower in cost, consumes fewer reagents, and has higher detection sensitivity. Most importantly, it does not rely on external power supplies. All these features make our spectrophotometric system highly suitable for a variety of POC analyses, such as field detection. PMID:27143408

  7. Simulation-based optimization framework for reuse of agricultural drainage water in irrigation.

    PubMed

    Allam, A; Tawfik, A; Yoshimura, C; Fleifle, A

    2016-05-01

    A simulation-based optimization framework for agricultural drainage water (ADW) reuse has been developed through the integration of a water quality model (QUAL2Kw) and a genetic algorithm. This framework was applied to the Gharbia drain in the Nile Delta, Egypt, in summer and winter 2012. First, the water quantity and quality of the drain were simulated using the QUAL2Kw model. Second, uncertainty analysis and sensitivity analysis based on Monte Carlo simulation were performed to assess QUAL2Kw's performance and to identify the most critical variables for the determination of water quality, respectively. Finally, a genetic algorithm was applied to maximize the total reuse quantity from seven reuse locations, subject to the constraint that standards for using mixed water in irrigation are not violated. The water quality simulations showed that organic matter concentrations are critical management variables in the Gharbia drain. The uncertainty analysis showed the reliability of QUAL2Kw for simulating water quality and quantity along the drain. Furthermore, the sensitivity analysis showed that the 5-day biochemical oxygen demand, chemical oxygen demand, total dissolved solids, total nitrogen and total phosphorus are highly sensitive to point source flow and quality. The optimization results revealed that the reuse quantities of ADW can reach 36.3% and 40.4% of the available ADW in the drain during summer and winter, respectively. These quantities meet 30.8% and 29.1% of the drainage basin requirements for fresh irrigation water in the respective seasons. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. A new methodology based on sensitivity analysis to simplify the recalibration of functional-structural plant models in new conditions.

    PubMed

    Mathieu, Amélie; Vidal, Tiphaine; Jullien, Alexandra; Wu, QiongLi; Chambon, Camille; Bayol, Benoit; Cournède, Paul-Henry

    2018-06-19

    Functional-structural plant models (FSPMs) explicitly describe the interactions between plants and their environment at organ-to-plant scale. However, the high level of detail in the structure and model mechanisms makes this type of model very complex and hard to calibrate. A two-step methodology to facilitate the calibration process is proposed here. First, a global sensitivity analysis method was applied to the calibration loss function. It provided first-order and total-order sensitivity indices that allow parameters to be ranked by importance so that the most influential ones can be selected. Second, the Akaike information criterion (AIC) was used to quantify the model's quality of fit after calibration with different combinations of selected parameters. The model with the lowest AIC gives the best combination of parameters to select. This methodology was validated by calibrating the model on an independent data set (same cultivar, another year) with the parameters selected in the second step. All parameters were set to their nominal values; only the most influential ones were re-estimated. Sensitivity analysis applied to the calibration loss function is a relevant way to single out the most influential parameters in the estimation process. For the winter oilseed rape model studied, 11 of the 26 estimated parameters were selected. The model could then be recalibrated for a different data set by re-estimating only the three parameters chosen with the model selection method. Fitting only a small number of parameters dramatically increases the efficiency of recalibration, increases the robustness of the model and helps identify the principal sources of variation under varying environmental conditions. This innovative method still needs wider validation but already suggests promising avenues for improving the calibration of FSPMs.
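
    First-order and total-order indices of the kind used in the first step can be computed with a library such as SALib; the sketch below uses a stand-in quadratic loss with invented parameter names and bounds, purely to show the mechanics.

    ```python
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    # Stand-in calibration loss over three hypothetical model parameters.
    def loss(x):
        return (x[0] - 1.0) ** 2 + 0.1 * x[1] ** 2 + 0.5 * x[0] * x[2]

    problem = {
        "num_vars": 3,
        "names": ["leaf_rate", "sink_strength", "alloc_coeff"],  # invented
        "bounds": [[0.0, 2.0], [0.0, 2.0], [0.0, 2.0]],
    }

    X = saltelli.sample(problem, 1024)      # Saltelli cross-sampling design
    Y = np.apply_along_axis(loss, 1, X)
    Si = sobol.analyze(problem, Y)

    for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
        print(f"{name}: first-order={s1:.2f} total-order={st:.2f}")
    ```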

  9. Supraorbital Versus Endoscopic Endonasal Approaches for Olfactory Groove Meningiomas: A Cost-Minimization Study.

    PubMed

    Gandhoke, Gurpreet S; Pease, Matthew; Smith, Kenneth J; Sekula, Raymond F

    2017-09-01

    To perform a cost-minimization study comparing the supraorbital and endoscopic endonasal (EEA) approach with or without craniotomy for the resection of olfactory groove meningiomas (OGMs). We built a decision tree using probabilities of gross total resection (GTR) and cerebrospinal fluid (CSF) leak rates with the supraorbital approach versus EEA with and without additional craniotomy. The cost (not charge or reimbursement) at each "stem" of this decision tree for both surgical options was obtained from our hospital's finance department. After a base case calculation, we applied plausible ranges to all parameters and carried out multiple 1-way sensitivity analyses. Probabilistic sensitivity analyses confirmed our results. The probabilities of GTR (0.8) and CSF leak (0.2) for the supraorbital craniotomy were obtained from our series of 5 patients who underwent a supraorbital approach for the resection of an OGM. The mean tumor volume was 54.6 cm³ (range, 17-94.2 cm³). Literature-reported rates of GTR (0.6) and CSF leak (0.3) with EEA were applied to our economic analysis. Supraorbital craniotomy was the preferred strategy, with an expected value of $29,423, compared with an EEA cost of $83,838. On multiple 1-way sensitivity analyses, supraorbital craniotomy remained the preferred strategy, with a minimum cost savings of $46,000 and a maximum savings of $64,000. Probabilistic sensitivity analysis found the lowest cost difference between the 2 surgical options to be $37,431. Compared with EEA, supraorbital craniotomy provides substantial cost savings in the treatment of OGMs. Given the potential differences in effectiveness between approaches, a cost-effectiveness analysis should be undertaken. Copyright © 2017 Elsevier Inc. All rights reserved.
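
    The decision-tree arithmetic can be sketched as follows. The probabilities are those reported above, while the stem costs and the handling of the non-GTR and CSF-leak branches are hypothetical simplifications, since the paper's hospital cost figures are not reproduced here.

    ```python
    def expected_cost(p_gtr, p_leak, c_surgery, c_reoperation, c_leak_repair):
        # hypothetical tree: failure to achieve GTR incurs a re-operation cost,
        # and a CSF leak incurs an additional repair cost
        return c_surgery + (1 - p_gtr) * c_reoperation + p_leak * c_leak_repair

    supraorbital = expected_cost(0.8, 0.2, 20_000, 30_000, 15_000)
    eea = expected_cost(0.6, 0.3, 45_000, 60_000, 25_000)
    print(f"supraorbital ${supraorbital:,.0f} vs EEA ${eea:,.0f}")

    # one-way sensitivity analysis: vary a single parameter over a plausible
    # range while holding the others at their base-case values
    for p_leak in (0.1, 0.2, 0.3, 0.4):
        print(p_leak, expected_cost(0.8, p_leak, 20_000, 30_000, 15_000))
    ```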

  10. Solid oxide fuel cell simulation and design optimization with numerical adjoint techniques

    NASA Astrophysics Data System (ADS)

    Elliott, Louie C.

    This dissertation reports on the application of numerical optimization techniques as applied to fuel cell simulation and design. Due to the "multi-physics" inherent in a fuel cell, which results in a highly coupled and non-linear behavior, an experimental program to analyze and improve the performance of fuel cells is extremely difficult. This program applies new optimization techniques with computational methods from the field of aerospace engineering to the fuel cell design problem. After an overview of fuel cell history, importance, and classification, a mathematical model of solid oxide fuel cells (SOFC) is presented. The governing equations are discretized and solved with computational fluid dynamics (CFD) techniques including unstructured meshes, non-linear solution methods, numerical derivatives with complex variables, and sensitivity analysis with adjoint methods. Following the validation of the fuel cell model in 2-D and 3-D, the results of the sensitivity analysis are presented. The sensitivity derivative for a cost function with respect to a design variable is found with three increasingly sophisticated techniques: finite difference, direct differentiation, and adjoint. A design cycle is performed using a simple optimization method to improve the value of the implemented cost function. The results from this program could improve fuel cell performance and lessen the world's dependence on fossil fuels.
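
    The "numerical derivatives with complex variables" mentioned above refer to the complex-step method, illustrated below on a standard scalar test function standing in for a fuel cell cost function: f'(x) ≈ Im f(x + ih)/h, which avoids the subtractive cancellation that limits finite differences.

    ```python
    import numpy as np

    def f(x):                          # scalar stand-in for a fuel cell cost function
        return np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)

    x, h = 1.5, 1e-20
    complex_step = np.imag(f(x + 1j * h)) / h     # accurate to machine precision
    forward_fd = (f(x + 1e-8) - f(x)) / 1e-8      # limited by cancellation error
    print(complex_step, forward_fd)
    ```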

  11. Development of a highly sensitive and specific ELISA method for the determination of l-corydalmine in SD rats with monoclonal antibody.

    PubMed

    Zhang, Hongwei; Gao, Lan; Shu, Menglin; Liu, Jihua; Yu, Boyang

    2018-01-15

    l-Corydalmine (l-CDL) is a potent analgesic constituent of the traditional Chinese medicine, Rhizoma Corydalis. However, the pharmacokinetic process and tissue distribution of l-CDL in vivo are still unknown. Therefore, it is necessary to establish a simple and sensitive method to detect l-CDL, which will be helpful to study its distribution and pharmacokinetic process. To determine this compound in biological samples, a monoclonal antibody (mAb) against l-CDL was produced and a fast and highly sensitive indirect competitive enzyme-linked immunosorbent assay (icELISA) was developed in this study. The icELISA was applied to determine l-CDL in biological samples. The limit of detection (LOD) of the method was 0.015 ng/mL, with a linear range of 1-1000 ng/mL (R² = 0.9912). The intra- and inter-day precision were below 15% and the recoveries were within 80-117%. Finally, the developed immunoassay was successfully applied to the analysis of the distribution of l-CDL in SD rats. In conclusion, the icELISA based on the anti-l-CDL mAb can be considered a highly sensitive and rapid method for the determination of l-CDL in biological samples. The ELISA approach may provide a valuable tool for the analysis of small molecules in biological samples. Copyright © 2017. Published by Elsevier B.V.

  12. Basin-scale geothermal model calibration: experience from the Perth Basin, Australia

    NASA Astrophysics Data System (ADS)

    Wellmann, Florian; Reid, Lynn

    2014-05-01

    The calibration of large-scale geothermal models for entire sedimentary basins is challenging as direct measurements of rock properties and subsurface temperatures are commonly scarce and the basal boundary conditions poorly constrained. Instead of the often applied "trial-and-error" manual model calibration, we examine here whether we can gain additional insight into parameter sensitivities and model uncertainty with a model analysis and calibration study. Our geothermal model is based on a high-resolution full 3-D geological model, covering an area of more than 100,000 square kilometers and extending to a depth of 55 kilometers. The model contains all major faults (>80) and geological units (13) for the entire basin. This geological model is discretised into a rectilinear mesh with a lateral resolution of 500 x 500 m and a variable resolution at depth. The highest resolution of 25 m is applied to a depth range of 1000-3000 m, where most temperature measurements are available. The entire discretised model consists of approximately 50 million cells. The top thermal boundary condition is derived from surface temperature measurements on land and ocean floor. The base of the model extends below the Moho, and we apply the heat flux over the Moho as a basal heat flux boundary condition. Rock properties (thermal conductivity, porosity, and heat production) have been compiled from several existing data sets. The conductive geothermal forward simulation is performed with SHEMAT, and we then use the stand-alone capabilities of iTOUGH2 for sensitivity analysis and model calibration. Simulated temperatures are compared to 130 quality-weighted bottom hole temperature measurements. The sensitivity analysis provided a clear insight into the most sensitive parameters and parameter correlations. This proved to be of value as strong correlations, for example between basal heat flux and heat production in deep geological units, can significantly influence the model calibration procedure. The calibration resulted in a better determination of subsurface temperatures and, in addition, provided an insight into model quality. Furthermore, a detailed analysis of the measurements used for calibration highlighted potential outliers and limitations of the model assumptions. Extending the previously existing large-scale geothermal simulation with iTOUGH2 provided us with valuable insight into the sensitive parameters and data in the model, which would clearly not be possible with a simple trial-and-error calibration method. Using the gained knowledge, future work will include more detailed studies on the influence of advection and convection.

  13. A Decision Analysis Framework for Evaluation of Helmet Mounted Display Alternatives for Fighter Aircraft

    DTIC Science & Technology

    2014-12-26

    additive value function, which assumes mutual preferential independence (Gregory S. Parnell, 2013). In other words, this method can be used if the... additive value function method to calculate the aggregate value of multiple objectives. Step 9 : Sensitivity Analysis Once the global values are...gravity metric, the additive method will be applied using equal weights for each axis value function. Pilot Satisfaction (Usability) As expressed
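
    A minimal illustration of the additive value function named in this fragment: the aggregate value is a weighted sum of single-attribute value functions, with equal weights as described for the center-of-gravity metric. The attributes and scores below are invented.

    ```python
    # equal weights across three hypothetical HMD evaluation attributes
    weights = {"field_of_view": 1 / 3, "center_of_gravity": 1 / 3, "usability": 1 / 3}
    values = {"field_of_view": 0.7, "center_of_gravity": 0.5, "usability": 0.9}  # v_i in [0, 1]

    aggregate = sum(weights[a] * values[a] for a in weights)  # V(x) = sum_i w_i * v_i(x_i)
    print(f"aggregate value = {aggregate:.3f}")
    ```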

  14. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1993-01-01

    In this study involving advanced fluid flow codes, an incremental iterative formulation (also known as the delta or correction form) together with the well-known spatially-split approximate factorization algorithm, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For smaller 2D problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods are needed for larger 2D and future 3D applications, however, because direct methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioning of the coefficient matrix; this problem can be overcome when these equations are cast in the incremental form. These and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two sample airfoil problems: (1) subsonic low Reynolds number laminar flow; and (2) transonic high Reynolds number turbulent flow.

  15. Diagnostic features of Alzheimer's disease extracted from PET sinograms

    NASA Astrophysics Data System (ADS)

    Sayeed, A.; Petrou, M.; Spyrou, N.; Kadyrov, A.; Spinks, T.

    2002-01-01

    Texture analysis of positron emission tomography (PET) images of the brain is a very difficult task, due to the poor signal to noise ratio. As a consequence, very few techniques can be implemented successfully. We use a new global analysis technique known as the Trace transform triple features. This technique can be applied directly to the raw sinograms to distinguish patients with Alzheimer's disease (AD) from normal volunteers. FDG-PET images of 18 AD and 10 normal controls obtained from the same CTI ECAT-953 scanner were used in this study. The Trace transform triple feature technique was used to extract features that were invariant to scaling, translation and rotation, referred to as invariant features, as well as features that were sensitive to rotation but invariant to scaling and translation, referred to as sensitive features in this study. The features were used to classify the groups using discriminant function analysis. Cross-validation tests using stepwise discriminant function analysis showed that combining both sensitive and invariant features produced the best results, when compared with the clinical diagnosis. Selecting the five best features produces an overall accuracy of 93% with sensitivity of 94% and specificity of 90%. This is comparable with the classification accuracy achieved by Kippenhan et al (1992), using regional metabolic activity.

  16. Receiver operating characteristic analysis of age-related changes in lineup performance.

    PubMed

    Humphries, Joyce E; Flowe, Heather D

    2015-04-01

    In the basic face memory literature, support has been found for the late maturation hypothesis, which holds that face recognition ability is not fully developed until at least adolescence. Support for the late maturation hypothesis in the criminal lineup identification literature, however, has been equivocal because of the analytic approach that has been used to examine age-related changes in identification performance. Recently, receiver operating characteristic (ROC) analysis was applied for the first time in the adult eyewitness memory literature to examine whether memory sensitivity differs across different types of lineup tests. ROC analysis allows for the separation of memory sensitivity from response bias in the analysis of recognition data. Here, we have made the first ROC-based comparison of adults' and children's (5- and 6-year-olds and 9- and 10-year-olds) memory performance on lineups by reanalyzing data from Humphries, Holliday, and Flowe (2012). In line with the late maturation hypothesis, memory sensitivity was significantly greater for adults compared with young children. Memory sensitivity for older children was similar to that for adults. The results indicate that the late maturation hypothesis can be generalized to account for age-related performance differences on an eyewitness memory task. The implications for developmental eyewitness memory research are discussed. Copyright © 2014 Elsevier Inc. All rights reserved.
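
    The core of the ROC logic can be sketched as follows: sweeping a confidence criterion traces out hit rate against false-alarm rate, so that discriminability (curve height, e.g. AUC) is read separately from response bias (position along the curve). The rating distributions below are simulated for illustration, not the reanalyzed lineup data.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    culprit = rng.normal(1.0, 1.0, 500)      # confidence ratings, culprit-present trials
    innocent = rng.normal(0.0, 1.0, 500)     # confidence ratings, innocent suspects

    criteria = np.linspace(-3, 4, 50)        # sweep the decision criterion
    hit_rate = [(culprit > c).mean() for c in criteria]
    fa_rate = [(innocent > c).mean() for c in criteria]
    auc = -np.trapz(hit_rate, fa_rate)       # discriminability, independent of bias
    print(f"AUC = {auc:.3f}")
    ```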

  17. Analysis of the NAEG model of transuranic radionuclide transport and dose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kercher, J.R.; Anspaugh, L.R.

    We analyze the model for estimating the dose from ²³⁹Pu developed for the Nevada Applied Ecology Group (NAEG) by using sensitivity analysis and uncertainty analysis. Sensitivity analysis results suggest that the air pathway is the critical pathway for the organs receiving the highest dose. Soil concentration and the factors controlling air concentration are the most important parameters. The only organ whose dose is sensitive to parameters in the ingestion pathway is the GI tract. The air pathway accounts for 100% of the dose to lung, upper respiratory tract, and thoracic lymph nodes; the GI tract receives 95% of its dose via ingestion. Leafy vegetable ingestion accounts for 70% of the dose from the ingestion pathway regardless of organ, peeled vegetables 20%, accidental soil ingestion 5%, ingestion of beef liver 4%, and beef muscle 1%. Only a handful of model parameters control the dose for any one organ; the number of important parameters is usually less than 10. Uncertainty analysis indicates that choosing a uniform distribution for the input parameters produces a lognormal distribution of the dose. The ratio of the square root of the variance to the mean is three times greater for the doses than it is for the individual parameters. As found by the sensitivity analysis, the uncertainty analysis suggests that only a few parameters control the dose for each organ. All organs have similar distributions and variance-to-mean ratios except for the lymph nodes. 16 references, 9 figures, 13 tables.

  18. Sensitivity analysis for axis rotation diagrid structural systems according to brace angle changes

    NASA Astrophysics Data System (ADS)

    Yang, Jae-Kwang; Li, Long-Yang; Park, Sung-Soo

    2017-10-01

    Diagrid structures of generally regular shape can express diverse forms because braces are installed along the exterior faces of the structures and the structures have no columns. However, since irregular shaped structures involve diverse variables, studies assessing the behaviors resulting from these variables are continuously required to address the associated design uncertainties. In the present study, the elastic modulus and yield strength of the materials were selected as strength variables for diagrid structural systems of the Twister type, among the irregular shaped buildings classified by Vollers, since these variables affect the structural design of such systems. The purpose of this study is to conduct a sensitivity analysis of axial-rotation diagrid structural systems with respect to changes in brace angle, in order to identify the design variables with relatively large effects and to characterize how the sensitivity of the structures varies with brace angle and axial rotation angle.

  19. Design and simulation analysis of a novel pressure sensor based on graphene film

    NASA Astrophysics Data System (ADS)

    Nie, M.; Xia, Y. H.; Guo, A. Q.

    2018-02-01

    A novel pressure sensor structure based on graphene film as the sensitive membrane is proposed in this paper, addressing the problem of measuring low and minor pressures with high sensitivity. Moreover, the fabrication process was designed to be compatible with CMOS IC fabrication technology. Finite element analysis was used to simulate the displacement distribution of the thin movable graphene film of the designed pressure sensor under different pressures and with different dimensions. From the simulation results, an optimized structure was obtained that can be applied in the low measurement range from 10 hPa to 60 hPa. The length and thickness of the graphene film could be designed as 100 μm and 0.2 μm, respectively. The maximum mechanical stress on the edge of the sensitive membrane was 1.84 kPa, far below the breaking strength of the silicon nitride and graphene film.

  20. Near-surface compressional and shear wave speeds constrained by body-wave polarization analysis

    NASA Astrophysics Data System (ADS)

    Park, Sunyoung; Ishii, Miaki

    2018-06-01

    A new technique is presented that constrains near-surface seismic structure by relating body-wave polarization direction to the wave speed immediately beneath a seismic station. The P-wave polarization direction is sensitive only to the shear wave speed, not the compressional wave speed, while the S-wave polarization direction is sensitive to both wave speeds. The technique is applied to data from the High-Sensitivity Seismograph Network in Japan, and the results show that the wave speed estimates obtained from polarization analysis are compatible with those from borehole measurements. The lateral variations in wave speeds correlate with geological and physical features such as topography and volcanoes. The technique requires minimal computational resources and can be used on any number of three-component teleseismic recordings, opening opportunities for non-invasive and inexpensive study of the shallowest (~100 m) crustal structures.

  1. Perspective: Optical measurement of feature dimensions and shapes by scatterometry

    NASA Astrophysics Data System (ADS)

    Diebold, Alain C.; Antonelli, Andy; Keller, Nick

    2018-05-01

    The use of optical scattering to measure feature shape and dimensions, scatterometry, is now routine during semiconductor manufacturing. Scatterometry iteratively improves an optical model structure using simulations that are compared to experimental data from an ellipsometer. These simulations use rigorous coupled wave analysis to solve Maxwell's equations. In this article, we describe Mueller matrix spectroscopic ellipsometry-based scatterometry. Next, the rigorous coupled wave analysis of Maxwell's equations is presented. Following this, several example measurements are described as they apply to specific process steps in the fabrication of gate-all-around (GAA) transistor structures. First, simulations of measurement sensitivity for the inner spacer etch-back step of horizontal GAA transistor processing are described. Next, the simulated metrology sensitivity for the sacrificial (dummy) amorphous silicon etch-back step of vertical GAA transistor processing is discussed. Finally, we present the application of plasmonically active test structures for improving the sensitivity of the measurement of metal linewidths.

  2. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Mai, J.; Tolson, B.

    2017-12-01

    The increasing complexity and runtime of environmental models mean that calibrating all model parameters, or estimating all of their uncertainties, is often computationally infeasible. Hence, techniques that determine the sensitivity of model parameters are used to identify the most important parameters. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While examining the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. At best, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, can itself become computationally expensive for large model outputs and a high number of bootstraps. We therefore present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards; the latter case enables the checking of already processed sensitivity indexes. To demonstrate the method's independence of the convergence-testing method, we applied it to two widely used global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991) and the variance-based Sobol' method (Sobol' 1993). The new convergence-testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of these methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. The results show that the new frugal method is able to test the convergence, and therefore the reliability, of SA results in an efficient way. The appealing feature of this new technique is that it requires no further model evaluations and therefore enables the checking of already processed sensitivity results. This is one step towards reliable, transferable, published sensitivity results.
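
    For reference, the bootstrapping baseline that the abstract contrasts with its run-free MVA test can be sketched in pure NumPy: a Saltelli-style first-order Sobol' estimate on the Ishigami benchmark, with row-resampling bootstrap confidence intervals. This illustrates the conventional approach, not the MVA method itself.

    ```python
    import numpy as np

    def ishigami(x, a=7.0, b=0.1):           # standard SA benchmark function
        return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

    rng = np.random.default_rng(3)
    N, d = 4096, 3
    A = rng.uniform(-np.pi, np.pi, (N, d))
    B = rng.uniform(-np.pi, np.pi, (N, d))
    fA, fB = ishigami(A), ishigami(B)
    fAB = []
    for i in range(d):                       # A with column i replaced from B
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        fAB.append(ishigami(ABi))

    def S1(fA, fB, fABi):                    # Saltelli-style first-order estimator
        return np.mean(fB * (fABi - fA)) / np.var(np.concatenate([fA, fB]))

    print("S1:", np.round([S1(fA, fB, fAB[i]) for i in range(d)], 3))

    boot = []                                # bootstrap reliability of S1 for x1:
    for _ in range(500):                     # no new model runs, but not free either
        idx = rng.integers(0, N, N)
        boot.append(S1(fA[idx], fB[idx], fAB[0][idx]))
    print("95% CI:", np.percentile(boot, [2.5, 97.5]).round(3))
    ```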

  3. How much crosstalk can be allowed in a stereoscopic system at various grey levels?

    NASA Astrophysics Data System (ADS)

    Shestak, Sergey; Kim, Daesik; Kim, Yongie

    2012-03-01

    We have calculated a perceptual threshold of stereoscopic crosstalk on the basis of a mathematical model of human vision sensitivity. Instead of the linear just-noticeable-difference (JND) model known as Weber's law, we applied the nonlinear Barten model. The predicted crosstalk threshold varies with the background luminance. The calculated threshold values are in reasonable agreement with known experimental data. We also calculated the perceptual threshold of crosstalk for various combinations of applied grey levels; this result can be applied to the assessment of grey-to-grey crosstalk compensation. Further computational analysis with the applied model predicts an increase in displayable image contrast with a reduction of the maximum displayable luminance.

  4. Analysis of the stochastic excitability in the flow chemical reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bashkirtseva, Irina

    2015-11-30

    A dynamic model of the thermochemical process in a flow reactor is considered. We study the influence of random disturbances on the stationary regime of this model, and a phenomenon of noise-induced excitability is demonstrated. For the analysis of this phenomenon, a constructive technique based on stochastic sensitivity functions and confidence domains is applied. It is shown how the elaborated technique can be used for the probabilistic analysis of the generation of mixed-mode stochastic oscillations in the flow chemical reactor.

  5. Analysis of the stochastic excitability in the flow chemical reactor

    NASA Astrophysics Data System (ADS)

    Bashkirtseva, Irina

    2015-11-01

    A dynamic model of the thermochemical process in a flow reactor is considered. We study the influence of random disturbances on the stationary regime of this model, and a phenomenon of noise-induced excitability is demonstrated. For the analysis of this phenomenon, a constructive technique based on stochastic sensitivity functions and confidence domains is applied. It is shown how the elaborated technique can be used for the probabilistic analysis of the generation of mixed-mode stochastic oscillations in the flow chemical reactor.

  6. Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling

    NASA Technical Reports Server (NTRS)

    Hojnicki, Jeffrey S.; Rusick, Jeffrey J.

    2005-01-01

    Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all of the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).
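
    The shift from a deterministic to a probabilistic estimate can be illustrated generically: propagate input uncertainties by Monte Carlo sampling and report percentiles instead of a single value. The capability function and distributions below are invented stand-ins, not the SPACE model.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 100_000
    solar_flux = rng.normal(1361, 10, n)       # W/m^2, uncertain input
    array_eff = rng.normal(0.29, 0.01, n)      # uncertain conversion efficiency
    degradation = rng.uniform(0.90, 0.98, n)   # uncertain degradation factor
    area = 2500.0                              # m^2, treated as known

    power = solar_flux * array_eff * degradation * area / 1000  # kW
    print(f"mean {power.mean():.0f} kW, 5th-95th percentile "
          f"{np.percentile(power, 5):.0f}-{np.percentile(power, 95):.0f} kW")
    ```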

  7. Occupancy estimation and the closure assumption

    USGS Publications Warehouse

    Rota, Christopher T.; Fletcher, Robert J.; Dorazio, Robert M.; Betts, Matthew G.

    2009-01-01

    1. Recent advances in occupancy estimation that adjust for imperfect detection have provided substantial improvements over traditional approaches and are receiving considerable use in applied ecology. To estimate and adjust for detectability, occupancy modelling requires multiple surveys at a site and requires the assumption of 'closure' between surveys, i.e. no changes in occupancy between surveys. Violations of this assumption could bias parameter estimates; however, little work has assessed model sensitivity to violations of this assumption or how commonly such violations occur in nature. 2. We apply a modelling procedure that can test for closure to two avian point-count data sets in Montana and New Hampshire, USA, that exemplify time-scales at which closure is often assumed. These data sets illustrate different sampling designs that allow testing for closure but are currently rarely employed in field investigations. Using a simulation study, we then evaluate the sensitivity of parameter estimates to changes in site occupancy and evaluate a power analysis developed for sampling designs aimed at limiting the likelihood of closure violations. 3. Application of our approach to point-count data indicates that habitats may frequently be open to changes in site occupancy at time-scales typical of many occupancy investigations, with 71% and 100% of species investigated in Montana and New Hampshire, respectively, showing violations of closure across time periods of 3 weeks and 8 days, respectively. 4. Simulations suggest that models assuming closure are sensitive to changes in occupancy. Power analyses further suggest that the modelling procedure we apply can effectively test for closure. 5. Synthesis and applications. Our demonstration that sites may be open to changes in site occupancy over time-scales typical of many occupancy investigations, combined with the sensitivity of models to violations of the closure assumption, highlights the importance of properly addressing the closure assumption in both sampling designs and analysis. Furthermore, inappropriately applying closed models could have negative consequences when monitoring rare or declining species for conservation and management decisions, because violations of closure typically lead to overestimates of the probability of occurrence.

  8. A Small Range Six-Axis Accelerometer Designed with High Sensitivity DCB Elastic Element

    PubMed Central

    Sun, Zhibo; Liu, Jinhao; Yu, Chunzhan; Zheng, Yili

    2016-01-01

    This paper describes a small-range six-axis accelerometer (measurement range ±g) with a high-sensitivity DCB (Double Cantilever Beam) elastic element. The sensor is developed based on a parallel mechanism because of its reliability. The accuracy of such sensors is affected by their sensitivity characteristics; to improve the sensitivity, a DCB structure is applied as the elastic element. Through dynamic analysis, the dynamic model of the accelerometer is established using the Lagrange equation, and the mass matrix and stiffness matrix are obtained by a partial derivative calculation and a conservative congruence transformation, respectively. By simplifying the structure of the accelerometer, a model of the free vibration is achieved, and the parameters of the sensor are designed based on the model. Through stiffness analysis of the DCB structure, the deflection curve of the beam is calculated. Compared with the result obtained using a finite element analysis simulation in ANSYS Workbench, the coincidence rate of the maximum deflection is 89.0% along the x-axis, 88.3% along the y-axis and 87.5% along the z-axis. Through strain analysis of the DCB elastic element, the sensitivity of the beam is obtained. According to the experimental results, the accuracy of the theoretical analysis is 90.4% along the x-axis, 74.9% along the y-axis and 78.9% along the z-axis. The measurement errors of the linear accelerations ax, ay and az in the experiments are 2.6%, 0.6% and 1.31%, respectively. The experiments show that an accelerometer with a DCB elastic element exhibits excellent sensitivity and precision. PMID:27657089

  9. Integrated Droplet-Based Microextraction with ESI-MS for Removal of Matrix Interference in Single-Cell Analysis.

    PubMed

    Zhang, Xiao-Chao; Wei, Zhen-Wei; Gong, Xiao-Yun; Si, Xing-Yu; Zhao, Yao-Yao; Yang, Cheng-Dui; Zhang, Si-Chun; Zhang, Xin-Rong

    2016-04-29

    Integrating droplet-based microfluidics with mass spectrometry is essential to high-throughput and multiple analysis of single cells. Nevertheless, matrix effects such as the interference of culture medium and intracellular components influence the sensitivity and the accuracy of results in single-cell analysis. To resolve this problem, we developed a method that integrated droplet-based microextraction with single-cell mass spectrometry. Specific extraction solvent was used to selectively obtain intracellular components of interest and remove interference of other components. Using this method, UDP-Glc-NAc, GSH, GSSG, AMP, ADP and ATP were successfully detected in single MCF-7 cells. We also applied the method to study the change of unicellular metabolites in the biological process of dysfunctional oxidative phosphorylation. The method could not only realize matrix-free, selective and sensitive detection of metabolites in single cells, but also have the capability for reliable and high-throughput single-cell analysis.

  10. Derivation of Continuum Models from An Agent-based Cancer Model: Optimization and Sensitivity Analysis.

    PubMed

    Voulgarelis, Dimitrios; Velayudhan, Ajoy; Smith, Frank

    2017-01-01

    Agent-based models provide a formidable tool for exploring the complex and emergent behaviour of biological systems and give accurate results, but with the drawback of requiring substantial computational power and time for subsequent analysis. Equation-based models, on the other hand, can more easily be used for complex analysis on a much shorter timescale. This paper formulates an ordinary differential equation and stochastic differential equation model to capture the behaviour of an existing agent-based model of tumour cell reprogramming, and applies it to optimization of possible treatment as well as dosage sensitivity analysis. For certain values of the parameter space, a close match between the equation-based and agent-based models is achieved. The need for division of labour between the two approaches is explored. Copyright © Bentham Science Publishers; for any queries, please email epub@benthamscience.org.
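
    A toy version of the equation-based side, assuming a logistic tumour ODE with a dose-proportional kill term (not the paper's actual system), plus a finite-difference dosage sensitivity of the final tumour burden:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, y, r, K, k, dose):              # logistic growth minus dose-dependent kill
        return r * y * (1 - y / K) - k * dose * y

    def final_burden(dose):
        sol = solve_ivp(rhs, (0, 30), [1.0], args=(0.4, 100.0, 0.05, dose), rtol=1e-8)
        return sol.y[0, -1]

    d0, h = 2.0, 1e-4                          # central-difference dosage sensitivity
    sens = (final_burden(d0 + h) - final_burden(d0 - h)) / (2 * h)
    print(f"burden at dose {d0}: {final_burden(d0):.2f}, d(burden)/d(dose): {sens:.2f}")
    ```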

  11. Method of confidence domains in the analysis of noise-induced extinction for tritrophic population system

    NASA Astrophysics Data System (ADS)

    Bashkirtseva, Irina; Ryashko, Lev; Ryazanova, Tatyana

    2017-09-01

    A problem of the analysis of noise-induced extinction in multidimensional population systems is considered. To investigate the conditions of extinction caused by random disturbances, a new approach based on the stochastic sensitivity function technique and confidence domains is suggested and applied to a tritrophic population model of interacting prey, predator and top predator. This approach allows us to analyze constructively the probabilistic mechanisms of the transition to noise-induced extinction from both equilibrium and oscillatory regimes of coexistence. In this analysis, a method of principal directions for reducing the dimension of confidence domains is suggested. In the dispersion of random states, the principal subspace is defined by the ratio of the eigenvalues of the stochastic sensitivity matrix. A detailed analysis of two scenarios of noise-induced extinction, depending on the parameters of the considered tritrophic system, is carried out.
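
    The principal-directions idea can be sketched as an eigen-decomposition of the stochastic sensitivity matrix: directions with dominant eigenvalues span the subspace in which the confidence domain is effectively spread. The matrix below is an arbitrary symmetric example, not one computed from the tritrophic model.

    ```python
    import numpy as np

    W = np.array([[4.0, 1.0, 0.2],             # hypothetical stochastic sensitivity matrix
                  [1.0, 2.5, 0.1],
                  [0.2, 0.1, 0.05]])

    eigvals, eigvecs = np.linalg.eigh(W)
    order = np.argsort(eigvals)[::-1]          # sort descending
    ratios = eigvals[order] / eigvals[order][0]
    keep = ratios > 0.05                       # drop directions with negligible dispersion
    principal = eigvecs[:, order[keep]]
    print("eigenvalue ratios:", ratios.round(3), "- kept", principal.shape[1], "directions")
    ```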

  12. Diagnostic accuracy of enzyme-linked immunosorbent assay (ELISA) and immunoblot (IB) for the detection of antibodies against Neospora caninum in milk from dairy cows.

    PubMed

    Chatziprodromidou, I P; Apostolou, T

    2018-04-01

    The aim of the study was to estimate the sensitivity and specificity of enzyme-linked immunosorbent assay (ELISA) and immunoblot (IB) for detecting antibodies against Neospora caninum in dairy cows, in the absence of a gold standard. The study complies with STRADAS-paratuberculosis guidelines for reporting the accuracy of the tests. We tried to apply Bayesian models that do not require conditional independence of the tests under evaluation, but as convergence problems appeared, we used a Bayesian methodology that does not assume conditional dependence of the tests. Informative prior probability distributions were constructed based on scientific inputs regarding the sensitivity and specificity of the IB test and the prevalence of disease in the studied populations. IB sensitivity and specificity were estimated to be 98.8% and 91.3%, respectively, while the respective estimates for ELISA were 60% and 96.7%. A sensitivity analysis, in which modified prior probability distributions concerning IB diagnostic accuracy were applied, showed a limited effect on posterior assessments. We concluded that ELISA can be used to screen bulk milk, with IB used whenever needed as a second test.

  13. Recognition of central sensitization in patients with musculoskeletal pain: Application of pain neurophysiology in manual therapy practice.

    PubMed

    Nijs, Jo; Van Houdenhove, Boudewijn; Oostendorp, Rob A B

    2010-04-01

    Central sensitization plays an important role in the pathophysiology of numerous musculoskeletal pain disorders, yet it remains unclear how manual therapists can recognize this condition. Therefore, mechanism-based clinical guidelines for the recognition of central sensitization in patients with musculoskeletal pain are provided. By using our current understanding of central sensitization during the clinical assessment of patients with musculoskeletal pain, manual therapists can apply the science of nociceptive and pain processing neurophysiology to the practice of manual therapy. The diagnosis/assessment of central sensitization in individual patients with musculoskeletal pain is not straightforward; however, manual therapists can use information obtained from the medical diagnosis, combined with the medical history of the patient, as well as the clinical examination and the analysis of the treatment response, in order to recognize central sensitization. The clinical examination used to recognize central sensitization entails the distinction between primary and secondary hyperalgesia. Copyright 2009 Elsevier Ltd. All rights reserved.

  14. Decision analysis in clinical cardiology: When is coronary angiography required in aortic stenosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Georgeson, S.; Meyer, K.B.; Pauker, S.G.

    1990-03-15

    Decision analysis offers a reproducible, explicit approach to complex clinical decisions. It consists of developing a model, typically a decision tree, that separates choices from chances and that specifies and assigns relative values to outcomes. Sensitivity analysis allows exploration of alternative assumptions. Cost-effectiveness analysis shows the relation between dollars spent and improved health outcomes achieved. In a tutorial format, this approach is applied to the decision whether to perform coronary angiography in a patient who requires aortic valve replacement for critical aortic stenosis.

  15. Evaluation of FTIR spectroscopy as diagnostic tool for colorectal cancer using spectral analysis

    NASA Astrophysics Data System (ADS)

    Dong, Liu; Sun, Xuejun; Chao, Zhang; Zhang, Shiyun; Zheng, Jianbao; Gurung, Rajendra; Du, Junkai; Shi, Jingsen; Xu, Yizhuang; Zhang, Yuanfu; Wu, Jinguang

    2014-03-01

    The aim of this study was to confirm FTIR spectroscopy as a diagnostic tool for colorectal cancer. 180 freshly removed colorectal samples were collected from 90 patients for spectral analysis. Ratios of spectral intensity and relative intensities (normalized to I1460) were calculated. Principal component analysis (PCA) and Fisher's discriminant analysis (FDA) were applied to distinguish malignant from normal tissue. The FTIR parameters of colorectal cancer and normal tissues differed according to the contents or configurations of nucleic acids, proteins, lipids and carbohydrates. Nitrogen-containing components, water, protein and nucleic acid were significantly increased in the malignant group. Six parameters were selected as independent factors for the discriminant functions. The sensitivity of FTIR for diagnosing colorectal cancer was 96.6% by discriminant analysis. Our study demonstrates that FTIR can be a useful technique for the detection of colorectal cancer and may be applied in clinical colorectal cancer diagnosis.
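
    The PCA-plus-discriminant pipeline with leave-one-out cross-validation can be sketched with scikit-learn; the "spectra" below are random stand-ins for the FTIR data, and the number of components is arbitrary.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(5)
    X = rng.normal(size=(60, 200))             # 60 "spectra" x 200 wavenumber points
    X[:30] += 0.4                              # crude built-in class separation
    y = np.array([1] * 30 + [0] * 30)          # 1 = malignant, 0 = normal

    clf = make_pipeline(PCA(n_components=6), LinearDiscriminantAnalysis())
    acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
    print(f"leave-one-out accuracy: {acc:.2f}")
    ```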

  16. Cost/benefit analysis of advanced materials technology candidates for the 1980's, part 2

    NASA Technical Reports Server (NTRS)

    Dennis, R. E.; Maertins, H. F.

    1980-01-01

    Cost/benefit analyses that evaluate advanced material technology projects considered for general aviation and turboprop commuter aircraft, through estimated life-cycle costs, direct operating costs, and development costs, are discussed. Specifically addressed are the selection of technologies to be evaluated; the development of property goals; the assessment of candidate technologies on typical engines and aircraft; sensitivity analysis of the effect of changes in property goals on performance and economics; cost and risk analysis for each technology; and the ranking of each technology by relative value. The cost/benefit analysis was applied to a domestic, nonrevenue-producing, business-type jet aircraft configured with two TFE731-3 turbofan engines, and to a domestic, nonrevenue-producing, business-type turboprop aircraft configured with two TPE331-10 turboprop engines. In addition, a cost/benefit analysis was applied to a commercial turboprop aircraft configured with a growth version of the TPE331-10.

  17. Gene flow analysis method, the D-statistic, is robust in a wide parameter space.

    PubMed

    Zheng, Yichen; Janke, Axel

    2018-01-08

    We evaluated the sensitivity of the D-statistic, a parsimony-like method widely used to detect gene flow between closely related species. This method has been applied to a variety of taxa with a wide range of divergence times; however, its parameter space, and thus its applicability to a wide taxonomic range, has not been systematically studied. Divergence time, population size, time of gene flow, distance of outgroup and number of loci were examined in a sensitivity analysis. The sensitivity study shows that the primary determinant of the D-statistic is the relative population size, i.e. the population size scaled by the number of generations since divergence. This is consistent with the fact that the main confounding factor in gene flow detection is incomplete lineage sorting, which dilutes the signal. The sensitivity of the D-statistic is also affected by the direction of gene flow and by the size and number of loci. In addition, we examined the ability of the f-statistics to estimate the fraction of a genome affected by gene flow; while these statistics are difficult to apply to practical questions in biology owing to lack of knowledge of when the gene flow happened, they can be used to compare datasets with identical or similar demographic backgrounds. The D-statistic, as a method to detect gene flow, is robust against a wide range of genetic distances (divergence times) but is sensitive to population size. The D-statistic should only be applied with critical reservation to taxa where population sizes are large relative to branch lengths in generations.
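
    For orientation, the ABBA-BABA arithmetic behind the D-statistic is a one-liner; the site-pattern counts below are hypothetical. D near zero is consistent with incomplete lineage sorting alone, while an excess of ABBA or BABA site patterns suggests gene flow.

    ```python
    abba, baba = 4210, 3650                # hypothetical genome-wide site-pattern counts
    D = (abba - baba) / (abba + baba)      # D-statistic; in practice assessed with a
    print(f"D = {D:.4f}")                  # block jackknife for significance
    ```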

  18. Computational aspects of sensitivity calculations in linear transient structural analysis. Ph.D. Thesis - Virginia Polytechnic Inst. and State Univ.

    NASA Technical Reports Server (NTRS)

    Greene, William H.

    1990-01-01

    A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed mode approach resulted in very poor approximations of the stress sensitivities. Almost all of the original modes were required for an accurate sensitivity and for small numbers of modes, the accuracy was extremely poor. To overcome this poor accuracy, two semi-analytical techniques were developed. The first technique accounts for the change in eigenvectors through approximate eigenvector derivatives. The second technique applies the mode acceleration method of transient analysis to the sensitivity calculations. Both result in accurate values of the stress sensitivities with a small number of modes and much lower computational costs than if the vibration modes were recalculated and then used in an overall finite difference method.

  19. Automated Optimization of Potential Parameters

    PubMed Central

    Di Pierro, Michele; Elber, Ron

    2013-01-01

    An algorithm and software to refine parameters of empirical energy functions according to condensed phase experimental measurements are discussed. The algorithm is based on sensitivity analysis and local minimization of the differences between experiment and simulation as a function of potential parameters. It is illustrated for a toy problem of alanine dipeptide and is applied to folding of the peptide WAAAH. The helix fraction is highly sensitive to the potential parameters while the slope of the melting curve is not. The sensitivity variations make it difficult to satisfy both observations simultaneously. We conjecture that there is no set of parameters that reproduces experimental melting curves of short peptides that are modeled with the usual functional form of a force field. PMID:24015115

  20. Adjoint equations and analysis of complex systems: Application to virus infection modelling

    NASA Astrophysics Data System (ADS)

    Marchuk, G. I.; Shutyaev, V.; Bocharov, G.

    2005-12-01

    Recent development of applied mathematics is characterized by ever-increasing attempts to apply modelling and computational approaches across various areas of the life sciences. The need for a rigorous analysis of complex system dynamics in immunology has been recognized for more than three decades. The aim of the present paper is to draw attention to the method of adjoint equations. The methodology makes it possible to obtain information about physical processes and to examine the sensitivity of complex dynamical systems. This provides a basis for a better understanding of the causal relationships between the immune system's performance and its parameters, and helps to improve experimental design in the solution of applied problems. We show how the adjoint equations can be used to explain the changes in hepatitis B virus infection dynamics between individual patients.

  1. Gait cycle analysis: parameters sensitive for functional evaluation of peripheral nerve recovery in rat hind limbs.

    PubMed

    Rui, Jing; Runge, M Brett; Spinner, Robert J; Yaszemski, Michael J; Windebank, Anthony J; Wang, Huan

    2014-10-01

    Video-assisted gait kinetics analysis has been a sensitive method to assess rat sciatic nerve function after injury and repair. However, in conduit repair of sciatic nerve defects, previously reported kinematic measurements failed to be a sensitive indicator because of the inferior recovery and inevitable joint contracture. This study aimed to explore the role of physiotherapy in mitigating joint contracture and to seek motion analysis indices that can sensitively reflect motor function. Data were collected from 26 rats that underwent sciatic nerve transection and conduit repair. Regular postoperative physiotherapy was applied. Parameters regarding step length, phase duration, and ankle angle were acquired and analyzed from video recording of gait kinetics preoperatively and at regular postoperative intervals. Stride length ratio (step length of uninjured foot/step length of injured foot), percent swing of the normal paw (percentage of the total stride duration when the uninjured paw is in the air), propulsion angle (toe-off angle subtracted by midstance angle), and clearance angle (ankle angle change from toe off to midswing) decreased postoperatively comparing with baseline values. The gradual recovery of these measurements had a strong correlation with the post-nerve repair time course. Ankle joint contracture persisted despite rigorous physiotherapy. Parameters acquired from a 2-dimensional motion analysis system, that is, stride length ratio, percent swing of the normal paw, propulsion angle, and clearance angle, could sensitively reflect nerve function impairment and recovery in the rat sciatic nerve conduit repair model despite the existence of joint contractures.

  2. Optimized and validated flow-injection spectrophotometric analysis of topiramate, piracetam and levetiracetam in pharmaceutical formulations.

    PubMed

    Hadad, Ghada M; Abdel-Salam, Randa A; Emara, Samy

    2011-12-01

    Application of a sensitive and rapid flow injection analysis (FIA) method for the determination of topiramate, piracetam, and levetiracetam in pharmaceutical formulations has been investigated. The method is based on the reaction with ortho-phthalaldehyde and 2-mercaptoethanol in a basic buffer and measurement of absorbance at 295 nm under flow conditions. Variables affecting the determination, such as sample injection volume, pH, ionic strength, reagent concentrations, flow rate of reagent and other FIA parameters, were optimized to produce the most sensitive and reproducible results using a quarter-fraction factorial design for five factors at two levels. The method has also been optimized and fully validated in terms of linearity and range, limit of detection and quantitation, precision, selectivity and accuracy. The method was successfully applied to the analysis of pharmaceutical preparations.
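
    The "quarter-fraction factorial design for five factors at two levels" is a 2^(5-2) design; a sketch of how such a design can be generated is shown below, using the conventional approach of aliasing two extra factors onto interaction columns (the generators D = AB and E = AC are one common but arbitrary choice, not necessarily the authors').

    ```python
    import itertools
    import numpy as np

    base = np.array(list(itertools.product([-1, 1], repeat=3)))  # full 2^3 design in A, B, C
    D = base[:, 0] * base[:, 1]                # generator D = AB
    E = base[:, 0] * base[:, 2]                # generator E = AC
    design = np.column_stack([base, D, E])     # 8 runs x 5 factors, coded -1/+1
    print(design)
    ```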

  3. Determination of methylamines in air using activated charcoal traps and gas chromatographic analysis with an alkali flame detector (AFD)

    NASA Astrophysics Data System (ADS)

    Fuselli, Sergio; Benedetti, Giorgio; Mastrangeli, Renato

    A method is described for trapping and analysing airborne methylamines (MMA, DMA and TMA) by means of 20/35 mesh activated charcoal traps and subsequent GLSC analysis of the collected sample using 0.1 N NaOH aqueous solution. The method may be applied to monitoring methylamines in air in industrial areas with an Alkali Flame Detector; sensitivities of approximately 0.005 ppmv are reached for each of the three methylamines analysed. Trapping efficiency is compared with that of Tenax GC 60/80 mesh and 60/80 Carbopack B, which use thermal desorption of air samples before GLSC analysis. The Tenax GC trap method enables recovery of TMA only, with a sensitivity of 0.0001 ppmv. Recovery obtained with 60/80 Carbopack B traps is practically zero.

  4. Sensitivity curves for searches for gravitational-wave backgrounds

    NASA Astrophysics Data System (ADS)

    Thrane, Eric; Romano, Joseph D.

    2013-12-01

    We propose a graphical representation of detector sensitivity curves for stochastic gravitational-wave backgrounds that takes into account the increase in sensitivity that comes from integrating over frequency in addition to integrating over time. This method is valid for backgrounds that have a power-law spectrum in the analysis band. We call these graphs “power-law integrated curves.” For simplicity, we consider cross-correlation searches for unpolarized and isotropic stochastic backgrounds using two or more detectors. We apply our method to construct power-law integrated sensitivity curves for second-generation ground-based detectors such as Advanced LIGO, space-based detectors such as LISA and the Big Bang Observer, and timing residuals from a pulsar timing array. The code used to produce these plots is available at https://dcc.ligo.org/LIGO-P1300115/public for researchers interested in constructing similar sensitivity curves.
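
    A power-law integrated curve can be sketched as follows: for each spectral index β, solve for the power-law amplitude that gives unit signal-to-noise ratio after integrating over frequency and observation time, then take the envelope over β. The effective noise spectrum below is a made-up toy shape, not a real detector curve.

    ```python
    import numpy as np

    f = np.logspace(1, 3, 400)                              # Hz
    f_ref, T = 100.0, 3.15e7                                # reference frequency; ~1 yr in s
    omega_n = 1e-9 * ((f / 50) ** -4 + 1 + (f / 300) ** 2)  # toy effective noise spectrum

    curves = []
    for beta in np.linspace(-8, 8, 65):
        shape = (f / f_ref) ** beta
        # SNR^2 = 2 T A^2 * integral[(shape/omega_n)^2] df; solve for A at SNR = 1
        A = 1.0 / np.sqrt(2 * T * np.trapz((shape / omega_n) ** 2, f))
        curves.append(A * shape)

    pi_curve = np.max(curves, axis=0)          # envelope over spectral indices
    print(f"best sensitivity {pi_curve.min():.2e} at {f[np.argmin(pi_curve)]:.0f} Hz")
    ```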

  5. Sensitivity analysis of navy aviation readiness based sparing model

    DTIC Science & Technology

    2017-09-01

    variability. Figure 4 (research design flowchart) lays out the four steps of the methodology, starting in the upper left-hand... as a function of changes in key inputs. We develop NAVARM Experimental Designs (NED), a computational tool created by applying a state-of-the-art... experimental design to the NAVARM model. Statistical analysis of the resulting data identifies the most influential cost factors. Those are, in order of

  6. Spectrophotometric studies of reactions between pseudo-ephedrine with different inorganic and organic reagents and its micro-determination in pure and in pharmaceutical preparations

    NASA Astrophysics Data System (ADS)

    Zayed, M. A.; El-Rasheedy, El-Gazy A.

    2012-03-01

    Two simple, sensitive, cheap and reliable spectrophotometric methods are suggested for the micro-determination of pseudoephedrine in its pure form and in a pharmaceutical preparation (Sinofree Tablets). The first depends on the reaction of the drug with a sensitive inorganic reagent, the molybdate anion, in aqueous media via an ion-pair formation mechanism. The second depends on the reaction of the drug with a π-acceptor reagent, DDQ, in non-aqueous media via formation of a charge-transfer complex. These reactions were studied under various conditions and the optimum parameters were selected. Under the proper conditions, the suggested procedures were successfully applied to the micro-determination of pseudoephedrine in pure form and in Sinofree Tablets without interference from excipients. The values of SD, RSD, recovery %, LOD, LOQ and Sandell sensitivity reflect the high accuracy and precision of the applied procedures. The results were compared with data obtained by an official method, showing agreement with the DDQ procedure results while indicating the higher accuracy of the molybdate data. The suggested procedures are therefore now successfully applied in routine analysis of this drug in its pharmaceutical formulation (Sinofree) at the Saudi Arabian Pharmaceutical Company (SPIMACO) in Boridah El-Qaseem, Saudi Arabia, instead of the imported kits previously used.

  7. Application of humidity-controlled dynamic mechanical analysis (DMA-RH) to moisture-sensitive edible casein films for use in food packaging

    USDA-ARS?s Scientific Manuscript database

    Protein-based and other hydrophilic thin films are promising materials for the manufacture of edible food packaging and other food and non-food applications. Calcium caseinate (CaCas) films are highly hygroscopic and physical characterization under broad environmental conditions is critical to appli...

  8. Renewable Energy Deployment in Colorado and the West: Extended Policy Sensitivities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrows, Clayton P.; Stoll, Brady; Mooney, Meghan E.

    The Resource Planning Model is a capacity expansion model designed for a regional power system, such as a utility service territory, state, or balancing authority. We apply a geospatial analysis to Resource Planning Model renewable energy capacity expansion results to understand the likelihood of renewable development on various lands within Colorado.

  9. Analysis of the Sensitivity and Uncertainty in 2-Stage Clonal Growth Models for Formaldehyde with Relevance to Other Biologically-Based Dose Response (BBDR) Models

    EPA Science Inventory

    The National Center for Environmental Assessment (NCEA) has conducted and supported research addressing uncertainties in 2-stage clonal growth models for cancer as applied to formaldehyde. In this report, we summarized publications resulting from this research effort, discussed t...

  10. Evaluation of a diagnostic flow chart applying medical thoracoscopy, adenosine deaminase and T-SPOT.TB in diagnosis of tuberculous pleural effusion.

    PubMed

    He, Y; Zhang, W; Huang, T; Wang, X; Wang, M

    2015-10-01

    To evaluate a diagnostic flow chart applying medical thoracoscopy (MT), adenosine deaminase (ADA) and T-SPOT.TB in the diagnosis of tuberculous pleural effusion (TPE) in a high TB-burden country. 136 patients with pleural effusion (PE) were enrolled and divided into TPE and non-TPE groups. MT (histology), PE ADA and T-SPOT.TB were conducted on all patients. ROC analysis was performed to find the best cut-off value of PE ADA for detection of TPE. The diagnostic flow chart applying MT, ADA and T-SPOT.TB was evaluated for overcoming the limitations of each individual diagnostic method. ROC analysis showed that the best cut-off value of PE ADA was 30 U/L. The sensitivity and specificity of these tests were calculated respectively to be: 71.4% (58.5-81.6%) and 100% (95.4-100.0%) for MT, 92.9% (83.0-97.2%) and 68.8% (57.9-77.9%) for T-SPOT.TB, and 80.0% (69.6-88.1%) and 92.9% (82.7-98.0%) for PE ADA. The sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, positive predictive value and negative predictive value of the diagnostic flow chart were 96.4% (87.9-99.0%), 96.3% (89.6-98.7%), 25.714, 0.037, 97.4 and 94.9, respectively. The diagnostic flow chart applying MT, ADA and T-SPOT.TB is an accurate and rapid method for the detection of TPE.
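
    The arithmetic connecting the reported accuracy figures is standard; the sketch below computes likelihood ratios from sensitivity and specificity, and predictive values via Bayes' rule under an assumed TPE prevalence (the per-group counts are not reproduced here, so the prevalence is illustrative).

    ```python
    se, sp = 0.964, 0.963                  # reported flow-chart sensitivity/specificity
    lr_pos = se / (1 - sp)                 # positive likelihood ratio
    lr_neg = (1 - se) / sp                 # negative likelihood ratio
    prev = 0.41                            # assumed TPE prevalence (illustrative only)
    ppv = se * prev / (se * prev + (1 - sp) * (1 - prev))
    npv = sp * (1 - prev) / (sp * (1 - prev) + (1 - se) * prev)
    print(f"LR+ {lr_pos:.1f}, LR- {lr_neg:.3f}, PPV {ppv:.1%}, NPV {npv:.1%}")
    ```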

  11. Detection of nasopharyngeal cancer using confocal Raman spectroscopy and genetic algorithm technique

    NASA Astrophysics Data System (ADS)

    Li, Shao-Xin; Chen, Qiu-Yan; Zhang, Yan-Jiao; Liu, Zhi-Ming; Xiong, Hong-Lian; Guo, Zhou-Yi; Mai, Hai-Qiang; Liu, Song-Hao

    2012-12-01

    Raman spectroscopy (RS) and a genetic algorithm (GA) were applied to distinguish nasopharyngeal cancer (NPC) from normal nasopharyngeal tissue. A total of 225 Raman spectra were acquired from 120 tissue sites of 63 nasopharyngeal patients: 56 Raman spectra from normal tissue and 169 Raman spectra from NPC tissue. The GA integrated with linear discriminant analysis (LDA) was developed to differentiate NPC from normal tissue according to spectral variables in the selected regions of 792-805, 867-880, 996-1009, 1086-1099, 1288-1304, 1663-1670, and 1742-1752 cm-1, related to proteins, nucleic acids and lipids of the tissue. The GA-LDA algorithm with the leave-one-out cross-validation method provides a sensitivity of 69.2% and specificity of 100%. These results are better than those of principal component analysis applied to the same Raman dataset of nasopharyngeal tissue, which gave a sensitivity of 63.3% and specificity of 94.6%. This demonstrates that Raman spectroscopy associated with the GA-LDA diagnostic algorithm has enormous potential for detecting and diagnosing nasopharyngeal cancer.
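
    A minimal sketch of the classification step under stated assumptions: the GA wavenumber selection is omitted, scikit-learn is assumed available, and random placeholder features stand in for intensities in the selected Raman regions.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import LeaveOneOut

      # Placeholder features standing in for intensities in the 7 GA-selected
      # wavenumber regions; y: 0 = normal, 1 = NPC (labels are synthetic).
      rng = np.random.default_rng(0)
      X = rng.normal(size=(225, 7))
      y = rng.integers(0, 2, size=225)

      # Leave-one-out cross-validation of the LDA classifier.
      pred = np.empty_like(y)
      for train, test in LeaveOneOut().split(X):
          pred[test] = LinearDiscriminantAnalysis().fit(X[train], y[train]).predict(X[test])

      tp = np.sum((pred == 1) & (y == 1)); fn = np.sum((pred == 0) & (y == 1))
      tn = np.sum((pred == 0) & (y == 0)); fp = np.sum((pred == 1) & (y == 0))
      print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))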

  12. Large Amplitude Oscillatory Shear (LAOS) of Acrylic Emulsion-Based Pressure Sensitive Adhesives (PSAs)

    NASA Astrophysics Data System (ADS)

    Zhang, Sipei; Nakatani, Alan; Griffith, William

    Large Amplitude Oscillatory Shear (LAOS) testing has recently attracted renewed interest in the rheological community. It is a very useful tool for probing the viscoelastic response of materials in the non-linear regime. Much of the discussion of polymers in the LAOS field has focused on melts in or near the terminal flow regime. Here we present a LAOS study conducted on a commercial rheometer for acrylic emulsion-based pressure sensitive adhesive (PSA) films in the plateau regime. The films behaved qualitatively similarly over an oscillation frequency range of 0.5-5 rad/s. From Fourier transform analysis, the fifth or even the seventh order harmonic could be observed at large applied strains. From stress decomposition analysis or Lissajous curves, inter-cycle elastic softening, or type I behavior, was observed for all films as strain increased, while intra-cycle strain hardening occurred at strains in the LAOS regime. Overall, as acid content increased, the trend in elasticity under large applied strains was found to agree very well with the trend in cohesive strength of the films.

  13. Highly sensitive protein detection by combination of atomic force microscopy fishing with charge generation and mass spectrometry analysis.

    PubMed

    Ivanov, Yuri D; Pleshakova, Tatyana; Malsagova, Krystina; Kozlov, Andrey; Kaysheva, Anna; Kopylov, Arthur; Izotov, Alexander; Andreeva, Elena; Kanashenko, Sergey; Usanov, Sergey; Archakov, Alexander

    2014-10-01

    An approach combining atomic force microscopy (AFM) fishing and mass spectrometry (MS) analysis to detect proteins at ultra-low concentrations is proposed. Fishing of protein molecules onto a highly oriented pyrolytic graphite surface coated with polytetrafluoroethylene film was carried out with and without application of an external electric field. The fished molecules were then visualized by AFM and identified by MS. It was found that injection of the solution leads to charge generation in the solution, inducing an electric potential within the measuring cell. It was demonstrated that, without an external electric field, fishing is efficient when the diluted protein solution is injected rapidly, as opposed to slow fluid input. The high sensitivity of this method was demonstrated by detection of human serum albumin and human cytochrome b5 in 10^-17 to 10^-18 M water solutions. It was shown that an external negative voltage applied to the highly oriented pyrolytic graphite hinders protein fishing. The efficiency of fishing with an external positive voltage was similar to that obtained without applying any voltage. © 2014 FEBS.

  14. A single use electrochemical sensor based on biomimetic nanoceria for the detection of wine antioxidants.

    PubMed

    Andrei, Veronica; Sharpe, Erica; Vasilescu, Alina; Andreescu, Silvana

    2016-08-15

    We report the development and characterization of a disposable single-use electrochemical sensor based on the oxidase-like activity of nanoceria particles for the detection of phenolic antioxidants. The use of nanoceria in the sensor design enables oxidation of phenolic compounds, particularly those with ortho-dihydroxybenzene functionality, to their corresponding quinones at the surface of a screen printed carbon electrode. Detection is carried out by electrochemical reduction of the resulting quinone at a low applied potential of -0.1 V vs. the Ag/AgCl electrode. The sensor was optimized and characterized with respect to particle loading, applied potential, response time, detection limit, linear concentration range and sensitivity. The method enabled rapid detection of common phenolic antioxidants including caffeic acid, gallic acid and quercetin in the µM concentration range, and demonstrated good functionality for the analysis of antioxidant content in several wine samples. The intrinsic oxidase-like activity of nanoceria shows promise as a robust tool for sensitive and cost effective analysis of antioxidants using electrochemical detection. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Interval analysis of interictal EEG: pathology of the alpha rhythm in focal epilepsy

    NASA Astrophysics Data System (ADS)

    Pyrzowski, Jan; Siemiński, Mariusz; Sarnowska, Anna; Jedrzejczak, Joanna; Nyka, Walenty M.

    2015-11-01

    The contemporary use of interictal scalp electroencephalography (EEG) in the context of focal epilepsy workup relies on the visual identification of interictal epileptiform discharges. The high-specificity performance of this marker comes, however, at the cost of only moderate sensitivity. Zero-crossing interval analysis is an alternative to Fourier analysis for the assessment of the rhythmic component of EEG signals. We applied this method to standard EEG recordings of 78 patients divided into 4 subgroups: temporal lobe epilepsy (TLE), frontal lobe epilepsy (FLE), psychogenic nonepileptic seizures (PNES) and nonepileptic patients with headache. Interval-analysis-based markers were capable of effectively discriminating patients with epilepsy from those in control subgroups (AUC~0.8), with diagnostic sensitivity potentially exceeding that of visual analysis. The identified putative epilepsy-specific markers were sensitive to the properties of the alpha rhythm and displayed weak or non-significant dependencies on the number of antiepileptic drugs (AEDs) taken by the patients. Significant AED-related effects were concentrated in the theta interval range, and an associated marker allowed for identification of patients on AED polytherapy (AUC~0.9). Interval analysis may thus potentially increase the diagnostic yield of interictal scalp EEG. Our findings point to the possible existence of alpha rhythm abnormalities in patients with epilepsy.
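
    As a toy illustration of the interval method, assuming NumPy: the signal is reduced to the durations between successive zero crossings, and the distribution of those durations summarizes the rhythmic content (a 10 Hz alpha rhythm contributes half-cycle intervals near 50 ms). The signal below is synthetic, not an EEG recording, and the binning is an assumption rather than the authors' exact markers.

      import numpy as np

      fs = 250.0                              # sampling rate in Hz (assumed)
      t = np.arange(0, 20, 1 / fs)
      rng = np.random.default_rng(1)
      # Synthetic "EEG": a 10 Hz alpha-band oscillation plus broadband noise.
      x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)

      # Zero crossings: indices where the sign of the signal changes.
      crossings = np.flatnonzero(np.signbit(x[:-1]) != np.signbit(x[1:]))
      intervals = np.diff(crossings) / fs     # durations between crossings (s)

      # Interval histogram: alpha activity piles up near the 50 ms bin.
      hist, edges = np.histogram(intervals, bins=np.arange(0, 0.25, 0.01))
      for n, lo in zip(hist, edges[:-1]):
          print(f"{1000 * lo:3.0f}-{1000 * lo + 10:3.0f} ms: {n}")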

  16. Can feedback analysis be used to uncover the physical origin of climate sensitivity and efficacy differences?

    NASA Astrophysics Data System (ADS)

    Rieger, Vanessa S.; Dietmüller, Simone; Ponater, Michael

    2017-10-01

    Different strengths and types of radiative forcings cause variations in climate sensitivities and efficacies. To relate these changes to their physical origin, this study tests whether a feedback analysis is a suitable approach. To this end, we apply the partial radiative perturbation method. Combining the forward and backward calculations turns out to be indispensable to ensure the additivity of feedbacks and to yield a closed forcing-feedback balance at the top of the atmosphere. For a set of CO2-forced simulations, the climate sensitivity changes with increasing forcing. The albedo, cloud, and combined water vapour and lapse rate feedbacks are found to be responsible for the variations in the climate sensitivity. An O3-forced simulation (induced by enhanced NOx and CO surface emissions) causes a smaller efficacy than a CO2-forced simulation with a similar magnitude of forcing. We find that the Planck, albedo and most likely the cloud feedback are responsible for this effect. Reducing the radiative forcing impedes the statistical separability of feedbacks. We additionally discuss formal inconsistencies between the common ways of comparing climate sensitivities and feedbacks. Moreover, methodical recommendations for future work are given.

  17. Critical factors determining the quantification capability of matrix-assisted laser desorption/ionization-time-of-flight mass spectrometry

    PubMed Central

    Wang, Chia-Chen; Lai, Yin-Hung; Ou, Yu-Meng; Chang, Huan-Tsung; Wang, Yi-Sheng

    2016-01-01

    Quantitative analysis with mass spectrometry (MS) is important but challenging. Matrix-assisted laser desorption/ionization (MALDI) coupled with time-of-flight (TOF) MS offers superior sensitivity, resolution and speed, but such techniques have numerous disadvantages that hinder quantitative analyses. This review summarizes essential obstacles to analyte quantification with MALDI-TOF MS, including the complex ionization mechanism of MALDI, sensitive characteristics of the applied electric fields and the mass-dependent detection efficiency of ion detectors. General quantitative ionization and desorption interpretations of ion production are described. Important instrument parameters and available methods of MALDI-TOF MS used for quantitative analysis are also reviewed. This article is part of the themed issue ‘Quantitative mass spectrometry’. PMID:27644968

  18. Sensitivity analysis of conservative and reactive stream transient storage models applied to field data from multiple-reach experiments

    USGS Publications Warehouse

    Gooseff, M.N.; Bencala, K.E.; Scott, D.T.; Runkel, R.L.; McKnight, Diane M.

    2005-01-01

    The transient storage model (TSM) has been widely used in studies of stream solute transport and fate, with an increasing emphasis on reactive solute transport. In this study we perform sensitivity analyses of a conservative TSM and two different reactive solute transport models (RSTM), one that includes first-order decay in the stream and the storage zone, and a second that considers sorption of a reactive solute on streambed sediments. Two previously analyzed data sets are examined with a focus on the reliability of these RSTMs in characterizing stream and storage zone solute reactions. Sensitivities of simulations to parameters within and among reaches, parameter coefficients of variation, and correlation coefficients are computed and analyzed. Our results indicate that (1) simulated values have the greatest sensitivity to parameters within the same reach, (2) simulated values are also sensitive to parameters in reaches immediately upstream and downstream (inter-reach sensitivity), (3) simulated values have decreasing sensitivity to parameters in reaches farther downstream, and (4) in-stream reactive solute data provide adequate information to resolve effective storage zone reaction parameters, given the model formulations. Simulations of reactive solutes are shown to be equally sensitive to transport parameters and effective reaction parameters of the model, evidence of the control of physical transport on reactive solute dynamics. Similar to conservative transport analysis, reactive solute simulations appear to be most sensitive to data collected during the rising and falling limbs of the concentration breakthrough curve. © 2005 Elsevier Ltd. All rights reserved.

  19. A highly sensitive method for analysis of 7-dehydrocholesterol for the study of Smith-Lemli-Opitz syndrome

    PubMed Central

    Liu, Wei; Xu, Libin; Lamberson, Connor; Haas, Dorothea; Korade, Zeljka; Porter, Ned A.

    2014-01-01

    We describe a highly sensitive method for the detection of 7-dehydrocholesterol (7-DHC), the biosynthetic precursor of cholesterol, based on its reactivity with 4-phenyl-1,2,4-triazoline-3,5-dione (PTAD) in a Diels-Alder cycloaddition reaction. Samples of biological tissues and fluids with added deuterium-labeled internal standards were derivatized with PTAD and analyzed by LC-MS. This protocol permits fast processing of samples, short chromatography times, and high sensitivity. We applied this method to the analysis of cells, blood, and tissues from several sources, including human plasma. Another innovative aspect of this study is that it provides a reliable and highly reproducible measurement of 7-DHC in 7-dehydrocholesterol reductase (Dhcr7)-HET mouse (a model for Smith-Lemli-Opitz syndrome) samples, showing regional differences in the brain tissue. We found that the levels of 7-DHC are consistently higher in Dhcr7-HET mice than in controls, with the spinal cord and peripheral nerve showing the biggest differences. In addition to 7-DHC, sensitive analysis of desmosterol in tissues and blood was also accomplished with this PTAD method by assaying adducts formed from the PTAD “ene” reaction. The method reported here may provide a highly sensitive and high throughput way to identify at-risk populations having errors in cholesterol biosynthesis. PMID:24259532

  20. Lunar Exploration Architecture Level Key Drivers and Sensitivities

    NASA Technical Reports Server (NTRS)

    Goodliff, Kandyce; Cirillo, William; Earle, Kevin; Reeves, J. D.; Shyface, Hilary; Andraschko, Mark; Merrill, R. Gabe; Stromgren, Chel; Cirillo, Christopher

    2009-01-01

    Strategic level analysis of the integrated behavior of lunar transportation and lunar surface systems architecture options is performed to assess the benefit, viability, affordability, and robustness of system design choices. This analysis employs both deterministic and probabilistic modeling techniques so that the extent of potential future uncertainties associated with each option is properly characterized. The results of these analyses are summarized in a predefined set of high-level Figures of Merit (FOMs) so as to provide senior NASA Constellation Program (CxP) and Exploration Systems Mission Directorate (ESMD) management with pertinent information to better inform strategic level decision making. The strategic level exploration architecture model is designed to perform analysis at as high a level as possible while still capturing those details that have major impacts on system performance. The strategic analysis methodology focuses on integrated performance, affordability, and risk analysis, and captures the linkages and feedbacks among these three areas. Each of these results leads into the determination of the high-level FOMs. This strategic level analysis methodology has previously been applied to Space Shuttle and International Space Station assessments and is now being applied to the development of the Constellation Program point-of-departure lunar architecture. This paper provides an overview of the strategic analysis methodology and the lunar exploration architecture analyses to date. In studying these analysis results, the strategic analysis team has identified and characterized key drivers affecting the integrated architecture behavior. These key drivers include inclusion of a cargo lander, mission rate, mission location, fixed-versus-variable costs/return on investment, and the requirement for probabilistic analysis. Results of sensitivity analysis performed on lunar exploration architecture scenarios are also presented.

  1. Quantifying the importance of spatial resolution and other factors through global sensitivity analysis of a flood inundation model

    NASA Astrophysics Data System (ADS)

    Thomas Steven Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten

    2016-11-01

    Where high-resolution topographic data are available, modelers are faced with the decision of whether it is better to spend computational resource on resolving topography at finer resolutions or on running more simulations to account for various uncertain input factors (e.g., model parameters). In this paper we apply global sensitivity analysis to explore how influential the choice of spatial resolution is when compared to uncertainties in the Manning's friction coefficient parameters, the inflow hydrograph, and those stemming from the coarsening of topographic data used to produce Digital Elevation Models (DEMs). We apply the hydraulic model LISFLOOD-FP to produce several temporally and spatially variable model outputs that represent different aspects of flood inundation processes, including flood extent, water depth, and time of inundation. We find that the most influential input factor for flood extent predictions changes during the flood event, starting with the inflow hydrograph during the rising limb before switching to the channel friction parameter during peak flood inundation, and finally to the floodplain friction parameter during the drying phase of the flood event. Spatial resolution and uncertainty introduced by resampling topographic data to coarser resolutions are much more important for water depth predictions, which are also sensitive to different input factors spatially and temporally. Our findings indicate that the sensitivity of LISFLOOD-FP predictions is more complex than previously thought. Consequently, the input factors that modelers should prioritize will differ depending on the model output assessed and on where and when that output is most relevant.
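
    The variance-based workflow described here can be sketched compactly, assuming the SALib package with its classic saltelli/sobol interface; the factor names and bounds are invented for illustration, and a toy algebraic function stands in for a LISFLOOD-FP simulation.

      import numpy as np
      from SALib.sample import saltelli
      from SALib.analyze import sobol

      # Illustrative factors standing in for the paper's inputs (names and bounds
      # are assumptions, not values from the study).
      problem = {
          "num_vars": 3,
          "names": ["n_channel", "n_floodplain", "inflow_scale"],
          "bounds": [[0.02, 0.06], [0.03, 0.12], [0.8, 1.2]],
      }

      X = saltelli.sample(problem, 1024)   # Sobol' quasi-random design

      def toy_model(x):
          # Stand-in for a hydraulic model: returns a scalar "flood extent" proxy.
          nc, nf, q = x
          return q ** 2 / nc + 0.3 * np.sin(20 * nf) * q

      Y = np.apply_along_axis(toy_model, 1, X)
      Si = sobol.analyze(problem, Y)       # first-order and total Sobol' indices
      for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
          print(f"{name:>13}: S1={s1:.2f}  ST={st:.2f}")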

  2. Bayesian sensitivity analysis of bifurcating nonlinear models

    NASA Astrophysics Data System (ADS)

    Becker, W.; Worden, K.; Rowson, J.

    2013-01-01

    Sensitivity analysis allows one to investigate how changes in input parameters to a system affect the output. When computational expense is a concern, metamodels such as Gaussian processes can offer considerable computational savings over Monte Carlo methods, albeit at the expense of introducing a data modelling problem. In particular, Gaussian processes assume a smooth, non-bifurcating response surface. This work highlights a recent extension to Gaussian processes which uses a decision tree to partition the input space into homogeneous regions, and then fits separate Gaussian processes to each region. In this way, bifurcations can be modelled at region boundaries and different regions can have different covariance properties. To test this method, both the treed and standard methods were applied to the bifurcating response of a Duffing oscillator and a bifurcating FE model of a heart valve. It was found that the treed Gaussian process provides a practical way of performing uncertainty and sensitivity analysis on large, potentially bifurcating models, which cannot be dealt with by using a single GP, although how to manage bifurcation boundaries that are not parallel to coordinate axes remains an open problem.

  3. Hierarchical Nanogold Labels to Improve the Sensitivity of Lateral Flow Immunoassay

    NASA Astrophysics Data System (ADS)

    Serebrennikova, Kseniya; Samsonova, Jeanne; Osipov, Alexander

    2018-06-01

    Lateral flow immunoassay (LFIA) is a widely used express method and offers advantages such as a short analysis time and simplicity of testing and result evaluation. However, an LFIA based on gold nanospheres lacks the desired sensitivity, thereby limiting its wide application. In this study, spherical nanogold labels along with new types of nanogold labels such as gold nanopopcorns and nanostars were prepared, characterized, and applied for LFIA of the model protein antigen procalcitonin. It was found that the label with a structure close to spherical provided a more uniform distribution of specific antibodies on its surface, indicative of its suitability for this type of analysis. LFIA using gold nanopopcorns as a label allowed procalcitonin detection over a linear range of 0.5-10 ng mL-1 with a limit of detection of 0.1 ng mL-1, a fivefold improvement in sensitivity over the assay with gold nanospheres. Another approach to improving the sensitivity of the assay was the silver enhancement method, which was used to compare the amplification of LFIA for procalcitonin detection. The sensitivity of procalcitonin determination by this method was 10 times better than the sensitivity of the conventional LFIA with a gold nanosphere label. The proposed approach of LFIA based on gold nanopopcorns improved the detection sensitivity without additional steps and avoided increased consumption of specific reagents (antibodies).

  4. Neurobehavioral deficits, diseases, and associated costs of exposure to endocrine-disrupting chemicals in the European Union.

    PubMed

    Bellanger, Martine; Demeneix, Barbara; Grandjean, Philippe; Zoeller, R Thomas; Trasande, Leonardo

    2015-04-01

    Epidemiological studies and animal models demonstrate that endocrine-disrupting chemicals (EDCs) contribute to cognitive deficits and neurodevelopmental disabilities. The objective was to estimate neurodevelopmental disability and associated costs that can be reasonably attributed to EDC exposure in the European Union. An expert panel applied a weight-of-evidence characterization adapted from the Intergovernmental Panel on Climate Change. Exposure-response relationships and reference levels were evaluated for relevant EDCs, and biomarker data were organized from peer-reviewed studies to represent European exposure and approximate burden of disease. Cost estimation as of 2010 utilized lifetime economic productivity estimates, lifetime cost estimates for autism spectrum disorder, and annual costs for attention-deficit hyperactivity disorder. Cost estimation was carried out from a societal perspective, i.e., including direct costs (e.g., treatment costs) and indirect costs such as productivity loss. The panel identified a 70-100% probability that polybrominated diphenyl ether and organophosphate exposures contribute to IQ loss in the European population. Polybrominated diphenyl ether exposures were associated with 873,000 (sensitivity analysis, 148,000 to 2.02 million) lost IQ points and 3290 (sensitivity analysis, 3290 to 8080) cases of intellectual disability, at costs of €9.59 billion (sensitivity analysis, €1.58 billion to €22.4 billion). Organophosphate exposures were associated with 13.0 million (sensitivity analysis, 4.24 million to 17.1 million) lost IQ points and 59,300 (sensitivity analysis, 16,500 to 84,400) cases of intellectual disability, at costs of €146 billion (sensitivity analysis, €46.8 billion to €194 billion). Autism spectrum disorder causation by multiple EDCs was assigned a 20-39% probability, with 316 (sensitivity analysis, 126-631) attributable cases at a cost of €199 million (sensitivity analysis, €79.7 million to €399 million). Attention-deficit hyperactivity disorder causation by multiple EDCs was assigned a 20-69% probability, with 19,300 to 31,200 attributable cases at a cost of €1.21 billion to €2.86 billion. EDC exposures in Europe contribute substantially to neurobehavioral deficits and disease, with a high probability of costs exceeding €150 billion/year. These results emphasize the advantages of controlling EDC exposure.

  5. Sensitivity to imputation models and assumptions in receiver operating characteristic analysis with incomplete data

    PubMed Central

    Karakaya, Jale; Karabulut, Erdem; Yucel, Recai M.

    2015-01-01

    Modern statistical methods using incomplete data have been increasingly applied in a wide variety of substantive problems. Similarly, receiver operating characteristic (ROC) analysis, a method used in evaluating diagnostic tests or biomarkers in medical research, has become increasingly popular in both its development and application. While missing-data methods have been applied in ROC analysis, the impact of model mis-specification and/or assumptions (e.g. missing at random) underlying the missing data has not been thoroughly studied. In this work, we study the performance of multiple imputation (MI) inference in ROC analysis. Particularly, we investigate parametric and non-parametric techniques for MI inference under common missingness mechanisms. Depending on the coherency of the imputation model with the underlying data generation mechanism, our results show that MI generally leads to well-calibrated inferences under ignorable missingness mechanisms. PMID:26379316

  6. Pressurized capillary electrochromatographic analysis of water-soluble vitamins by combining with on-line concentration technique.

    PubMed

    Jia, Li; Liu, Yaling; Du, Yanyan; Xing, Da

    2007-06-22

    A pressurized capillary electrochromatography (pCEC) system was developed for the separation of water-soluble vitamins, in which UV absorbance was used as the detection method and a monolithic silica-ODS column as the separation column. The parameters (type and content of organic solvent in the mobile phase, type and concentration of electrolyte, pH of the electrolyte buffer, applied voltage and flow rate) affecting the separation resolution were evaluated. The combination of two on-line concentration techniques, namely solvent-gradient zone sharpening and field-enhanced sample stacking, was utilized to improve detection sensitivity by enabling the injection of large sample volumes. Coupling electrokinetic injection with the on-line concentration techniques was much more beneficial for the concentration of positively charged vitamins. Compared with the conventional injection mode, the enhancement in the detection sensitivities of water-soluble vitamins using the on-line concentration techniques ranges from 3- to 35-fold. The developed pCEC method was applied to evaluate water-soluble vitamins in corn.

  7. Laser-induced breakdown spectroscopy (LIBS), part II: review of instrumental and methodological approaches to material analysis and applications to different fields.

    PubMed

    Hahn, David W; Omenetto, Nicoló

    2012-04-01

    The first part of this two-part review focused on the fundamental and diagnostic aspects of laser-induced plasmas, only touching briefly upon concepts such as sensitivity and detection limits and largely omitting any discussion of the vast panorama of the practical applications of the technique. Clearly a true LIBS community has emerged, which promises to quicken the pace of LIBS developments, applications, and implementations. This second part takes a more applied flavor, and its intended goal is summarizing the current state of the art of analytical LIBS, providing a contemporary snapshot of LIBS applications, and highlighting new directions in laser-induced breakdown spectroscopy, such as novel approaches, instrumental developments, and advanced use of chemometric tools. More specifically, we discuss instrumental and analytical approaches (e.g., double- and multi-pulse LIBS to improve the sensitivity), calibration-free approaches, hyphenated approaches in which techniques such as Raman and fluorescence are coupled with LIBS to increase sensitivity and information power, resonantly enhanced LIBS approaches, signal processing and optimization (e.g., signal-to-noise analysis), and finally applications. An attempt is made to provide an updated view of the role played by LIBS in the various fields, with emphasis on applications considered to be unique. We finally try to assess where LIBS is going as an analytical field, where in our opinion it should go, and what should still be done to consolidate the technique as a mature method of chemical analysis. © 2012 Society for Applied Spectroscopy.

  8. Applied Meteorology Unit (AMU) Quarterly Report Third Quarter FY-08

    NASA Technical Reports Server (NTRS)

    Bauman, William; Crawford, Winifred; Barrett, Joe; Watson, Leela; Dreher, Joseph

    2008-01-01

    This report summarizes the Applied Meteorology Unit (AMU) activities for the third quarter of Fiscal Year 2008 (April - June 2008). Tasks reported on are: Peak Wind Tool for User Launch Commit Criteria (LCC), Anvil Forecast Tool in AWIPS Phase II, completion of the Edwards Air Force Base (EAFB) Statistical Guidance Wind Tool, Volume Averaged Height Integrated Radar Reflectivity (VAHIRR), Impact of Local Sensors, Radar Scan Strategies for the PAFB WSR-74C Replacement, VAHIRR Cost Benefit Analysis, and the WRF Wind Sensitivity Study at Edwards Air Force Base.

  9. Development of quality assurance methods for epoxy graphite prepreg

    NASA Technical Reports Server (NTRS)

    Chen, J. S.; Hunter, A. B.

    1982-01-01

    Quality assurance methods for graphite epoxy prepregs were developed. Liquid chromatography, differential scanning calorimetry, and gel permeation chromatography were investigated. These methods were applied to a second prepreg system. The resin matrix formulation was correlated with mechanical properties. Dynamic mechanical analysis and fracture toughness methods were also investigated. The chromatography and calorimetry techniques were all successfully developed as quality assurance methods for graphite epoxy prepregs. The liquid chromatography method was the most sensitive to changes in resin formulation. The dynamic mechanical analysis and fracture toughness methods were also successfully applied to the second prepreg system.

  10. Addiction Research Ethics and the Belmont Principles: Do Drug Users Have a Different Moral Voice?

    PubMed Central

    Fisher, Celia B.

    2013-01-01

    This study used semi-structured interviews and content analysis to examine moral principles that street drug users apply to three hypothetical addiction research ethical dilemmas. Participants (n = 90) were ethnically diverse, economically disadvantaged drug users recruited in New York City in 2009. Participants applied a wide range of contextually sensitive moral precepts, including respect, beneficence, justice, relationality, professional obligations, rules, and pragmatic self-interest. Limitations and implications for future research and the responsible conduct of addiction research are discussed. PMID:21073412

  11. Nursing-sensitive indicators: a concept analysis

    PubMed Central

    Heslop, Liza; Lu, Sai

    2014-01-01

    Aim: To report a concept analysis of nursing-sensitive indicators within the applied context of the acute care setting. Background: The concept of ‘nursing-sensitive indicators' is valuable for elaborating nursing care performance. The conceptual foundation, theoretical role, meaning, use and interpretation of the concept tend to differ. The elusiveness of the concept and the ambiguity of its attributes may have hindered research efforts to advance its application in practice. Design: Concept analysis. Data sources: Using ‘clinical indicators' or ‘quality of nursing care' as subject headings and incorporating keyword combinations of ‘acute care' and ‘nurs*', CINAHL and MEDLINE with full text in EBSCOhost databases were searched for English-language journal articles published between 2000 and 2012. Only primary research articles were selected. Methods: A hybrid approach was undertaken, incorporating traditional strategies as per Walker and Avant and a conceptual matrix based on Holzemer's Outcomes Model for Health Care Research. Results: The analysis revealed two main attributes of nursing-sensitive indicators. Structural attributes related to health service operation included hours of nursing care per patient day and nurse staffing. Outcome attributes related to patient care included the prevalence of pressure ulcers, falls and falls with injury, nosocomial selective infection and patient/family satisfaction with nursing care. Conclusion: This concept analysis may be used as a basis to advance understanding of the theoretical structures that underpin both research and practical application of quality dimensions of nursing care performance. PMID:25113388

  12. Computational simulation and aerodynamic sensitivity analysis of film-cooled turbines

    NASA Astrophysics Data System (ADS)

    Massa, Luca

    A computational tool is developed for the time accurate sensitivity analysis of the stage performance of hot gas, unsteady turbine components. An existing turbomachinery internal flow solver is adapted to the high temperature environment typical of the hot section of jet engines. A real gas model and film cooling capabilities are successfully incorporated in the software. The modifications to the existing algorithm are described; both the theoretical model and the numerical implementation are validated. The accuracy of the code in evaluating turbine stage performance is tested using a turbine geometry typical of the last stage of aeronautical jet engines. The results of the performance analysis show that the predictions differ from the experimental data by less than 3%. A reliable grid generator, applicable to the domain discretization of the internal flow field of axial flow turbines, is developed. A sensitivity analysis capability is added to the flow solver, rendering it able to accurately evaluate the derivatives of the time varying output functions. The complex Taylor's series expansion (CTSE) technique is reviewed, and two of its formulations are used to demonstrate the accuracy and time dependency of the differentiation process. The results are compared with finite difference (FD) approximations. The CTSE is more accurate than the FD, but less efficient. A "black box" differentiation of the source code, resulting from the automated application of the CTSE, generates high fidelity sensitivity algorithms, but with low computational efficiency and high memory requirements. New formulations of the CTSE are proposed and applied. Selective differentiation of the method for solving the non-linear implicit residual equation leads to sensitivity algorithms with the same accuracy but improved run time. The time dependent sensitivity derivatives are computed in run times comparable to the ones required by the FD approach.
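
    The complex-step idea behind the CTSE is easy to demonstrate: for real-analytic f, f(x + ih) = f(x) + ih f'(x) + O(h^2), so f'(x) ≈ Im[f(x + ih)]/h with no subtractive cancellation, allowing h to be taken extremely small. The sketch below (not the thesis code) compares this against a forward finite difference on a standard test function from the complex-step literature.

      import numpy as np

      def f(x):
          # Standard complex-step test function; stands in for a flow-solver output.
          return np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)

      x0 = 1.5
      exact = 4.05342789389862   # reference value of f'(1.5) for this test function

      h = 1e-30
      cs = np.imag(f(x0 + 1j * h)) / h      # complex step: no cancellation error
      fd = (f(x0 + 1e-8) - f(x0)) / 1e-8    # forward finite difference

      print(f"complex step: {cs:.14f} (error {abs(cs - exact):.1e})")
      print(f"finite diff : {fd:.14f} (error {abs(fd - exact):.1e})")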

  13. [Development of selective determination methods for quinones with fluorescence and chemiluminescence detection and their application to environmental and biological samples].

    PubMed

    Kishikawa, Naoya

    2010-10-01

    Quinones are compounds that have various characteristics such as a biological electron transporter, an industrial product and a harmful environmental pollutant. Therefore, an effective determination method for quinones is required in many fields. This review describes the development of sensitive and selective determination methods for quinones based on some detection principles and their application to analyses in environmental, pharmaceutical and biological samples. Firstly, a fluorescence method was developed based on fluorogenic derivatization of quinones and applied to environmental analysis. Secondly, a luminol chemiluminescence method was developed based on generation of reactive oxygen species through the redox cycle of quinone and applied to pharmaceutical analysis. Thirdly, a photo-induced chemiluminescence method was developed based on formation of reactive oxygen species and fluorophore or chemiluminescence enhancer by the photoreaction of quinones and applied to biological and environmental analyses.

  14. Good modeling practice guidelines for applying multimedia models in chemical assessments.

    PubMed

    Buser, Andreas M; MacLeod, Matthew; Scheringer, Martin; Mackay, Don; Bonnell, Mark; Russell, Mark H; DePinto, Joseph V; Hungerbühler, Konrad

    2012-10-01

    Multimedia mass balance models of chemical fate in the environment have been used for over 3 decades in a regulatory context to assist decision making. As these models become more comprehensive, reliable, and accepted, there is a need to recognize and adopt principles of Good Modeling Practice (GMP) to ensure that multimedia models are applied with transparency and adherence to accepted scientific principles. We propose and discuss 6 principles of GMP for applying existing multimedia models in a decision-making context, namely 1) specification of the goals of the model assessment, 2) specification of the model used, 3) specification of the input data, 4) specification of the output data, 5) conduct of a sensitivity and possibly also uncertainty analysis, and finally 6) specification of the limitations and limits of applicability of the analysis. These principles are justified and discussed with a view to enhancing the transparency and quality of model-based assessments. Copyright © 2012 SETAC.

  15. Analysis of plant nucleotide sugars by hydrophilic interaction liquid chromatography and tandem mass spectrometry.

    PubMed

    Ito, Jun; Herter, Thomas; Baidoo, Edward E K; Lao, Jeemeng; Vega-Sánchez, Miguel E; Michelle Smith-Moritz, A; Adams, Paul D; Keasling, Jay D; Usadel, Björn; Petzold, Christopher J; Heazlewood, Joshua L

    2014-03-01

    Understanding the intricate metabolic processes involved in plant cell wall biosynthesis is limited by difficulties in performing sensitive quantification of many of the compounds involved. Hydrophilic interaction liquid chromatography is a useful technique for the analysis of hydrophilic metabolites from complex biological extracts and forms the basis of this method to quantify plant cell wall precursors. A zwitterionic silica-based stationary phase has been used to separate hydrophilic nucleotide sugars involved in cell wall biosynthesis from milligram amounts of leaf tissue. A tandem mass spectrometer operating in selected reaction monitoring mode was used to quantify the nucleotide sugars. This method was highly repeatable and quantified 12 nucleotide sugars at low femtomole quantities, with linear responses spanning up to four orders of magnitude, up to several hundred picomoles. The method was also successfully applied to the analysis of purified leaf extracts from two model plant species with variations in their cell wall sugar compositions and indicated significant differences in the levels of 6 out of 12 nucleotide sugars. The plant nucleotide sugar extraction procedure was demonstrated to have good recovery rates with minimal matrix effects. The approach yields a significant improvement in sensitivity over currently employed techniques when applied to plant samples. Copyright © 2013 Elsevier Inc. All rights reserved.

  16. Analysis of DNA Cytosine Methylation Patterns Using Methylation-Sensitive Amplification Polymorphism (MSAP).

    PubMed

    Guevara, María Ángeles; de María, Nuria; Sáez-Laguna, Enrique; Vélez, María Dolores; Cervera, María Teresa; Cabezas, José Antonio

    2017-01-01

    Different molecular techniques have been developed to study either the global level of methylated cytosines or methylation at specific gene sequences. One of them is the methylation-sensitive amplified polymorphism technique (MSAP), which is a modification of amplified fragment length polymorphism (AFLP). It has been used to study methylation of anonymous CCGG sequences in different fungi, plants, and animal species. The main variation of this technique resides in the use of isoschizomers with different methylation sensitivity (such as HpaII and MspI) as the frequent-cutter restriction enzyme. For each sample, MSAP analysis is performed using both EcoRI/HpaII- and EcoRI/MspI-digested samples. A comparative analysis of the EcoRI/HpaII and EcoRI/MspI fragment patterns allows the identification of two types of polymorphisms: (1) methylation-insensitive polymorphisms, which show common EcoRI/HpaII and EcoRI/MspI patterns but are detected as polymorphic amplified fragments among samples, and (2) methylation-sensitive polymorphisms, which are associated with amplified fragments that differ in their presence or absence or in their intensity between the EcoRI/HpaII and EcoRI/MspI patterns. This chapter describes a detailed protocol of this technique and discusses the modifications that can be applied to adjust the technology to different species of interest.
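
    The comparative logic can be sketched in a few lines of Python; the banding data below are invented for illustration, and real MSAP scoring must also handle fragment intensity, which this presence/absence toy ignores.

      # Toy banding data: fragment -> sample -> (present in EcoRI/HpaII digest,
      # present in EcoRI/MspI digest). Invented values, not from a real gel.
      bands = {
          "frag_A": {"s1": (1, 1), "s2": (0, 0)},   # same H/M pattern, varies across samples
          "frag_B": {"s1": (0, 1), "s2": (1, 1)},   # H and M differ within sample s1
          "frag_C": {"s1": (1, 1), "s2": (1, 1)},   # identical everywhere
      }

      for frag, samples in bands.items():
          within_sample_diff = any(h != m for h, m in samples.values())
          across_sample_poly = len(set(samples.values())) > 1
          if within_sample_diff:
              cls = "methylation-sensitive polymorphism"
          elif across_sample_poly:
              cls = "methylation-insensitive polymorphism"
          else:
              cls = "monomorphic (uninformative)"
          print(f"{frag}: {cls}")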

  17. Chapter 5: Modulation Excitation Spectroscopy with Phase-Sensitive Detection for Surface Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shulda, Sarah; Richards, Ryan M.

    Advancements in in situ spectroscopic techniques have led to significant progress being made in elucidating heterogeneous reaction mechanisms. The potential of these progressive methods is often limited only by the complexity of the system and noise in the data. Short-lived intermediates can be challenging, if not impossible, to identify with conventional spectra analysis means. Often equally difficult is separating signals that arise from active and inactive species. Modulation excitation spectroscopy combined with phase-sensitive detection analysis is a powerful tool for removing noise from the data while simultaneously revealing the underlying kinetics of the reaction. A stimulus is applied at a constant frequency to the reaction system, for example, a reactant cycled with an inert phase. Through mathematical manipulation of the data, any signal contributing to the overall spectra but not oscillating with the same frequency as the stimulus will be dampened or removed. With phase-sensitive detection, signals oscillating with the stimulus frequency but with various lag times are amplified, providing valuable kinetic information. In this chapter, some examples are provided from the literature that have successfully used modulation excitation spectroscopy with phase-sensitive detection to uncover previously unobserved reaction intermediates and kinetics. Examples from a broad range of spectroscopic methods are included to provide perspective to the reader.
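
    A minimal numerical sketch of the demodulation step, assuming a periodic stimulus of known frequency: the noisy response is multiplied by a reference oscillating at k times the stimulus frequency and averaged, which recovers the amplitude at each demodulation phase while static and random contributions cancel. All names and numbers here are illustrative.

      import numpy as np

      T, fs, f_stim = 10.0, 1000.0, 0.5   # duration (s), sampling (Hz), stimulus (Hz)
      t = np.arange(0, T, 1 / fs)
      rng = np.random.default_rng(2)

      # Noisy "spectroscopic" response: one species oscillates with the stimulus
      # (lagging 30 deg); a static background and random noise do not.
      response = (np.sin(2 * np.pi * f_stim * t - np.deg2rad(30))
                  + 5.0 + rng.normal(0, 2, t.size))

      # Phase-sensitive detection at the fundamental (k = 1):
      # A_k(phi) = (2/T) * integral of A(t) * sin(k*w*t + phi) dt
      k = 1
      for phi_deg in range(0, 360, 30):
          ref = np.sin(2 * np.pi * k * f_stim * t + np.deg2rad(phi_deg))
          a_k = 2.0 * np.mean(response * ref)   # discretized (2/T) * integral
          print(f"phi = {phi_deg:3d} deg -> demodulated amplitude {a_k:+.3f}")
      # Output peaks near phi = 330 deg, where the reference compensates the
      # 30-deg lag; the static background and the noise average toward zero.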

  18. Recent development of electrochemiluminescence sensors for food analysis.

    PubMed

    Hao, Nan; Wang, Kun

    2016-10-01

    Food quality and safety are closely related to human health. In the face of unceasing food safety incidents, various analytical techniques, such as mass spectrometry, chromatography, spectroscopy, and electrochemistry, have been applied in food analysis. High sensitivity usually requires expensive instruments and complicated procedures. Although these modern analytical techniques are sensitive enough to ensure food safety, sometimes their applications are limited because of the cost, usability, and speed of analysis. Electrochemiluminescence (ECL) is a powerful analytical technique that is attracting more and more attention because of its outstanding performance. In this review, the mechanisms of ECL and common ECL luminophores are briefly introduced. Then an overall review of the principles and applications of ECL sensors for food analysis is provided. ECL can be flexibly combined with various separation techniques. Novel materials (e.g., various nanomaterials) and strategies (e.g., immunoassay, aptasensors, and microfluidics) have been progressively introduced into the design of ECL sensors. By illustrating some selected representative works, we summarize the state of the art in the development of ECL sensors for toxins, heavy metals, pesticides, residual drugs, illegal additives, viruses, and bacteria. Compared with other methods, ECL can provide rapid, low-cost, and sensitive detection for various food contaminants in complex matrixes. However, there are also some limitations and challenges. Improvements suited to the characteristics of food analysis are still necessary.

  19. THE RELATIVE IMPORTANCE OF THE VADOSE ZONE IN MULTIMEDIA RISK ASSESSMENT MODELING APPLIED AT A NATIONAL SCALE: AN ANALYSIS OF BENZENE USING 3MRA

    EPA Science Inventory

    Evaluating uncertainty and parameter sensitivity in environmental models can be a difficult task, even for low-order, single-media constructs driven by a unique set of site-specific data. The challenge of examining ever more complex, integrated, higher-order models is a formidab...

  20. Sensitivity analysis of tracer transport in variably saturated soils at USDA-ARS OPE3 field site

    USDA-ARS?s Scientific Manuscript database

    The objective of this study was to assess the effects of uncertainties in hydrologic and geochemical parameters on the results of simulations of the tracer transport in variably saturated soils at the USDA-ARS OPE3 field site. A tracer experiment with a pulse of KCl solution applied to an irrigatio...

  1. Asking Sensitive Questions: A Statistical Power Analysis of Randomized Response Models

    ERIC Educational Resources Information Center

    Ulrich, Rolf; Schroter, Hannes; Striegel, Heiko; Simon, Perikles

    2012-01-01

    This article derives the power curves for a Wald test that can be applied to randomized response models when small prevalence rates must be assessed (e.g., detecting doping behavior among elite athletes). These curves enable the assessment of the statistical power that is associated with each model (e.g., Warner's model, crosswise model, unrelated…
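
    The power computation being derived can be sketched numerically. Under Warner's design, a respondent draws the sensitive statement with probability p (and its negation otherwise), so the probability of a "yes" answer is lambda = p*pi + (1-p)*(1-pi) and Var(pi_hat) = lambda*(1-lambda) / [n*(2p-1)^2]. Assuming SciPy, the illustrative snippet below approximates the one-sided Wald-test power from the usual normal approximation; the prevalence, sample size and design parameter are made-up examples, not values from the article.

      from math import sqrt
      from scipy.stats import norm

      def warner_power(pi_true, pi_null, n, p, alpha=0.05):
          """Approximate power of a one-sided Wald test of H0: pi = pi_null
          against H1: pi > pi_null under Warner's randomized response design."""
          def var(pi):
              lam = p * pi + (1 - p) * (1 - pi)   # P("yes" answer)
              return lam * (1 - lam) / (n * (2 * p - 1) ** 2)
          z = norm.ppf(1 - alpha)
          se0, se1 = sqrt(var(pi_null)), sqrt(var(pi_true))
          # Reject when pi_hat exceeds pi_null + z*se0; evaluate that event under H1.
          return 1 - norm.cdf((pi_null + z * se0 - pi_true) / se1)

      # Made-up example: true prevalence 10%, null 2%, n = 1000, design p = 0.8.
      print(f"power = {warner_power(0.10, 0.02, 1000, 0.8):.3f}")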

  2. Analysis of Transition-Sensitized Turbulent Transport Equations

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Thacker, William D.; Gatski, Thomas B.; Grosch, Chester E.

    2005-01-01

    The dynamics of an ensemble of linear disturbances in boundary-layer flows at various Reynolds numbers is studied through an analysis of the transport equations for the mean disturbance kinetic energy and energy dissipation rate. Effects of adverse and favorable pressure gradients on the disturbance dynamics are also included in the analysis. Unlike the fully turbulent regime, where nonlinear phase scrambling of the fluctuations affects the flow field even in proximity to the wall, the early-stage transition regime fluctuations studied here are influenced across the boundary layer by the solid boundary. The dominating dynamics in the disturbance kinetic energy and dissipation rate equations are described. These results are then used to formulate transition-sensitized turbulent transport equations, which are solved in a two-step process and applied to zero-pressure-gradient flow over a flat plate. Computed results are in good agreement with experimental data.

  3. Ag nanoparticle-filled TiO2 nanotube arrays prepared by anodization and electrophoretic deposition for dye-sensitized solar cells

    NASA Astrophysics Data System (ADS)

    Wei, Xing; Sugri Nbelayim, Pascal; Kawamura, Go; Muto, Hiroyuki; Matsuda, Atsunori

    2017-03-01

    A layer of TiO2 nanotube (TNT) arrays with a thickness of 13 μm is synthesized by two-step anodic oxidation of Ti metal foil. Surface-charged Ag nanoparticles (NPs) are prepared by chemical reduction. After pretreatment of the TNT arrays with acetone vapor, Ag NP-filled TNT arrays can be obtained by electrophoretic deposition (EPD). Effects of the applied voltage during EPD, such as the DC-AC difference, frequency and waveform, are investigated by quantitative analysis using atomic absorption spectroscopy. The results show that the best EPD condition uses DC 2 V + AC 4 V with a 1 Hz square wave as the applied voltage. Back-illuminated dye-sensitized solar cells are fabricated from TNT arrays with and without Ag NPs. The efficiency increased from 3.70% to 5.01% upon deposition of the Ag NPs.

  4. Analysis and Implementation of Methodologies for the Monitoring of Changes in Eye Fundus Images

    NASA Astrophysics Data System (ADS)

    Gelroth, A.; Rodríguez, D.; Salvatelli, A.; Drozdowicz, B.; Bizai, G.

    2011-12-01

    We present a support system for change detection in fundus images of the same patient taken at different time intervals. This process is useful for monitoring pathologies lasting for long periods of time, as ophthalmologic pathologies usually are. We propose a flow of preprocessing, processing and postprocessing applied to a set of images selected from a public database that present pathological progression. A test interface was developed to select the images to be compared, to apply the different methods developed and to display the results. We measure the system performance in terms of sensitivity, specificity and computation time. We have obtained good results, higher than 84% for the first two parameters, and processing times lower than 3 seconds for 512x512 pixel images. For the specific case of detection of changes associated with bleeding, the system responds with sensitivity and specificity over 98%.

  5. Conformational effects on circular dichroism in the photoelectron angular distribution.

    PubMed

    Di Tommaso, Devis; Stener, Mauro; Fronzoni, Giovanna; Decleva, Piero

    2006-04-10

    The B-spline density-functional method has been applied to the conformers of the (1R, 2R)-1,2-dibromo-1,2-dichloro-1,2-difluoroethane molecule. The cross section, asymmetry, and dichroic parameters relative to core and valence orbitals, which do not change their nature along the conformational curve, have been systematically studied. While the cross section and the asymmetry parameter are weakly affected, the dichroic parameter appears to be rather sensitive to the particular conformer of the molecule, suggesting that this dynamical property could be a useful tool for conformational analysis. The computational method has also been applied to methyl rotation in methyloxirane. Unexpected and dramatic sensitivity of the dichroic-parameter profile to the methyl rotation, both in the core and valence states, has been found. Boltzmann averaging over the conformers reproduces quite closely the profiles previously obtained for the minimum-energy conformation, which is in good agreement with the experimental results.

  6. Analysis of Photothermal Characterization of Layered Materials: Design of Optimal Experiments

    NASA Technical Reports Server (NTRS)

    Cole, Kevin D.

    2003-01-01

    In this paper numerical calculations are presented for the steady-periodic temperature in layered materials and functionally-graded materials to simulate photothermal methods for the measurement of thermal properties. No laboratory experiments were performed. The temperature is found from a new Green's function formulation which is particularly well-suited to machine calculation. The simulation method is verified by comparison with literature data for a layered material. The method is applied to a class of two-component functionally-graded materials, and results for temperature and sensitivity coefficients are presented. An optimality criterion, based on the sensitivity coefficients, is used for choosing what experimental conditions will be needed for photothermal measurements to determine the spatial distribution of thermal properties. This method for optimal experiment design is completely general and may be applied to any photothermal technique and to any functionally-graded material.

  7. Analysis of Lidar Remote Sensing Concepts

    NASA Technical Reports Server (NTRS)

    Spiers, Gary D.

    1999-01-01

    Line-of-sight velocity and measurement position sensitivity analyses for an orbiting coherent Doppler lidar are developed and applied to two lidars, one with a nadir angle of 30 deg. in a 300 km altitude, 58 deg. inclination orbit and the second a 45 deg. nadir angle instrument in an 833 km altitude, 89 deg. inclination orbit. The influence of orbit-related effects on the backscatter sensitivity of a coherent Doppler lidar is also discussed. Draft performance estimates, error budgets and payload accommodation requirements for the SPARCLE (Space Readiness Coherent Lidar) instrument were also developed and documented.

  8. Using global sensitivity analysis of demographic models for ecological impact assessment.

    PubMed

    Aiello-Lammens, Matthew E; Akçakaya, H Resit

    2017-02-01

    Population viability analysis (PVA) is widely used to assess population-level impacts of environmental changes on species. When combined with sensitivity analysis, PVA yields insights into the effects of parameter and model structure uncertainty. This helps researchers prioritize efforts for further data collection so that model improvements are efficient and helps managers prioritize conservation and management actions. Usually, sensitivity is analyzed by varying one input parameter at a time and observing the influence that variation has over model outcomes. This approach does not account for interactions among parameters. Global sensitivity analysis (GSA) overcomes this limitation by varying several model inputs simultaneously. Then, regression techniques allow measuring the importance of input-parameter uncertainties. In many conservation applications, the goal of demographic modeling is to assess how different scenarios of impact or management cause changes in a population. This is challenging because the uncertainty of input-parameter values can be confounded with the effect of impacts and management actions. We developed a GSA method that separates model outcome uncertainty resulting from parameter uncertainty from that resulting from projected ecological impacts or simulated management actions, effectively separating the 2 main questions that sensitivity analysis asks. We applied this method to assess the effects of predicted sea-level rise on Snowy Plover (Charadrius nivosus). A relatively small number of replicate models (approximately 100) resulted in consistent measures of variable importance when not trying to separate the effects of ecological impacts from parameter uncertainty. However, many more replicate models (approximately 500) were required to separate these effects. These differences are important to consider when using demographic models to estimate ecological impacts of management actions. © 2016 Society for Conservation Biology.

  9. Dynamic sensitivity analysis of long running landslide models through basis set expansion and meta-modelling

    NASA Astrophysics Data System (ADS)

    Rohmer, Jeremy

    2016-04-01

    Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. global sensitivity analysis requires running the landslide model a large number of times (> 1000), which may become impracticable when the landslide model has a high computation time cost (> several hours); 2. landslide model outputs are not scalar but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model with a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long-running simulations. In particular, I identify the parameters which trigger the occurrence of a turning point marking a shift between a regime of low landslide displacement values and one of high values.
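
    The basis-set-expansion step can be sketched with synthetic data, assuming scikit-learn: the ensemble of time series is stacked into a matrix, PCA compresses it to a few component scores per run, and each score column then plays the role of a scalar output for the metamodel/Sobol' stage.

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(3)
      n_runs, n_times = 40, 500
      t = np.linspace(0, 1, n_times)

      # Synthetic ensemble standing in for landslide-displacement time series:
      # each run mixes a slow trend and a late acceleration with run-specific weights.
      w = rng.uniform(0, 1, size=(n_runs, 2))
      Y = (np.outer(w[:, 0], t)
           + np.outer(w[:, 1], np.maximum(0.0, t - 0.7) ** 2)
           + 0.01 * rng.normal(size=(n_runs, n_times)))

      pca = PCA(n_components=3).fit(Y)
      scores = pca.transform(Y)   # (n_runs, 3): one scalar output per mode and run
      print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
      # Each column scores[:, k] is now a scalar quantity of interest: fit a cheap
      # metamodel mapping input parameters to it and estimate Sobol' indices per mode.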

  10. A flexible method for residual stress measurement of spray coated layers by laser made hole drilling and SLM based beam steering

    NASA Astrophysics Data System (ADS)

    Osten, W.; Pedrini, G.; Weidmann, P.; Gadow, R.

    2015-08-01

    A minimally invasive but high-resolution method for residual stress analysis of ceramic coatings made by thermal spray coating, using a pulsed laser for flexible hole drilling, is described. The residual stresses are retrieved by feeding the measured surface data into a model-based reconstruction procedure. While the 3D deformations and the profile of the machined area are measured with digital holography, the residual stresses are calculated by FE analysis. To improve the sensitivity of the method, an SLM is applied to control the distribution and the shape of the holes. The paper presents the complete measurement and reconstruction procedure and discusses the advantages and challenges of the new technology.

  11. Optical bio-sniffer for ethanol vapor using an oxygen-sensitive optical fiber.

    PubMed

    Mitsubayashi, Kohji; Kon, Takuo; Hashimoto, Yuki

    2003-11-30

    An optical bio-sniffer for ethanol was constructed by immobilizing alcohol oxidase (AOD) onto the tip of a fiber-optic oxygen sensor with a tube ring, using an oxygen-sensitive ruthenium organic complex (excitation, 470 nm; fluorescence, 600 nm). A reaction unit for circulating buffer solution was fitted to the tip of the device. After experiments in the liquid phase, the sniffer device was applied to gas analysis using a gas-flow measurement system with a gas generator. The optical device detects the oxygen consumption induced by the AOD enzymatic reaction upon alcohol application. In the liquid phase, the sensor measured ethanol solutions from 0.50 to 9.09 mmol/l. The bio-sniffer was then calibrated against ethanol vapor from 0.71 to 51.49 ppm, with good gas selectivity based on the AOD substrate specificity. The bio-sniffer with the reaction unit was also used to monitor changes in gaseous ethanol concentration by rinsing and cleaning the fiber tip and the enzyme membrane with buffer solution.

  12. Identification of the most sensitive parameters in the activated sludge model implemented in BioWin software.

    PubMed

    Liwarska-Bizukojc, Ewa; Biernacki, Rafal

    2010-10-01

    In order to simulate biological wastewater treatment processes, data concerning wastewater and sludge composition, process kinetics and stoichiometry are required. Selection of the most sensitive parameters is an important step in model calibration. The aim of this work is to verify the predictability of the activated sludge model implemented in BioWin software and to select its most influential kinetic and stoichiometric parameters with the help of a sensitivity analysis approach. Two different measures of sensitivity are applied: the normalised sensitivity coefficient (S(i,j)) and the mean square sensitivity measure (δj(msqr)). It turns out that 17 kinetic and stoichiometric parameters of the BioWin activated sludge (AS) model can be regarded as influential on the basis of the S(i,j) calculations. Half of the influential parameters are associated with growth and decay of phosphorus-accumulating organisms (PAOs). The identification of the set of most sensitive parameters should support the users of this model and initiate the development of determination procedures for those parameters for which none exist yet. Copyright 2010 Elsevier Ltd. All rights reserved.
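
    In one common formulation (an assumption here; the paper's exact scaling may differ), with y_i the i-th model output, x_j the j-th parameter and N the number of outputs, the two measures read:

        S_{i,j} = \frac{\partial y_i}{\partial x_j}\,\frac{x_j}{y_i},
        \qquad
        \delta_j^{\mathrm{msqr}} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} S_{i,j}^{2}}

    The normalised coefficient makes outputs and parameters dimensionless, and the mean-square measure aggregates a parameter's influence over all outputs.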

  13. The art of spacecraft design: A multidisciplinary challenge

    NASA Technical Reports Server (NTRS)

    Abdi, F.; Ide, H.; Levine, M.; Austel, L.

    1989-01-01

    Actual design turn-around time has become shorter due to optimization techniques introduced into the design process. What, how and when to use these optimization techniques may be the key factor for future aircraft engineering operations. Another important aspect of this technique is that complex physical phenomena can be modeled by simple mathematical equations. The new, powerful multilevel methodology reduces time-consuming analysis significantly while maintaining the coupling effects. This simultaneous analysis method stems from the implicit function theorem and system sensitivity derivatives of input variables. The use of Taylor series expansion and finite-differencing techniques for sensitivity derivatives in each discipline makes this approach unique in its ability to screen dominant variables from nondominant ones. In this study, current computational fluid dynamics (CFD) aerodynamic and sensitivity derivative/optimization techniques are applied to a simple cone-type forebody of a high-speed vehicle configuration to understand basic aerodynamic/structure interaction in a hypersonic flight condition.
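
    A minimal sketch of screening dominant from nondominant variables with central finite-difference sensitivity derivatives, as the multilevel approach above does; the test function is an illustrative stand-in for an expensive discipline analysis.

        import numpy as np

        def analysis(x):
            # stand-in for an expensive discipline analysis (CFD, structures, ...)
            return x[0] ** 2 + 10.0 * x[1] + 0.01 * np.sin(x[2])

        x0 = np.array([1.0, 2.0, 3.0])
        h = 1e-6
        grad = np.empty_like(x0)
        for j in range(x0.size):
            e = np.zeros_like(x0)
            e[j] = h
            grad[j] = (analysis(x0 + e) - analysis(x0 - e)) / (2 * h)  # central difference

        # Normalize by variable magnitude to rank dominant vs. nondominant variables.
        importance = np.abs(grad * x0 / analysis(x0))
        print(dict(zip(["x1", "x2", "x3"], importance.round(4))))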

  14. Analysis of world terror networks from the reduced Google matrix of Wikipedia

    NASA Astrophysics Data System (ADS)

    El Zant, Samer; Frahm, Klaus M.; Jaffrès-Runser, Katia; Shepelyansky, Dima L.

    2018-01-01

    We apply the reduced Google matrix method to analyze interactions between 95 terrorist groups and determine their relationships and influence on 64 world countries. This is done on the basis of the Google matrix of the English Wikipedia (2017), composed of 5 416 537 articles, which accumulates a great part of global human knowledge. The reduced Google matrix takes into account the direct and hidden links between a selection of 159 nodes (articles) arising from all paths of a random surfer moving over the whole network. As a result, we obtain the network structure of terrorist groups and their relations with the selected countries, including hidden indirect links. Using the sensitivity of PageRank to a weight variation of specific links, we determine the geopolitical sensitivity and influence of specific terrorist groups on world countries. World maps of the sensitivity of various countries to the influence of specific terrorist groups are obtained. We argue that this approach can find useful application in the analysis of more extensive and detailed databases.
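
    A minimal sketch of the link-weight sensitivity idea above: compute PageRank by power iteration, nudge the weight of one link, and take the numerical derivative of every node's rank. The tiny 4-node network is an illustrative assumption, not the Wikipedia network or the reduced Google matrix.

        import numpy as np

        def pagerank(W, alpha=0.85, iters=200):
            """Power iteration on a column-stochastic Google matrix."""
            S = W / W.sum(axis=0, keepdims=True)    # normalize columns
            n = W.shape[0]
            G = alpha * S + (1 - alpha) / n         # damped Google matrix
            p = np.full(n, 1.0 / n)
            for _ in range(iters):
                p = G @ p
            return p / p.sum()

        W = np.array([[0, 1, 1, 0],
                      [1, 0, 0, 1],
                      [1, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)   # W[i, j] = link j -> i

        eps = 1e-6
        base = pagerank(W)
        Wp = W.copy()
        Wp[3, 0] += eps                             # perturb the weight of link 0 -> 3
        dP = (pagerank(Wp) - base) / eps            # sensitivity of every node's rank
        print(np.round(dP, 3))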

  15. An evaluation of computer-aided disproportionality analysis for post-marketing signal detection.

    PubMed

    Lehman, H P; Chen, J; Gould, A L; Kassekert, R; Beninger, P R; Carney, R; Goldberg, M; Goss, M A; Kidos, K; Sharrar, R G; Shields, K; Sweet, A; Wiholm, B E; Honig, P K

    2007-08-01

    To understand the value of computer-aided disproportionality analysis (DA) in relation to current pharmacovigilance signal detection methods, four products were retrospectively evaluated by applying an empirical Bayes method to Merck's post-marketing safety database. Findings were compared with the prior detection of labeled post-marketing adverse events. Disproportionality ratios (empirical Bayes geometric mean lower 95% bounds for the posterior distribution, EBGM05) were generated for product-event pairs. Overall (1993-2004 data, EBGM05 ≥ 2, individual terms) results of signal detection using DA compared to standard methods were sensitivity, 31.1%; specificity, 95.3%; and positive predictive value, 19.9%. Using groupings of synonymous labeled terms, sensitivity improved (40.9%). More of the adverse events detected by both methods were detected earlier using DA and grouped (versus individual) terms. With 1939-2004 data, diagnostic properties were similar to those from 1993 to 2004. DA methods using Merck's safety database demonstrate sufficient sensitivity and specificity to be considered for use as an adjunct to conventional signal detection methods.

  16. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Tolson, Bryan

    2017-04-01

    The increasing complexity and runtime of environmental models lead to the current situation that the calibration of all model parameters, or the estimation of all of their uncertainties, is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters or model processes. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state of the art, the convergence of the sensitivity methods is usually not checked. If anything, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indices. Bootstrapping, however, can itself become computationally expensive for large model outputs and a high number of bootstraps. We therefore present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indices without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards; the latter case enables the checking of already processed sensitivity indices. To demonstrate the method independence of the convergence testing, we applied it to three widely used global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991, Campolongo et al. 2000), the variance-based Sobol' method (Sobol' 1993, Saltelli et al. 2010) and a derivative-based method known as the Parameter Importance index (Goehler et al. 2013). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) for which the true indices of the aforementioned three methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. Subsequently, we focus on model independence by testing the frugal method with the hydrologic model mHM (www.ufz.de/mhm), which has about 50 model parameters. The results show that the new frugal method is able to test the convergence, and therefore the reliability, of SA results in an efficient way. The appealing feature of this technique is that it requires no further model evaluations, so it enables the checking of already processed (and published) sensitivity results. This is one step towards reliable and transferable published sensitivity results.
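
    A minimal sketch of one of the three SA methods named above, Morris Elementary Effects screening: one-at-a-time steps along random trajectories yield mu* (overall importance) and sigma (nonlinearity/interactions). The test function is illustrative, and the MVA convergence test itself is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(7)

        def model(x):
            # illustrative test function with a nonlinearity and an interaction
            return x[0] + 2.0 * x[1] ** 2 + 0.1 * x[0] * x[2]

        d, r, delta = 3, 20, 0.25                 # dimensions, trajectories, step size
        EE = np.zeros((r, d))
        for t in range(r):
            x = rng.uniform(0, 1 - delta, d)      # random trajectory start
            for j in rng.permutation(d):          # move one factor at a time
                x_new = x.copy()
                x_new[j] += delta
                EE[t, j] = (model(x_new) - model(x)) / delta
                x = x_new                         # walk along the trajectory

        mu_star = np.abs(EE).mean(axis=0)         # overall importance
        sigma = EE.std(axis=0)                    # nonlinearity / interactions
        print("mu* =", mu_star.round(2), "sigma =", sigma.round(2))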

  17. Sensitivity Analysis to Turbulent Combustion Models for Combustor-Turbine Interactions

    NASA Astrophysics Data System (ADS)

    Miki, Kenji; Moder, Jeff; Liou, Meng-Sing

    2017-11-01

    The recently updated Open National Combustion Code (Open NCC) equipped with a large-eddy simulation (LES) capability is applied to model the flow field inside the Energy Efficient Engine (EEE), in conjunction with a sensitivity analysis of turbulent combustion models. In this study, we consider three different turbulence-combustion interaction models, the Eddy-Breakup model (EBU), the Linear-Eddy Model (LEM) and the Probability Density Function (PDF) model, as well as the laminar chemistry model. A comprehensive comparison of the flow field and the flame structure is provided. One of our main interests is to understand how the different models predict the thermal variation on the surface of the first-stage vane. Considering that these models are often used in the combustor/turbine communities, this study should provide some guidelines on numerical modeling of combustor-turbine interactions.

  18. Enhancing coronary Wave Intensity Analysis robustness by high order central finite differences.

    PubMed

    Rivolo, Simone; Asrress, Kaleab N; Chiribiri, Amedeo; Sammut, Eva; Wesolowski, Roman; Bloch, Lars Ø; Grøndal, Anne K; Hønge, Jesper L; Kim, Won Y; Marber, Michael; Redwood, Simon; Nagel, Eike; Smith, Nicolas P; Lee, Jack

    2014-09-01

    Coronary Wave Intensity Analysis (cWIA) is a technique capable of separating the effects of proximal arterial haemodynamics from cardiac mechanics. Studies have identified WIA-derived indices that are closely correlated with several disease processes and predictive of functional recovery following myocardial infarction. The clinical application of cWIA has, however, been limited by technical challenges, including a lack of standardization across different studies and the sensitivity of the derived indices to the processing parameters. Specifically, a critical step in WIA is noise removal for the evaluation of derivatives of the acquired signals, typically performed by applying a Savitzky-Golay filter to reduce the high-frequency acquisition noise. The impact of the filter parameter selection on the cWIA output, and on the derived clinical metrics (integral areas and peaks of the major waves), is first analysed. The sensitivity analysis is performed both when the filter is used as a differentiator to calculate the signals' time derivatives and when it is applied to smooth the ensemble-averaged waveforms. Furthermore, the power spectrum of the ensemble-averaged waveforms contains few high-frequency components, which motivated us to propose an alternative approach: computing the time derivatives of the acquired waveforms with a central finite difference scheme. The cWIA output, and consequently the derived clinical metrics, are significantly affected by the filter parameters, irrespective of its use as a smoothing filter or a differentiator. The proposed approach is parameter-free and, when applied to the 10 in-vivo human datasets and the 50 in-vivo animal datasets, enhances the cWIA robustness by significantly reducing the outcome variability (by 60%).
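
    A minimal sketch contrasting the two derivative estimates discussed above: a Savitzky-Golay differentiator, whose output depends on the window and order chosen, versus the parameter-free central finite difference. The synthetic pressure trace and sampling rate are illustrative assumptions, not clinical data.

        import numpy as np
        from scipy.signal import savgol_filter

        dt = 0.004                                    # assumed 250 Hz sampling
        t = np.arange(0, 1, dt)
        noise = 0.5 * np.random.default_rng(0).normal(size=t.size)
        p = 80 + 40 * np.sin(2 * np.pi * t) + noise   # synthetic pressure trace

        # Savitzky-Golay as a differentiator: result depends on window/order choice.
        dp_sg = savgol_filter(p, window_length=21, polyorder=3, deriv=1, delta=dt)

        # Central finite differences: no tuning parameters.
        dp_cd = np.gradient(p, dt)

        print(f"max |difference| = {np.max(np.abs(dp_sg - dp_cd)):.2f}")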

  19. Sensitivity analysis of machine-learning models of hydrologic time series

    NASA Astrophysics Data System (ADS)

    O'Reilly, A. M.

    2017-12-01

    Sensitivity analysis traditionally has been applied to assessing model response to perturbations in model parameters, where the parameters are those model input variables adjusted during calibration. Unlike physics-based models, where parameters represent real phenomena, the equivalent of parameters for machine-learning models are simply mathematical "knobs" that are automatically adjusted during training/testing/verification procedures. Thus the challenge of extracting knowledge of hydrologic system functionality from machine-learning models lies in their very nature, leading to the label "black box." Sensitivity analysis of the forcing-response behavior of machine-learning models, however, can provide understanding of how the physical phenomena represented by model inputs affect the physical phenomena represented by model outputs. As part of a previous study, hybrid spectral-decomposition artificial neural network (ANN) models were developed to simulate the observed behavior of hydrologic response contained in multidecadal datasets of lake water level, groundwater level, and spring flow. Model inputs used moving window averages (MWA) to represent various frequencies and frequency-band components of time series of rainfall and groundwater use. Using these forcing time series, the MWA-ANN models were trained to predict time series of lake water level, groundwater level, and spring flow at 51 sites in central Florida, USA. A time series of sensitivities for each MWA-ANN model was produced by perturbing the forcing time series and computing the change in the response time series per unit change in the perturbation. Variations in forcing-response sensitivities are evident between site types (lake, groundwater level, or spring), spatially (among sites of the same type), and temporally. Two characteristics generally common among sites are more uniform sensitivities to rainfall over time and notable increases in sensitivity to groundwater use during significant drought periods.
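
    A minimal sketch of the perturbation procedure described above: nudge one forcing time series, re-evaluate the trained model, and divide the change in the response series by the perturbation. The linear moving-window "model" is an illustrative stand-in for a trained MWA-ANN.

        import numpy as np

        rng = np.random.default_rng(3)
        rain = rng.gamma(2.0, 1.0, 365)            # daily rainfall forcing
        pumping = rng.uniform(0.5, 1.5, 365)       # groundwater-use forcing

        def trained_model(rain, pumping):
            # stand-in for a trained ANN: level rises with rain, falls with pumping
            w = np.ones(30) / 30                   # 30-day moving window average
            return (10 + 0.8 * np.convolve(rain, w, "same")
                       - 1.5 * np.convolve(pumping, w, "same"))

        eps = 1e-3
        base = trained_model(rain, pumping)
        sens_rain = (trained_model(rain + eps, pumping) - base) / eps
        sens_pump = (trained_model(rain, pumping + eps) - base) / eps
        print(sens_rain.mean(), sens_pump.mean())  # time-averaged sensitivities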

  20. Sensitivity of land surface modeling to parameters: An uncertainty quantification method applied to the Community Land Model

    NASA Astrophysics Data System (ADS)

    Ricciuto, D. M.; Mei, R.; Mao, J.; Hoffman, F. M.; Kumar, J.

    2015-12-01

    Uncertainties in land parameters could have important impacts on simulated water and energy fluxes and land surface states, which will consequently affect atmospheric and biogeochemical processes. Therefore, quantification of such parameter uncertainties using a land surface model is the first step towards better understanding of predictive uncertainty in Earth system models. In this study, we applied a random-sampling, high-dimensional model representation (RS-HDMR) method to analyze the sensitivity of simulated photosynthesis, surface energy fluxes and surface hydrological components to selected land parameters in version 4.5 of the Community Land Model (CLM4.5). Because of the large computational expense of conducting ensembles of global gridded model simulations, we used the results of a previous cluster analysis to select one thousand representative land grid cells for simulation. Plant functional type (PFT)-specific uniform prior ranges for land parameters were determined using expert opinion and a literature survey, and samples were generated with a quasi-Monte Carlo approach (Sobol sequence). Preliminary analysis of 1024 simulations suggested that four PFT-dependent parameters (the slope of the conductance-photosynthesis relationship, specific leaf area at canopy top, leaf C:N ratio and fraction of leaf N in RuBisCO) are the dominant sensitive parameters for photosynthesis, surface energy and water fluxes across most PFTs, but with varying importance rankings. On the other hand, for surface and sub-surface runoff, PFT-independent parameters, such as the depth-dependent decay factors for runoff, play more important roles than the four PFT-dependent parameters above. Further analysis conditioning the results on different seasons and years is being conducted to provide guidance on how climate variability and change might affect such sensitivity. This is the first step toward coupled simulations including biogeochemical processes, atmospheric processes or both to determine the full range of sensitivity of Earth system modeling to land-surface parameters. This can facilitate sampling strategies in measurement campaigns targeted at reducing climate modeling uncertainties and can also provide guidance on land parameter calibration for simulation optimization.
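
    A minimal sketch of the ensemble design above: draw parameter samples from uniform prior ranges with a scrambled Sobol sequence. The parameter names and ranges are illustrative assumptions, not the CLM4.5 values.

        import numpy as np
        from scipy.stats import qmc

        names = ["slope_conductance", "sla_top", "leaf_cn", "frac_n_rubisco"]
        lo = np.array([4.0, 0.01, 20.0, 0.05])     # assumed lower prior bounds
        hi = np.array([12.0, 0.05, 60.0, 0.25])    # assumed upper prior bounds

        sampler = qmc.Sobol(d=4, scramble=True, seed=0)
        samples = qmc.scale(sampler.random_base2(m=10), lo, hi)  # 2**10 = 1024 runs
        print(samples.shape)                                     # (1024, 4)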

  1. Substructure Versus Property-Level Dispersed Modes Calculation

    NASA Technical Reports Server (NTRS)

    Stewart, Eric C.; Peck, Jeff A.; Bush, T. Jason; Fulcher, Clay W.

    2016-01-01

    This paper calculates the effect of perturbed finite element mass and stiffness values on the eigenvectors and eigenvalues of the finite element model. The structure is perturbed in two ways: at the "subelement" level and at the material property level. In the subelement eigenvalue uncertainty analysis, the mass and stiffness of each subelement are perturbed by a factor before being assembled into the global matrices. In the property-level eigenvalue uncertainty analysis, all material density and stiffness parameters of the structure are perturbed prior to the eigenvalue analysis. The eigenvalue and eigenvector dispersions of each analysis (subelement and property-level) are also calculated using an analytical sensitivity approximation. Two structural models are used to compare these methods: a cantilevered beam model and a model of the Space Launch System. For each structural model, it is shown how well the analytical sensitivity modes approximate the exact modes when the uncertainties are applied at the subelement level and at the property level.
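
    For context, first-order eigenpair derivatives of the generalized problem K φ_i = λ_i M φ_i with mass-normalized modes (φ_i^T M φ_i = 1) take the classical Fox-Kapoor form below; this is the standard result that such analytical sensitivity approximations typically build on, not necessarily the paper's exact expressions (p denotes a perturbed mass or stiffness parameter):

        \frac{\partial \lambda_i}{\partial p}
          = \phi_i^{T}\!\left(\frac{\partial K}{\partial p} - \lambda_i \frac{\partial M}{\partial p}\right)\phi_i,
        \qquad
        \frac{\partial \phi_i}{\partial p}
          = \sum_{k \neq i} \frac{\phi_k^{T}\!\left(\frac{\partial K}{\partial p} - \lambda_i \frac{\partial M}{\partial p}\right)\phi_i}{\lambda_i - \lambda_k}\,\phi_k
            - \tfrac{1}{2}\left(\phi_i^{T}\frac{\partial M}{\partial p}\phi_i\right)\phi_i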

  2. Univariate and multivariate analysis of tannin-impregnated wood species using vibrational spectroscopy.

    PubMed

    Schnabel, Thomas; Musso, Maurizio; Tondi, Gianluca

    2014-01-01

    Vibrational spectroscopy is one of the most powerful tools in polymer science. Three main techniques--Fourier transform infrared spectroscopy (FT-IR), FT-Raman spectroscopy, and FT near-infrared (NIR) spectroscopy--can also be applied to wood science. Here, these three techniques were used to investigate the chemical modification occurring in wood after impregnation with tannin-hexamine preservatives. These spectroscopic techniques have the capacity to detect the externally added tannin. FT-IR has very strong sensitivity to the aromatic peak at around 1610 cm(-1) in the tannin-treated samples, whereas FT-Raman reflects the peak at around 1600 cm(-1) for the externally added tannin. This high efficacy in distinguishing chemical features was demonstrated in univariate analysis and confirmed via cluster analysis. Conversely, the results of the NIR measurements show noticeable sensitivity for small differences. For this technique, multivariate analysis is required and with this chemometric tool, it is also possible to predict the concentration of tannin on the surface.

  3. Measuring Road Network Vulnerability with Sensitivity Analysis

    PubMed Central

    Jun-qiang, Leng; Long-hai, Yang; Liu, Wei-yi; Zhao, Lin

    2017-01-01

    This paper focuses on the development of a method for road network vulnerability analysis from the perspective of capacity degradation, which seeks to identify the critical infrastructures in the road network and the operational performance of the whole traffic system. The research involves defining a traffic utility index and modeling the vulnerability of road segments, routes, OD (origin-destination) pairs and the road network. A sensitivity analysis method is used to calculate the change in the traffic utility index due to capacity degradation. Compared to traditional traffic assignment, this method improves calculation efficiency and makes it possible to apply vulnerability analysis to large real-world road networks. Finally, all of the above models and the calculation method are applied to the evaluation of an actual road network to verify their efficiency and utility. This approach can be used as a decision-supporting tool for evaluating the performance of road networks and identifying critical infrastructures in transportation planning and management, especially in resource allocation for mitigation and recovery. PMID:28125706

  4. Spectrophotometric studies of reactions between pseudo-ephedrine with different inorganic and organic reagents and its micro-determination in pure and in pharmaceutical preparations.

    PubMed

    Zayed, M A; El-Rasheedy, El-Gazy A

    2012-03-01

    Two simple, sensitive, cheap and reliable spectrophotometric methods are suggested for the micro-determination of pseudoephedrine in its pure form and in a pharmaceutical preparation (Sinofree tablets). The first depends on the reaction of the drug with a sensitive inorganic reagent, the molybdate anion, in aqueous media via an ion-pair formation mechanism. The second depends on the reaction of the drug with a π-acceptor reagent, DDQ, in non-aqueous media via formation of a charge-transfer complex. These reactions were studied under various conditions and the optimum parameters were selected. Under the proper conditions, the suggested procedures were successfully applied to the micro-determination of pseudoephedrine in pure form and in Sinofree tablets without interference from excipients. The values of SD, RSD, recovery %, LOD, LOQ and Sandell sensitivity indicate the high accuracy and precision of the applied procedures. The results were compared with data obtained by an official method, showing agreement with the DDQ procedure results while indicating the higher accuracy of the molybdate data. The suggested procedures are now being applied successfully in the routine analysis of this drug in its pharmaceutical formulation (Sinofree) at the Saudi Arabian Pharmaceutical Company (SPIMACO) in Boridah, El-Qaseem, Saudi Arabia, in place of the imported kits previously used. Copyright © 2011 Elsevier B.V. All rights reserved.

  5. Nanoliter-Scale Oil-Air-Droplet Chip-Based Single Cell Proteomic Analysis.

    PubMed

    Li, Zi-Yi; Huang, Min; Wang, Xiu-Kun; Zhu, Ying; Li, Jin-Song; Wong, Catherine C L; Fang, Qun

    2018-04-17

    Single cell proteomic analysis provides crucial information on cellular heterogeneity in biological systems. Herein, we describe a nanoliter-scale oil-air-droplet (OAD) chip for achieving multistep, complex sample pretreatment and injection for single cell proteomic analysis in the shotgun mode. By using miniaturized stationary droplet microreaction and manipulation techniques, our system allows all sample pretreatment and injection procedures to be performed in a nanoliter-scale droplet with minimal sample loss and a high sample injection efficiency (>99%), thus substantially increasing the analytical sensitivity for single cell samples. We applied the present system to the proteomic analysis of 100 ± 10, 50 ± 5, 10, and 1 HeLa cell(s), identifying 1360, 612, 192, and 51 protein IDs, respectively. The OAD chip-based system was further applied to single mouse oocyte analysis, with 355 protein IDs identified at the single-oocyte level, demonstrating its advantages over the traditional in-tube system in sequence coverage, enrichment of hydrophobic proteins, and enzymatic digestion efficiency.

  6. Network analysis of mesoscale optical recordings to assess regional, functional connectivity.

    PubMed

    Lim, Diana H; LeDue, Jeffrey M; Murphy, Timothy H

    2015-10-01

    With modern optical imaging methods, it is possible to map structural and functional connectivity. Optical imaging studies that aim to describe large-scale neural connectivity often need to handle large and complex datasets. In order to interpret these datasets, new methods for analyzing structural and functional connectivity are being developed. Recently, network analysis, based on graph theory, has been used to describe and quantify brain connectivity in both experimental and clinical studies. We outline how to apply regional, functional network analysis to mesoscale optical imaging using voltage-sensitive-dye imaging and channelrhodopsin-2 stimulation in a mouse model. We include links to sample datasets and an analysis script. The analyses we employ can be applied to other types of fluorescence wide-field imaging, including genetically encoded calcium indicators, to assess network properties. We discuss the benefits and limitations of using network analysis for interpreting optical imaging data and define network properties that may be used to compare across preparations or other manipulations such as animal models of disease.
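
    A minimal sketch of regional functional network analysis of the kind outlined above: threshold a correlation matrix of regional signals into a weighted graph and compute graph-theory properties. The synthetic signals stand in for mesoscale imaging data.

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(1)
        signals = rng.normal(size=(8, 1000))          # 8 regions x 1000 frames
        signals[1] += 0.8 * signals[0]                # induce two correlated regions

        C = np.corrcoef(signals)                      # regional functional connectivity
        np.fill_diagonal(C, 0.0)
        A = np.where(np.abs(C) > 0.3, np.abs(C), 0.0) # threshold into an adjacency matrix

        G = nx.from_numpy_array(A)
        print("node strength:", dict(G.degree(weight="weight")))
        print("clustering:", nx.clustering(G, weight="weight"))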

  7. Optical assay for biotechnology and clinical diagnosis.

    PubMed

    Moczko, Ewa; Cauchi, Michael; Turner, Claire; Meglinski, Igor; Piletsky, Sergey

    2011-08-01

    In this paper, we present an optical diagnostic assay consisting of a mixture of environment-sensitive fluorescent dyes combined with multivariate data analysis for the quantitative and qualitative examination of biological and clinical samples. The performance of the assay is based on analysis of the spectra of the selected fluorescent dyes, with an operational principle similar to that of electronic nose and electronic tongue systems. This approach has been successfully applied to the monitoring of growing cell cultures and the identification of gastrointestinal diseases in humans.

  8. Oil and Gas Supply Modeling

    NASA Astrophysics Data System (ADS)

    Gass, S. I.

    1982-05-01

    The theoretical and applied state of the art of oil and gas supply models was discussed. The following areas were addressed: the realities of oil and gas supply, prediction of oil and gas production, problems in oil and gas modeling, resource appraisal procedures, forecasting field size and production, investment and production strategies, estimating cost and production schedules for undiscovered fields, production regulations, resource data, sensitivity analysis of forecasts, econometric analysis of resource depletion, oil and gas finding rates, and various models of oil and gas supply.

  9. The Validity of Conscientiousness Is Overestimated in the Prediction of Job Performance.

    PubMed

    Kepes, Sven; McDaniel, Michael A

    2015-01-01

    Sensitivity analyses refer to investigations of the degree to which the results of a meta-analysis remain stable when conditions of the data or the analysis change. To the extent that results remain stable, one can refer to them as robust. Sensitivity analyses are rarely conducted in the organizational science literature. Despite conscientiousness being a valued predictor in employment selection, sensitivity analyses have not been conducted with respect to meta-analytic estimates of the correlation (i.e., validity) between conscientiousness and job performance. To address this deficiency, we reanalyzed the largest collection of conscientiousness validity data in the personnel selection literature and conducted a variety of sensitivity analyses. Publication bias analyses demonstrated that the validity of conscientiousness is moderately overestimated (by around 30%; a correlation difference of about .06). The misestimation of the validity appears to be due primarily to suppression of small effects sizes in the journal literature. These inflated validity estimates result in an overestimate of the dollar utility of personnel selection by millions of dollars and should be of considerable concern for organizations. The fields of management and applied psychology seldom conduct sensitivity analyses. Through the use of sensitivity analyses, this paper documents that the existing literature overestimates the validity of conscientiousness in the prediction of job performance. Our data show that effect sizes from journal articles are largely responsible for this overestimation.

  10. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Ye, Ming; Walker, Anthony P.

    Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty and ignore the model uncertainty in process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating model averaging methods into the framework of variance-based global sensitivity analysis, given that model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance; the index includes the variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the recharge and geology processes in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models with different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general and can be applied to a wide range of problems in hydrology and beyond.

  11. The Validity of Conscientiousness Is Overestimated in the Prediction of Job Performance

    PubMed Central

    2015-01-01

    Introduction Sensitivity analyses refer to investigations of the degree to which the results of a meta-analysis remain stable when conditions of the data or the analysis change. To the extent that results remain stable, one can refer to them as robust. Sensitivity analyses are rarely conducted in the organizational science literature. Despite conscientiousness being a valued predictor in employment selection, sensitivity analyses have not been conducted with respect to meta-analytic estimates of the correlation (i.e., validity) between conscientiousness and job performance. Methods To address this deficiency, we reanalyzed the largest collection of conscientiousness validity data in the personnel selection literature and conducted a variety of sensitivity analyses. Results Publication bias analyses demonstrated that the validity of conscientiousness is moderately overestimated (by around 30%; a correlation difference of about .06). The misestimation of the validity appears to be due primarily to suppression of small effects sizes in the journal literature. These inflated validity estimates result in an overestimate of the dollar utility of personnel selection by millions of dollars and should be of considerable concern for organizations. Conclusion The fields of management and applied psychology seldom conduct sensitivity analyses. Through the use of sensitivity analyses, this paper documents that the existing literature overestimates the validity of conscientiousness in the prediction of job performance. Our data show that effect sizes from journal articles are largely responsible for this overestimation. PMID:26517553

  12. Comparative peptidomics analysis of neural adaptations in rats repeatedly exposed to amphetamine.

    PubMed

    Romanova, Elena V; Lee, Ji Eun; Kelleher, Neil L; Sweedler, Jonathan V; Gulley, Joshua M

    2012-10-01

    Repeated exposure to amphetamine (AMPH) induces long-lasting behavioral changes, referred to as sensitization, that are accompanied by various neuroadaptations in the brain. To investigate the chemical changes that occur during behavioral sensitization, we applied a comparative proteomics approach to screen for neuropeptide changes in a rodent model of AMPH-induced sensitization. By measuring peptide profiles with matrix-assisted laser desorption/ionization time-of-flight mass spectrometry and comparing signal intensities using principal component analysis and variance statistics, subsets of peptides are found with significant differences in the dorsal striatum, nucleus accumbens, and medial prefrontal cortex of AMPH-sensitized male Sprague-Dawley rats. These biomarker peptides, identified in follow-up analyses using liquid chromatography and tandem mass spectrometry, suggest that behavioral sensitization to AMPH is associated with complex chemical adaptations that regulate energy/metabolism, neurotransmission, apoptosis, neuroprotection, and neuritogenesis, as well as cytoskeleton integrity and neuronal morphology. Our data contribute to a growing number of reports showing that in addition to the mesolimbic dopamine system, which is the best known signaling pathway involved with reinforcing the effect of psychostimulants, concomitant chemical changes in other pathways and in neuronal organization may play a part in the overall effect of chronic AMPH exposure on behavior. © 2012 The Authors Journal of Neurochemistry © 2012 International Society for Neurochemistry.

  13. Highly Sensitive Hot-Wire Anemometry Based on Macro-Sized Double-Walled Carbon Nanotube Strands.

    PubMed

    Wang, Dingqu; Xiong, Wei; Zhou, Zhaoying; Zhu, Rong; Yang, Xing; Li, Weihua; Jiang, Yueyuan; Zhang, Yajun

    2017-08-01

    This paper presents a highly sensitive flow-rate sensor with carbon nanotubes (CNTs) as sensing elements. The sensor uses micrometer-scale, centimeters-long double-walled CNT (DWCNT) strands as hot-wires to sense fluid velocity. In the theoretical analysis, the sensitivity of the sensor is shown to be positively related to its surface-to-volume ratio. We assemble the flow sensor by suspending the DWCNT strand directly on two tungsten prongs and dripping a small amount of silver glue onto each contact between the DWCNT and the prongs. The DWCNT exhibits a positive TCR of 1980 ppm/K. The self-heating effect on the DWCNT was observed while a constant current was applied between the two prongs. This sensor responds clearly to flow rate and requires only several milliwatts to operate. We have, thus far, demonstrated that the CNT-based flow sensor has better sensitivity than a Pt-coated DWCNT sensor.

  14. Selection of optimal sensors for predicting performance of polymer electrolyte membrane fuel cell

    NASA Astrophysics Data System (ADS)

    Mao, Lei; Jackson, Lisa

    2016-10-01

    In this paper, sensor selection algorithms based on a sensitivity analysis are investigated, and the capability of the optimal sensors to predict PEM fuel cell performance is studied using test data. A fuel cell model is developed to generate the sensitivity matrix relating sensor measurements to fuel cell health parameters. From the sensitivity matrix, two sensor selection approaches, the largest-gap method and an exhaustive brute-force search, are applied to find the optimal sensors providing reliable predictions. Based on the results, a sensor selection approach considering both sensor sensitivity and noise resistance is proposed to find the optimal sensor set of minimum size. Furthermore, the performance of the optimal sensor set in predicting fuel cell performance is studied using test data from a PEM fuel cell system. Results demonstrate that with the optimal sensors, PEM fuel cell performance can be predicted with good quality.
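
    A minimal sketch of an exhaustive brute-force sensor search over a sensitivity matrix; the random matrix and the smallest-singular-value selection criterion are illustrative assumptions, not the paper's exact algorithm.

        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(5)
        S = rng.normal(size=(8, 3))        # 8 candidate sensors x 3 health parameters

        best, best_score = None, -np.inf
        for subset in combinations(range(8), 3):     # every 3-sensor combination
            sv = np.linalg.svd(S[list(subset), :], compute_uv=False)
            if sv[-1] > best_score:                  # maximize smallest singular value
                best, best_score = subset, sv[-1]
        print("optimal sensors:", best, "score:", round(best_score, 3))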

  15. Sensitivity Analysis in Sequential Decision Models.

    PubMed

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.

  16. ASKI: A modular toolbox for scattering-integral-based seismic full waveform inversion and sensitivity analysis utilizing external forward codes

    NASA Astrophysics Data System (ADS)

    Schumacher, Florian; Friederich, Wolfgang

    Due to increasing computational resources, the development of new numerically demanding methods and software for imaging Earth's interior remains of high interest in Earth sciences. Here, we give a description, from a user's and programmer's perspective, of the highly modular, flexible and extendable software package ASKI (Analysis of Sensitivity and Kernel Inversion), recently developed for iterative scattering-integral-based seismic full waveform inversion. In ASKI, the three fundamental steps of solving the seismic forward problem, computing waveform sensitivity kernels and deriving a model update are handled by independent software programs that interact via file output/input only. Furthermore, the spatial discretizations of the model space used for solving the seismic forward problem and for deriving model updates are kept completely independent. For this reason, ASKI does not contain a specific forward solver but instead provides a general interface to established community wave propagation codes. Moreover, the third fundamental step of deriving a model update can be repeated at relatively low cost, applying different kinds of model regularization or re-selecting/weighting the inverted dataset, without the need to re-solve the forward problem or re-compute the kernels. Additionally, ASKI offers the user sensitivity and resolution analysis tools based on the full sensitivity matrix and allows customized workflows to be composed in a consistent computational environment. ASKI is written in modern Fortran and Python; it is well documented and freely available under the terms of the GNU General Public License (http://www.rub.de/aski).

  17. A generalized matching law analysis of cocaine vs. food choice in rhesus monkeys: effects of candidate 'agonist-based' medications on sensitivity to reinforcement.

    PubMed

    Hutsell, Blake A; Negus, S Stevens; Banks, Matthew L

    2015-01-01

    We have previously demonstrated reductions in cocaine choice produced by either continuous 14-day phendimetrazine and d-amphetamine treatment or removing cocaine availability under a cocaine vs. food choice procedure in rhesus monkeys. The aim of the present investigation was to apply the concatenated generalized matching law (GML) to cocaine vs. food choice dose-effect functions incorporating sensitivity to both the relative magnitude and price of each reinforcer. Our goal was to determine potential behavioral mechanisms underlying pharmacological treatment efficacy to decrease cocaine choice. A multi-model comparison approach was used to characterize dose- and time-course effects of both pharmacological and environmental manipulations on sensitivity to reinforcement. GML models provided an excellent fit of the cocaine choice dose-effect functions in individual monkeys. Reductions in cocaine choice by both pharmacological and environmental manipulations were principally produced by systematic decreases in sensitivity to reinforcer price and non-systematic changes in sensitivity to reinforcer magnitude. The modeling approach used provides a theoretical link between the experimental analysis of choice and pharmacological treatments being evaluated as candidate 'agonist-based' medications for cocaine addiction. The analysis suggests that monoamine releaser treatment efficacy to decrease cocaine choice was mediated by selectively increasing the relative price of cocaine. Overall, the net behavioral effect of these pharmacological treatments was to increase substitutability of food pellets, a nondrug reinforcer, for cocaine. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
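
    A common form of the concatenated generalized matching law for this kind of drug-vs-food choice (an assumed parameterization; the paper's exact model may differ), with B the behavior allocated to cocaine (C) or food (F), M the reinforcer magnitude, P the reinforcer price, a_M and a_P the two sensitivities, and log b a bias term:

        \log\frac{B_C}{B_F} = a_M \log\frac{M_C}{M_F} + a_P \log\frac{P_F}{P_C} + \log b

    Under this form, the reported treatment effects correspond to changes in a_P (sensitivity to price) with little systematic change in a_M (sensitivity to magnitude).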

  18. On-line focusing of flavin derivatives using Dynamic pH junction-sweeping capillary electrophoresis with laser-induced fluorescence detection.

    PubMed

    Britz-McKibbin, Philip; Otsuka, Koji; Terabe, Shigeru

    2002-08-01

    Simple yet effective methods to enhance concentration sensitivity are needed for capillary electrophoresis (CE) to become a practical method for analyzing trace levels of analytes in real samples. In this report, a novel on-line preconcentration technique combining the dynamic pH junction and sweeping modes of focusing is developed and applied to the sensitive and selective analysis of three flavin derivatives: riboflavin, flavin mononucleotide (FMN) and flavin adenine dinucleotide (FAD). Picomolar (pM) detectability of flavins by CE with laser-induced fluorescence (LIF) detection is demonstrated through effective focusing of large sample volumes (up to 22% of the capillary length) using a dual pH junction-sweeping focusing mode. This results in a greater than 1200-fold improvement in sensitivity relative to conventional injection methods, giving a limit of detection (S/N = 3) of approximately 4.0 pM for FAD and FMN. Flavin focusing is examined in terms of the dependence of analyte mobility on buffer pH, borate complexation and SDS interaction. Dynamic pH junction-sweeping extends on-line focusing to both neutral (hydrophobic) and weakly acidic (hydrophilic) species and is considered useful in cases where either conventional sweeping or dynamic pH junction used alone is less effective for certain classes of analytes. The enhanced focusing performance of this hyphenated method was demonstrated by a greater than 4-fold reduction in flavin bandwidth compared to either sweeping or dynamic pH junction alone, reflected in analyte detector bandwidths < 0.20 cm. Novel on-line focusing strategies are required to improve sensitivity in CE and may be applied toward more effective biochemical analysis methods for diverse types of analytes.

  19. Nonindependence and sensitivity analyses in ecological and evolutionary meta-analyses.

    PubMed

    Noble, Daniel W A; Lagisz, Malgorzata; O'dea, Rose E; Nakagawa, Shinichi

    2017-05-01

    Meta-analysis is an important tool for synthesizing research on a variety of topics in ecology and evolution, including molecular ecology, but can be susceptible to nonindependence. Nonindependence can affect two major interrelated components of a meta-analysis: (i) the calculation of effect size statistics and (ii) the estimation of overall meta-analytic estimates and their uncertainty. While some solutions to nonindependence exist at the statistical analysis stages, there is little advice on what to do when complex analyses are not possible, or when studies with nonindependent experimental designs exist in the data. Here we argue that exploring the effects of procedural decisions in a meta-analysis (e.g. inclusion of different quality data, choice of effect size) and statistical assumptions (e.g. assuming no phylogenetic covariance) using sensitivity analyses are extremely important in assessing the impact of nonindependence. Sensitivity analyses can provide greater confidence in results and highlight important limitations of empirical work (e.g. impact of study design on overall effects). Despite their importance, sensitivity analyses are seldom applied to problems of nonindependence. To encourage better practice for dealing with nonindependence in meta-analytic studies, we present accessible examples demonstrating the impact that ignoring nonindependence can have on meta-analytic estimates. We also provide pragmatic solutions for dealing with nonindependent study designs, and for analysing dependent effect sizes. Additionally, we offer reporting guidelines that will facilitate disclosure of the sources of nonindependence in meta-analyses, leading to greater transparency and more robust conclusions. © 2017 John Wiley & Sons Ltd.

  20. CORSSTOL: Cylinder Optimization of Rings, Skin, and Stringers with Tolerance sensitivity

    NASA Technical Reports Server (NTRS)

    Finckenor, J.; Bevill, M.

    1995-01-01

    Cylinder Optimization of Rings, Skin, and Stringers with Tolerance sensitivity (CORSSTOL) is a design optimization program incorporating a method to examine the effects of user-provided manufacturing tolerances on weight and failure. CORSSTOL gives designers a tool to determine tolerances based on need: a decisive way to choose the best design among several manufacturing methods with differing capabilities and costs. CORSSTOL initially optimizes a stringer-stiffened cylinder for weight without tolerances; the skin and stringer geometry are varied subject to stress and buckling constraints. The same analysis and optimization routines are then used to minimize the maximum-material-condition weight subject to the least favorable combination of tolerances. The adjusted optimum dimensions are provided along with the weight and constraint sensitivities of each design variable, so the designer can immediately identify critical tolerances. The safety of parts made out of tolerance can also be determined. During design and development of weight-critical systems, design/analysis tools that provide product-oriented results are of vital significance. The development of this program and methodology provides designers with an effective cost- and weight-saving design tool. The tolerance sensitivity method can be applied to any system defined by a set of deterministic equations.

  1. Sampling and sensitivity analyses tools (SaSAT) for computational modelling

    PubMed Central

    Hoare, Alexander; Regan, David G; Wilson, David P

    2008-01-01

    SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab®, a numerical mathematical software package, and utilises algorithms contained in the Matlab® Statistics Toolbox. However, Matlab® is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling and their usefulness in this context is demonstrated. PMID:18304361
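
    A minimal sketch of two of the SaSAT-style tools listed above, Latin hypercube sampling and partial rank correlation coefficients (PRCCs), written in Python rather than Matlab®; the toy epidemic quantity and parameter ranges are illustrative.

        import numpy as np
        from scipy.stats import qmc, rankdata

        X = qmc.scale(qmc.LatinHypercube(d=3, seed=2).random(300),
                      [0.1, 0.05, 1.0], [0.5, 0.2, 10.0])  # beta, gamma, contacts
        y = X[:, 0] * X[:, 2] / X[:, 1]                    # toy R0 = beta * c / gamma

        def prcc(X, y, j):
            """Rank-transform, regress out the other inputs, correlate residuals."""
            R = np.column_stack([rankdata(col) for col in X.T])
            ry = rankdata(y)
            Q = np.column_stack([np.ones(len(y)), np.delete(R, j, axis=1)])
            res_x = R[:, j] - Q @ np.linalg.lstsq(Q, R[:, j], rcond=None)[0]
            res_y = ry - Q @ np.linalg.lstsq(Q, ry, rcond=None)[0]
            return np.corrcoef(res_x, res_y)[0, 1]

        print([round(prcc(X, y, j), 2) for j in range(3)])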

  2. Studentized continuous wavelet transform (t-CWT) in the analysis of individual ERPs: real and simulated EEG data

    PubMed Central

    Real, Ruben G. L.; Kotchoubey, Boris; Kübler, Andrea

    2014-01-01

    This study aimed at evaluating the performance of the Studentized Continuous Wavelet Transform (t-CWT) as a method for the extraction and assessment of event-related brain potentials (ERP) in data from a single subject. Sensitivity, specificity, positive (PPV) and negative predictive values (NPV) of the t-CWT were assessed and compared to a variety of competing procedures using simulated EEG data at six low signal-to-noise ratios. Results show that the t-CWT combines high sensitivity and specificity with favorable PPV and NPV. Applying the t-CWT to authentic EEG data obtained from 14 healthy participants confirmed its high sensitivity. The t-CWT may thus be well suited for the assessment of weak ERPs in single-subject settings. PMID:25309308

  3. Studentized continuous wavelet transform (t-CWT) in the analysis of individual ERPs: real and simulated EEG data.

    PubMed

    Real, Ruben G L; Kotchoubey, Boris; Kübler, Andrea

    2014-01-01

    This study aimed at evaluating the performance of the Studentized Continuous Wavelet Transform (t-CWT) as a method for the extraction and assessment of event-related brain potentials (ERP) in data from a single subject. Sensitivity, specificity, positive (PPV) and negative predictive values (NPV) of the t-CWT were assessed and compared to a variety of competing procedures using simulated EEG data at six low signal-to-noise ratios. Results show that the t-CWT combines high sensitivity and specificity with favorable PPV and NPV. Applying the t-CWT to authentic EEG data obtained from 14 healthy participants confirmed its high sensitivity. The t-CWT may thus be well suited for the assessment of weak ERPs in single-subject settings.

  4. Online sample concentration and analysis of drugs of abuse in human urine by micelle to solvent stacking in capillary zone electrophoresis.

    PubMed

    Aturki, Zeineb; Fanali, Salvatore; Rocco, Anna

    2016-10-01

    A sensitive and rapid CZE-UV method was developed to determine drugs and their metabolites in human urine. Ten drugs of abuse were analyzed, including four amphetamines, cocaine, cocaethylene, heroin, morphine, 6-monoacetylmorphine, and 4-methylmethcathinone. A micelle to solvent stacking (MSS) approach was evaluated to enhance method sensitivity; this approach depends on the composition of the micellar sample solution matrix and the injection time. Several analytical conditions influencing the resolution of the drug mixture, such as pH, buffer concentration, and organic solvent content, were also investigated. Baseline separation of all studied analytes in a single run was achieved within 18 min in an uncoated fused-silica capillary (50 μm id × 60 cm) using a background solution containing 50 mM phosphate buffer, pH 2.5, and 30% v/v ACN. Other experimental parameters, such as the applied voltage and capillary temperature, were set at 20 kV and 20°C, respectively. LOD values ranging between 15 and 75 ng/mL were obtained for all studied compounds. Compared with conventional CZE, the proposed method provides an increase in sensitivity (39- to 55-fold enhancement factor). Under optimal MSS-CZE conditions, good linearity was achieved (R² up to 0.9998). The method was finally applied to the analysis of urine samples spiked with a standard mixture after sample pretreatment, reaching satisfactory recovery values. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. [Effectiveness of a clinimetric scale for diagnosing vulvovaginal candidosis].

    PubMed

    Reyna Figueroa, Jesús; Morales Rangel, Verónica; Ortiz Ibarra, Federico Javier; Casanova Román, Gerardo; Beltrán Zúñiga, Magdalena

    2004-05-01

    Vulvovaginal candidosis is one of the most frequent infections of the female genital tract. It is believed that 75% of women of reproductive age have suffered from vulvovaginal candidosis at least once in their life. The objective was to evaluate the effectiveness of a clinimetric scale based on clinical characteristics and risk factors for diagnosing vulvovaginal candidosis, and to establish its cut-off points. A questionnaire was elaborated by consensus, with questions about symptoms and risk factors relevant to the diagnosis. It was applied to women of reproductive age, alongside a cervicovaginal culture. The resulting questionnaire was evaluated by sensitivity analysis according to the criteria of Feinstein. Cut-off points were determined using ROC curves, together with sensitivity, specificity, and positive and negative predictive values. Univariate analysis was performed using the χ² test and Fisher's exact test. Likelihood ratios were used for the association of variables, and multivariate logistic regression analysis was performed to adjust for potential confounders. One hundred forty-two women answered the questionnaire; 39 (27%) had a positive isolation of Candida. Vulvar edema (OR 3.49, 95% CI 1.16-10.43, p = 0.02) and burning (OR 2.40, 95% CI 0.88-6.51, p = 0.08) were significant. None of the risk factors was statistically significant. A sensitivity of 76%, specificity of 38%, PPV of 32% and NPV of 81% were reported for the ≥ 60 cut-off score. The clinimetric scale is brief, valid and easy to apply. It can be used independently of the educational or cultural factors that limit the use of other diagnostic methods, such as vaginal discharge examination and cervicovaginal culture.

  6. Electronic Noses and Tongues in Wine Industry

    PubMed Central

    Rodríguez-Méndez, María L.; De Saja, José A.; González-Antón, Rocio; García-Hernández, Celia; Medina-Plaza, Cristina; García-Cabezón, Cristina; Martín-Pedrosa, Fernando

    2016-01-01

    The quality of wines is usually evaluated by a sensory panel formed of trained experts or traditional chemical analysis. Over the last few decades, electronic noses (e-noses) and electronic tongues have been developed to determine the quality of foods and beverages. They consist of arrays of sensors with cross-sensitivity, combined with pattern recognition software, which provide a fingerprint of the samples that can be used to discriminate or classify the samples. This holistic approach is inspired by the method used in mammals to recognize food through their senses. They have been widely applied to the analysis of wines, including quality control, aging control, or the detection of fraudulence, among others. In this paper, the current status of research and development in the field of e-noses and tongues applied to the analysis of wines is reviewed. Their potential applications in the wine industry are described. The review ends with a final comment about expected future developments. PMID:27826547

  7. Advances in Molecular Rotational Spectroscopy for Applied Science

    NASA Astrophysics Data System (ADS)

    Harris, Brent; Fields, Shelby S.; Pulliam, Robin; Muckle, Matt; Neill, Justin L.

    2017-06-01

    Advances in chemical sensitivity and robust, solid-state designs for microwave/millimeter-wave instrumentation compel the expansion of molecular rotational spectroscopy as a research tool into applied science. It is familiar to consider molecular rotational spectroscopy for air analysis. Those techniques are included in our presentation of a broader application space for materials analysis using Fourier transform molecular rotational resonance (FT-MRR) spectrometers. There are potentially transformative advantages for direct gas analysis of complex mixtures, determination of unknown evolved gases with parts-per-trillion detection limits in solid materials, and unambiguous chiral determination. The introduction of FT-MRR as an alternative detection principle for analytical chemistry has created a ripe research space for the development of new analytical methods and sampling equipment to fully enable FT-MRR. We present the current state of purpose-built FT-MRR instrumentation and the latest application measurements that make use of new sampling methods.

  8. Desorption atmospheric pressure photoionization and direct analysis in real time coupled with travelling wave ion mobility mass spectrometry.

    PubMed

    Räsänen, Riikka-Marjaana; Dwivedi, Prabha; Fernández, Facundo M; Kauppila, Tiina J

    2014-11-15

    Ambient mass spectrometry (MS) is a tool for screening analytes directly from sample surfaces. However, background impurities may complicate the spectra, and therefore fast separation techniques are needed. Here, we demonstrate the use of travelling wave ion mobility spectrometry in a comparative study of two ambient MS techniques. Desorption atmospheric pressure photoionization (DAPPI) and direct analysis in real time (DART) were coupled with travelling wave ion mobility mass spectrometry (TWIM-MS) for highly selective surface analysis. The ionization efficiencies of DAPPI and DART were compared. The test compounds were bisphenol A, benzo[a]pyrene, ranitidine, cortisol and α-tocopherol. DAPPI-MS and DART-TWIM-MS were also applied to the analysis of chloroquine from dried blood spots and α-tocopherol from almond surfaces, and DAPPI-TWIM-MS was applied to the analysis of pharmaceuticals and multivitamin tablets. DAPPI was approximately 100 times more sensitive than DART for bisphenol A and 10-20 times more sensitive for the other compounds. The limits of detection were between 30-290 fmol for DAPPI and 330-8200 fmol for DART. Also, for the authentic samples, DAPPI ionized chloroquine and α-tocopherol more efficiently than DART. The mobility separation enabled the detection of species with low signal intensities, e.g. thiamine and cholecalciferol, in the DAPPI-TWIM-MS analysis of multivitamin tablets. DAPPI ionized the studied compounds of interest more efficiently than DART. For both DAPPI and DART, the mobility separation prior to MS analysis reduced the amount of chemical noise in the mass spectrum and significantly increased the signal-to-noise ratio for the analytes. Copyright © 2014 John Wiley & Sons, Ltd.

  9. Direct differentiation of the quasi-incompressible fluid formulation of fluid-structure interaction using the PFEM

    NASA Astrophysics Data System (ADS)

    Zhu, Minjie; Scott, Michael H.

    2017-07-01

    Accurate and efficient response sensitivities for fluid-structure interaction (FSI) simulations are important for assessing the uncertain response of coastal and off-shore structures to hydrodynamic loading. To compute gradients efficiently via the direct differentiation method (DDM) for the fully incompressible fluid formulation, approximations of the sensitivity equations are necessary, leading to inaccuracies of the computed gradients when the geometry of the fluid mesh changes rapidly between successive time steps or the fluid viscosity is nonzero. To maintain accuracy of the sensitivity computations, a quasi-incompressible fluid is assumed for the response analysis of FSI using the particle finite element method and DDM is applied to this formulation, resulting in linearized equations for the response sensitivity that are consistent with those used to compute the response. Both the response and the response sensitivity can be solved using the same unified fractional step method. FSI simulations show that although the response using the quasi-incompressible and incompressible fluid formulations is similar, only the quasi-incompressible approach gives accurate response sensitivity for viscous, turbulent flows regardless of time step size.
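
    The core idea of the direct differentiation method, differentiating the discretized governing equations so that the sensitivity obeys a linearized version of the same time-stepping scheme used for the response, can be shown on a scalar model problem. This is a toy sketch (explicit Euler on du/dt = -ku + F(t)), not the paper's PFEM formulation.

        import numpy as np

        # Explicit Euler for du/dt = -k*u + F(t); the DDM sensitivity
        # s = du/dk satisfies the consistent linearization ds/dt = -k*s - u
        # and is marched in time with the same scheme as the response.
        k, dt, n = 0.8, 0.01, 1000
        u, s = 1.0, 0.0
        for i in range(n):
            F = np.sin(0.01 * i)
            u, s = u + dt * (-k * u + F), s + dt * (-k * s - u)

        # Verify against a central finite difference in k.
        def run(kk):
            uu = 1.0
            for i in range(n):
                uu += dt * (-kk * uu + np.sin(0.01 * i))
            return uu

        h = 1e-6
        print(s, (run(k + h) - run(k - h)) / (2 * h))  # should agree closely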

  10. Electrochemical Sensors for Clinic Analysis

    PubMed Central

    Wang, You; Xu, Hui; Zhang, Jianming; Li, Guang

    2008-01-01

    Demanded by modern medical diagnosis, advances in microfabrication technology have led to the development of fast, sensitive and selective electrochemical sensors for clinical analysis. This review addresses the principles behind electrochemical sensor design and fabrication, and introduces recent progress in the application of electrochemical sensors to the analysis of clinical chemicals such as blood gases, electrolytes, metabolites, DNA and antibodies, including basic and applied research. Miniaturized commercial electrochemical biosensors will form the basis of inexpensive and easy-to-use devices for acquiring chemical information, bringing sophisticated analytical capabilities to the non-specialist and general public alike in the future. PMID:27879810

  11. Purification of Derivatized Oligosaccharides by Solid Phase Extraction for Glycomic Analysis

    PubMed Central

    Zhang, Qiwei; Li, Henghui; Feng, Xiaojun; Liu, Bi-Feng; Liu, Xin

    2014-01-01

    Profiling of glycans released from proteins is complex and important. To enhance detection sensitivity, chemical derivatization is required for the analysis of carbohydrates. Due to the interference of excess reagents, a simple and reliable purification method is usually necessary for the derivatized oligosaccharides. Various SPE-based methods have been applied for the clean-up process. To demonstrate the differences among these methods, seven types of self-packed SPE cartridges were systematically compared in this study. The optimized conditions were determined for each type of cartridge, and microcrystalline cellulose was found to be the most appropriate SPE material for the purification of derivatized oligosaccharides. Normal-phase HPLC analysis of the derivatized maltoheptaose was realized with a detection limit of 0.12 pmol (S/N = 3) and a recovery over 70%. With the optimized SPE method, relative quantification analysis of N-glycans from model glycoproteins was carried out accurately, and over 40 N-glycans from human serum samples were determined, irrespective of isomers. Due to its high stability and sensitivity, the microcrystalline cellulose cartridge shows potential applications in glycomics analysis. PMID:24705408

  12. Monitoring of beer fermentation based on hybrid electronic tongue.

    PubMed

    Kutyła-Olesiuk, Anna; Zaborowski, Michał; Prokaryn, Piotr; Ciosek, Patrycja

    2012-10-01

    Monitoring of biotechnological processes, including fermentation, is extremely important because of the rapidly occurring changes in the composition of the samples during production. In the case of beer, the analysis of physicochemical parameters allows for the determination of the stage of the fermentation process and the control of its possible perturbations. As a tool to control the beer production process, a sensor array composed of potentiometric and voltammetric sensors (a so-called hybrid Electronic Tongue, h-ET) can be used. The aim of this study is to apply an electronic tongue system to distinguish samples obtained during alcoholic fermentation. The samples originate from a batch of home-made beer and from two stages of the process: fermentation and maturation. The applied sensor array consists of 10 miniaturized ion-selective electrodes (potentiometric ET) and silicon-based 3-electrode voltammetric transducers (voltammetric ET). The obtained results were processed using Partial Least Squares (PLS) and Partial Least Squares-Discriminant Analysis (PLS-DA). For potentiometric data, voltammetric data, and combined potentiometric and voltammetric data, classification ability was compared on the basis of Root Mean Squared Error (RMSE), sensitivity, specificity, and the F coefficient. It is shown that, in contrast to the separately used techniques, the developed hybrid system allowed for a better characterization of the beer samples. Data fusion in the hybrid ET yields better results both in qualitative analysis (RMSE, specificity, sensitivity) and in quantitative analysis (RMSE, R², a, b). Copyright © 2012 Elsevier B.V. All rights reserved.
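
    A minimal sketch of the data-fusion and PLS-DA step is given below, using scikit-learn's PLSRegression on invented potentiometric and voltammetric feature blocks; the sensor counts, class structure and 0.5 threshold are assumptions for illustration, not the authors' settings.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        # Invented fused data: each row concatenates potentiometric and
        # voltammetric responses; labels mark fermentation (0) vs maturation (1).
        rng = np.random.default_rng(2)
        X_pot = rng.normal(0, 1, (40, 10))    # 10 ion-selective electrodes
        X_vol = rng.normal(0, 1, (40, 30))    # voltammetric features
        y = np.repeat([0, 1], 20)
        X_pot[y == 1] += 0.8                  # stage-dependent shift
        X = np.hstack([X_pot, X_vol])         # data fusion (hybrid ET)

        # PLS-DA: PLS regression against the class label, thresholded at 0.5.
        pls = PLSRegression(n_components=2).fit(X, y)
        y_cont = pls.predict(X).ravel()
        y_hat = (y_cont > 0.5).astype(int)
        rmse = float(np.sqrt(np.mean((y_cont - y) ** 2)))
        sens = float(np.mean(y_hat[y == 1] == 1))
        spec = float(np.mean(y_hat[y == 0] == 0))
        print(rmse, sens, spec)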

  13. Computing the sensitivity of drag and lift in flow past a circular cylinder: Time-stepping versus self-consistent analysis

    NASA Astrophysics Data System (ADS)

    Meliga, Philippe

    2017-07-01

    We provide in-depth scrutiny of two methods making use of adjoint-based gradients to compute the sensitivity of drag in the two-dimensional, periodic flow past a circular cylinder (Re ≲ 189): first, the time-stepping analysis used in Meliga et al. [Phys. Fluids 26, 104101 (2014), 10.1063/1.4896941], which relies on classical Navier-Stokes modeling and determines the sensitivity to any generic control force from time-dependent adjoint equations marched backwards in time; and, second, a self-consistent approach building on the model of Mantič-Lugo et al. [Phys. Rev. Lett. 113, 084501 (2014), 10.1103/PhysRevLett.113.084501] to compute semilinear approximations of the sensitivity to the mean and fluctuating components of the force. Both approaches are applied to open-loop control by a small secondary cylinder and allow identifying the sensitive regions without knowledge of the controlled states. The theoretical predictions obtained by time-stepping analysis reproduce well the results obtained by direct numerical simulation of the two-cylinder system. So do the predictions obtained by self-consistent analysis, which corroborates the relevance of the approach as a guideline for efficient and systematic control design in the attempt to reduce drag, even though the Reynolds number is not close to the instability threshold and the oscillation amplitude is not small. This is because, unlike simpler approaches relying on linear stability analysis to predict the main features of the flow unsteadiness, the semilinear framework encompasses rigorously the effect of the control on the mean flow, as well as on the finite-amplitude fluctuation that feeds back nonlinearly onto the mean flow via the formation of Reynolds stresses. Such results are especially promising as the self-consistent approach determines the sensitivity from time-independent equations that can be solved iteratively, which makes it generally less computationally demanding. We ultimately discuss the extent to which relevant information can be gained from a hybrid modeling approach computing self-consistent sensitivities from the postprocessing of DNS data. Application to alternative control objectives, such as increasing the lift and alleviating the fluctuating drag and lift, is also discussed.

  14. Optimizing human activity patterns using global sensitivity analysis.

    PubMed

    Fairchild, Geoffrey; Hickmann, Kyle S; Mniszewski, Susan M; Del Valle, Sara Y; Hyman, James M

    2014-12-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule's regularity for a population. We show how to tune an activity's regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
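
    Sample entropy itself is compact enough to state in code. The sketch below is a plain-numpy variant (template length m, tolerance r times the standard deviation, Chebyshev distance); it is illustrative and not tuned for the long schedules used in the paper.

        import numpy as np

        def sample_entropy(x, m=2, r=0.2):
            """SampEn = -log(A/B): B counts template pairs of length m within
            tolerance r*std (Chebyshev distance), A the same for length m+1."""
            x = np.asarray(x, float)
            tol = r * x.std()
            def pairs(mm):
                t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
                d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)
                return (np.sum(d <= tol) - len(t)) / 2   # exclude self-matches
            return -np.log(pairs(m + 1) / pairs(m))

        rng = np.random.default_rng(3)
        print(sample_entropy(np.sin(np.linspace(0, 20 * np.pi, 400))))  # low: regular
        print(sample_entropy(rng.normal(size=400)))                     # high: irregular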

  15. Optimizing human activity patterns using global sensitivity analysis

    PubMed Central

    Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.

    2014-01-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations. PMID:25580080

  16. Optimizing human activity patterns using global sensitivity analysis

    DOE PAGES

    Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; ...

    2013-12-10

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.

  17. CXTFIT/Excel A modular adaptable code for parameter estimation, sensitivity analysis and uncertainty analysis for laboratory or field tracer experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Guoping; Mayes, Melanie; Parker, Jack C

    2010-01-01

    We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) could be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparison to a number of benchmarks with CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibility and advantages of CXTFIT/Excel. The VBA macros were designed for general purposes and could be used for any parameter estimation/model calibration when the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.
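
    The estimation machinery described above, weighted nonlinear least squares with a penalty term carrying prior information, can be sketched in a few lines of Python. The model below is a stand-in Gaussian pulse rather than a CDE solution, and the priors, weights and data are invented; in CXTFIT/Excel the equivalent steps run as VBA macros.

        import numpy as np
        from scipy.optimize import least_squares

        # Invented observations of a breakthrough-like curve
        # y(t) = c * exp(-(t - mu)^2 / (2 * sg^2)).
        t = np.linspace(0, 10, 60)
        rng = np.random.default_rng(4)
        sigma_obs = 0.02
        y = 1.0 * np.exp(-(t - 5.0) ** 2 / (2 * 1.2 ** 2)) \
            + rng.normal(0, sigma_obs, t.size)

        prior = np.array([0.8, 5.5, 1.0])       # prior parameter estimates
        prior_sd = np.array([0.5, 1.0, 0.5])    # their assumed uncertainty

        def residuals(p):
            model = p[0] * np.exp(-(t - p[1]) ** 2 / (2 * p[2] ** 2))
            r_obs = (model - y) / sigma_obs     # error-weighted residuals
            r_prior = (p - prior) / prior_sd    # penalty from prior information
            return np.concatenate([r_obs, r_prior])

        fit = least_squares(residuals, x0=prior)
        cov = np.linalg.inv(fit.jac.T @ fit.jac)   # linearized covariance
        print(fit.x, np.sqrt(np.diag(cov)))        # estimates and their s.d.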

  18. A value-based medicine cost-utility analysis of idiopathic epiretinal membrane surgery.

    PubMed

    Gupta, Omesh P; Brown, Gary C; Brown, Melissa M

    2008-05-01

    To perform a reference case, cost-utility analysis of epiretinal membrane (ERM) surgery using current literature on outcomes and complications. Computer-based, value-based medicine analysis. Decision analyses were performed under two scenarios: ERM surgery in better-seeing eye and ERM surgery in worse-seeing eye. The models applied long-term published data primarily from the Blue Mountains Eye Study and the Beaver Dam Eye Study. Visual acuity and major complications were derived from 25-gauge pars plana vitrectomy studies. Patient-based, time trade-off utility values, Markov modeling, sensitivity analysis, and net present value adjustments were used in the design and calculation of results. Main outcome measures included the number of discounted quality-adjusted-life-years (QALYs) gained and dollars spent per QALY gained. ERM surgery in the better-seeing eye compared with observation resulted in a mean gain of 0.755 discounted QALYs (3% annual rate) per patient treated. This model resulted in $4,680 per QALY for this procedure. When sensitivity analysis was performed, utility values varied from $6,245 to $3,746/QALY gained, medical costs varied from $3,510 to $5,850/QALY gained, and ERM recurrence rate increased to $5,524/QALY. ERM surgery in the worse-seeing eye compared with observation resulted in a mean gain of 0.27 discounted QALYs per patient treated. The $/QALY was $16,146 with a range of $20,183 to $12,110 based on sensitivity analyses. Utility values ranged from $21,520 to $12,916/QALY and ERM recurrence rate increased to $16,846/QALY based on sensitivity analysis. ERM surgery is a very cost-effective procedure when compared with other interventions across medical subspecialties.
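
    The headline figures follow from simple discounting arithmetic, sketched below. Only the 0.755 QALY gain, the 3% annual rate and the $4,680/QALY figure come from the abstract; everything else is illustrative.

        # Cost-utility arithmetic: dollars spent per discounted QALY gained.
        qaly_gain = 0.755        # discounted QALYs, better-seeing eye (abstract)
        cost_per_qaly = 4680     # USD/QALY reported for that scenario
        implied_cost = cost_per_qaly * qaly_gain
        print(f"implied net treatment cost ~ ${implied_cost:,.0f}")

        # Generic helper: present value of a constant annual utility gain.
        def discounted_qalys(annual_gain, years, rate=0.03):
            return sum(annual_gain / (1 + rate) ** t for t in range(1, years + 1))

        # e.g. an invented 0.06 utility gain sustained for 15 years
        print(discounted_qalys(0.06, 15))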

  19. Biotechnical use of polymerase chain reaction for microbiological analysis of biological samples.

    PubMed

    Lantz, P G; Abu al-Soud, W; Knutsson, R; Hahn-Hägerdal, B; Rådström, P

    2000-01-01

    Since its introduction in the mid-1980s, polymerase chain reaction (PCR) technology has been recognised as a rapid, sensitive and specific molecular diagnostic tool for the analysis of micro-organisms in clinical, environmental and food samples. Although this technique can be extremely effective with pure solutions of nucleic acids, its sensitivity may be reduced dramatically when applied directly to biological samples. This review describes PCR technology as a microbial detection method, PCR inhibitors in biological samples, and various sample preparation techniques that can be used to facilitate PCR detection by separating the micro-organisms from PCR inhibitors and/or concentrating the micro-organisms to detectable concentrations. Parts of this review are updated and based on a doctoral thesis by Lantz [1] and on a review discussing methods to overcome PCR inhibition in foods [2].

  20. Gold nanoparticle-sensitized quartz crystal microbalance sensor for rapid and highly selective determination of Cu(II) ions.

    PubMed

    Jin, Yulong; Huang, Yanyan; Liu, Guoquan; Zhao, Rui

    2013-09-21

    A novel quartz crystal microbalance (QCM) sensor for rapid, highly selective and sensitive detection of copper ions was developed. As a signal amplifier, gold nanoparticles (Au NPs) were self-assembled onto the surface of the sensor. A simple dip-and-dry method enabled the whole detection procedure to be accomplished within 20 min. High selectivity of the sensor towards copper ions is demonstrated by both individual and coexisting assays with interference ions. This gold nanoparticle mediated amplification allowed a detection limit down to 3.1 μM. Together with good repeatability and regeneration, the QCM sensor was also applied to the analysis of copper contamination in drinking water. This work provides a flexible method for fabricating QCM sensors for the analysis of important small molecules in environmental and biological samples.

  1. Numerical 3D flow simulation of attached cavitation structures at ultrasonic horn tips and statistical evaluation of flow aggressiveness via load collectives

    NASA Astrophysics Data System (ADS)

    Mottyll, S.; Skoda, R.

    2015-12-01

    A compressible inviscid flow solver with a barotropic cavitation model is applied to two different ultrasonic horn set-ups and compared to hydrophone, shadowgraphy and erosion test data. The statistical analysis of single collapse events in wall-adjacent flow regions allows the determination of the flow aggressiveness via load collectives (cumulative event rate vs collapse pressure), which show an exponential decrease in agreement with studies on hydrodynamic cavitation [1]. A post-processing projection of event rate and collapse pressure onto a reference grid reduces the grid dependency significantly. In order to evaluate the erosion-sensitive areas, a statistical analysis of transient wall loads is utilised. Predicted erosion-sensitive areas as well as the temporal pressure and vapour volume evolution are in good agreement with the experimental data.

  2. Ionic solution and nanoparticle assisted MALDI-MS as bacterial biosensors for rapid analysis of yogurt.

    PubMed

    Lee, Chia-Hsun; Gopal, Judy; Wu, Hui-Fen

    2012-01-15

    Bacterial analysis of food samples is a highly challenging task because food samples contain intensive interferences from proteins and carbohydrates. Three different conditions of yogurt were analyzed: (1) fresh yogurt immediately after purchase, (2) yogurt past its expiry date stored in the refrigerator and (3) yogurt left outside, without refrigeration. The shelf lives of the two yogurt brands were compared in terms of the decrease in bacterial signals. AB yogurt, which initially contained 10^9 cells/mL, drastically reduced to 10^7 cells/mL. However, Lin (Feng-Yin) yogurt, which when fresh had 10^8 cells/mL, showed no marked drop in bacterial count even two weeks beyond the expiry date. Conventional MALDI-MS analysis showed limited sensitivity for the analysis of yogurt bacteria amidst the complex milk proteins present in yogurt. A cost-effective ionic solution, CrO4^2-, was used to enable the successful, selective detection of bacterial signals (a 40-fold increase in sensitivity) without interference from the milk proteins. 0.035 mg of Ag nanoparticles (NPs) was also found to improve the detection of bacteria 2-6-fold in yogurt samples. The current approach can be further applied as a rapid, sensitive and effective platform for bacterial analysis of food. Copyright © 2011 Elsevier B.V. All rights reserved.

  3. Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models

    USGS Publications Warehouse

    Rakovec, O.; Hill, Mary C.; Clark, M.P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.

    2014-01-01

    This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based “local” methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative “bucket-style” hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
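
    The flavor of DELSA, derivative-based first-order indices evaluated at many points so that parameter importance becomes a distribution over the parameter space, can be sketched on a toy two-parameter model. The variance-weighted index used below is in the spirit of the method as summarized above; the model, prior variances and sampling ranges are invented, and the paper's exact formulation may differ.

        import numpy as np

        def model(theta):
            a, b = theta
            return a ** 2 + np.sin(b) + 0.3 * a * b   # toy response

        prior_var = np.array([0.25, 0.25])            # assumed parameter variances
        rng = np.random.default_rng(5)
        samples = rng.uniform([0, 0], [2, np.pi], size=(500, 2))

        h = 1e-5
        indices = []
        for theta in samples:
            grad = np.array([(model(theta + h * e) - model(theta - h * e)) / (2 * h)
                             for e in np.eye(2)])
            contrib = grad ** 2 * prior_var           # local first-order variances
            indices.append(contrib / contrib.sum())
        indices = np.array(indices)

        # Importance of the first parameter as a distribution, not one number.
        print(np.percentile(indices[:, 0], [10, 50, 90]))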

  4. Improved profiling of estrogen metabolites by orbitrap LC/MS

    PubMed Central

    Li, Xingnan; Franke, Adrian A.

    2015-01-01

    Estrogen metabolites are important biomarkers for evaluating cancer risks and metabolic diseases. Due to their low physiological levels, a sensitive and accurate method is required, especially for the quantitation of unconjugated forms of endogenous steroids and their metabolites in humans. Here, we evaluated various derivatives of estrogens for improved analysis by orbitrap LC/MS in human serum samples. A new chemical derivatization reagent was applied to modify phenolic steroids to form 1-methylimidazole-2-sulfonyl adducts. The method significantly improves the sensitivity, by 2- to 100-fold in full-scan MS and targeted selected ion monitoring MS, over other derivatization methods including dansyl, picolinoyl, and pyridine-3-sulfonyl products. PMID:25543003

  5. Comparison between Deflection and Vibration Characteristics of Rectangular and Trapezoidal profile Microcantilevers

    PubMed Central

    Ansari, Mohd. Zahid; Cho, Chongdu; Kim, Jooyong; Bang, Booun

    2009-01-01

    Arrays of microcantilevers are increasingly being used as physical, biological, and chemical sensors in various applications. To improve the sensitivity of microcantilever sensors, this study analyses and compares the deflection and vibration characteristics of rectangular- and trapezoidal-profile microcantilevers. Three models of each profile are investigated. The cantilevers are analyzed for maximum deflection, fundamental resonant frequency and maximum stress. The surface stress is modelled as an in-plane tensile force applied on the top edge of the microcantilevers. The commercial finite element analysis software ANSYS is used to analyze the designs. Results show that paddled trapezoidal-profile microcantilevers have better sensitivity. PMID:22574041

  6. Cost minimisation analysis of fingolimod vs natalizumab as a second line of treatment for relapsing-remitting multiple sclerosis.

    PubMed

    Crespo, C; Izquierdo, G; García-Ruiz, A; Granell, M; Brosa, M

    2014-05-01

    At present, there is a lack of economic assessments of second-line treatments for relapsing-remitting multiple sclerosis. The aim of this study was to compare the efficiency of fingolimod and natalizumab in Spain. A cost minimisation analysis model was developed for a 2-year horizon. The same relapse rate was applied to both treatment arms, and the cost of resources was calculated using Spain's stipulated rates for 2012 in euros. The analysis was conducted from the perspective of Spain's national health system, and an annual discount rate of 3% was applied to future costs. A sensitivity analysis was performed to validate the robustness of the model. Indirect comparison of fingolimod with natalizumab revealed no significant differences (hazard ratio between 0.82 and 1.07). The total direct cost, considering a 2-year analytical horizon, a 7.5% discount stipulated by Royal Decree, and a mean annual relapse rate of 0.22, was €40,914.72 for fingolimod and €45,890.53 for natalizumab. Of the total direct costs analysed, the maximum cost savings derived from prescribing fingolimod were €4,363.63, corresponding to lower administration and treatment maintenance costs. Based on the sensitivity analysis performed, fingolimod use was associated with average savings of 11% (range 3.1%-18.7%). Fingolimod is more efficient than natalizumab as a second-line treatment option for relapsing-remitting multiple sclerosis, and it generates savings for the Spanish national health system. Copyright © 2012 Sociedad Española de Neurología. Published by Elsevier Espana. All rights reserved.

  7. Investigating NWP initialization sensitivities in heavy precipitation events

    NASA Astrophysics Data System (ADS)

    Frediani, M. E. B.; Anagnostou, E. N.; Papadopoulos, A.

    2010-09-01

    This study aims to investigate the effect of different types of model initialization applied to simulations of extreme storms. Storms with extreme precipitation can produce flash floods that cause extensive damage to society. Lives and property are destroyed by the resulting landslides, losses that could be spared if the events were forecast a few hours in advance. The forecasts depend on several factors; among them, the initialization fields play an important role. These fields are the starting point for the simulation and therefore control the quality of the forecast. This study evaluates the sensitivities of WRF to the initialization from two perspectives: (1) resolution and (2) initial atmospheric fields. Two storms that led to flash floods are simulated. The first happened in Northeast Italy on 04/09/2009 (NI), and the second in Germany on 02/06/2008 (GE). These storms present contrasting characteristics: NI was a maritime-originated storm enhanced by local orography, while GE was a typical summer convective event. Three different sources of atmospheric fields defining the initial conditions are applied: (a) ECMWF operational analysis at a resolution of 0.25 deg, (b) GFS operational analysis at 0.5 deg and (c) LAPS analysis at ~15 km, produced operationally at HCMR. The forecast rainfall is compared against in situ ground radar and surface rain gauge observations through a set of quantitative precipitation forecast scores.

  8. Textural and Mineralogical Analysis of Volcanic Rocks by µ-XRF Mapping.

    PubMed

    Germinario, Luigi; Cossio, Roberto; Maritan, Lara; Borghi, Alessandro; Mazzoli, Claudio

    2016-06-01

    In this study, µ-XRF was applied as a novel surface technique for quick acquisition of elemental X-ray maps of rocks, image analysis of which provides quantitative information on texture and rock-forming minerals. Bench-top µ-XRF is cost-effective, fast, and non-destructive, can be applied to both large (up to a few tens of cm) and fragile samples, and yields major and trace element analysis with good sensitivity. Here, X-ray mapping was performed with a resolution of 103.5 µm and spot size of 30 µm over sample areas of about 5×4 cm of Euganean trachyte, a volcanic porphyritic rock from the Euganean Hills (NE Italy) traditionally used in cultural heritage. The relative abundance of phenocrysts and groundmass, as well as the size and shape of the various mineral phases, were obtained from image analysis of the elemental maps. The quantified petrographic features allowed identification of various extraction sites, revealing an objective method for archaeometric provenance studies exploiting µ-XRF imaging.

  9. Application of linear discriminant analysis and Attenuated Total Reflectance Fourier Transform Infrared microspectroscopy for diagnosis of colon cancer.

    PubMed

    Khanmohammadi, Mohammadreza; Bagheri Garmarudi, Amir; Samani, Simin; Ghasemi, Keyvan; Ashuri, Ahmad

    2011-06-01

    Attenuated Total Reflectance Fourier Transform Infrared (ATR-FTIR) microspectroscopy was applied for the detection of colon cancer according to the spectral features of colon tissues. Supervised classification models can be trained to identify the tissue type based on the spectroscopic fingerprint. A total of 78 colon tissues were used in the spectroscopy studies. Major spectral differences were observed in the 1,740-900 cm⁻¹ spectral region. Several chemometric methods such as analysis of variance (ANOVA), cluster analysis (CA) and linear discriminant analysis (LDA) were applied for the classification of IR spectra. Utilizing the chemometric techniques, clear and reproducible differences were observed between the spectra of normal and cancer cases, suggesting that infrared microspectroscopy in conjunction with spectral data processing would be useful for diagnostic classification. Using the LDA technique, the spectra were classified into cancer and normal tissue classes with an accuracy of 95.8%. The sensitivity and specificity were 100 and 93.1%, respectively.

  10. Development of a PCR/LDR/capillary electrophoresis assay with potential for the detection of a beta-thalassemia fetal mutation in maternal plasma.

    PubMed

    Yi, Ping; Chen, Zhuqin; Yu, Lili; Zheng, Yingru; Liu, Guodong; Xie, Haichang; Zhou, Yuanguo; Zheng, Xiuhui; Han, Jian; Li, Li

    2010-08-01

    Analysis of fetal DNA in maternal plasma has recently been introduced for non-invasive prenatal diagnosis. We have now investigated the feasibility of polymerase chain reaction (PCR)/ligase detection reaction (LDR)/capillary electrophoresis for the detection of fetal point mutations, such as the beta-thalassemia mutation IVS2 654(C→T), in maternal plasma DNA. The sensitivity of LDR/capillary electrophoresis was examined by quantifying the mutant PCR products in the presence of a vast excess of non-mutant competitor template, a situation that mimics the detection of rare fetal mutations in the presence of excess maternal DNA. PCR/LDR/capillary electrophoresis was applied to detect the mutation IVS2 654(C→T) in an experimental model at different sensitivity levels and in 10 maternal plasma samples. Our results demonstrated that this approach detects a low-abundance IVS2 654(C→T) mutation with a sensitivity of approximately 1:10,000. The approach was applied to maternal plasma DNA to detect the paternally inherited fetal IVS2 654(C→T) mutation, and the results were equivalent to those obtained by PCR/reverse dot blot of amniotic fluid cell DNA. PCR/LDR/capillary electrophoresis has a very high sensitivity, can distinguish low-abundance single nucleotide differences, and can detect paternally inherited fetal point mutations in maternal plasma.

  11. Vestibular (dys)function in children with sensorineural hearing loss: a systematic review.

    PubMed

    Verbecque, Evi; Marijnissen, Tessa; De Belder, Niels; Van Rompaey, Vincent; Boudewyns, An; Van de Heyning, Paul; Vereeck, Luc; Hallemans, Ann

    2017-06-01

    The objective of this study is to provide an overview of the prevalence of vestibular dysfunction in children with sensorineural hearing loss (SNHL), classified according to the applied test and its corresponding sensitivity and specificity. Data were gathered using a systematic search query including reference screening. PubMed, Web of Science and Embase were searched. The strategy and reporting of this review were based on the Meta-analysis of Observational Studies in Epidemiology (MOOSE) guidelines. Methodological quality was assessed with the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist. All studies, regardless of the applied vestibular test, showed that vestibular function differs significantly between children with hearing loss and those with normal hearing (p < 0.05). Compared with caloric testing, the sensitivity of the Rotational Chair Test (RCT) varies between 61 and 80% and the specificity between 21 and 80%, whereas these were, respectively, 71-100% and 30-100% for cervical Vestibular Evoked Myogenic Potentials (cVEMP). Compared with RCT, the sensitivity and specificity were 88-100% and 69-100% for the Dynamic Visual Acuity test, 67-100% and 71-100% for the (video) Head Impulse Test, and 83% and 86% for the ocular VEMP. Currently, owing to methodological shortcomings, the quality of evidence on the sensitivity and specificity of vestibular tests ranges from unknown to moderate. Future research should focus on adequate sample sizes (subgroups >30).

  12. Deconvolution improves the accuracy and depth sensitivity of time-resolved measurements

    NASA Astrophysics Data System (ADS)

    Diop, Mamadou; St. Lawrence, Keith

    2013-03-01

    Time-resolved (TR) techniques have the potential to distinguish early- from late-arriving photons. Since light travelling through superficial tissue is detected earlier than photons that penetrate the deeper layers, time-windowing can in principle be used to improve the depth sensitivity of TR measurements. However, TR measurements also contain instrument contributions - referred to as the instrument response function (IRF) - which cause temporal broadening of the measured temporal point spread function (TPSF). In this report, we investigate the influence of the IRF on pathlength-resolved absorption changes (Δμa) retrieved from TR measurements using the microscopic Beer-Lambert law (MBLL). TPSFs were acquired on homogeneous and two-layer tissue-mimicking phantoms with varying optical properties. The measured IRF and TPSFs were deconvolved to recover the distribution of time-of-flights (DTOFs) of the detected photons. The microscopic Beer-Lambert law was applied to early and late time-windows of the TPSFs and DTOFs to assess the effects of the IRF on pathlength-resolved Δμa. The analysis showed that the late part of the TPSFs contains substantial contributions from early-arriving photons, due to the smearing effects of the IRF, which reduces its sensitivity to absorption changes occurring in deep layers. We also demonstrate that the effects of the IRF can be efficiently eliminated by applying a robust deconvolution technique, thereby improving the accuracy and sensitivity of TR measurements to deep-tissue absorption changes.
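
    A toy version of the deconvolution step is sketched below: a synthetic DTOF is blurred by a synthetic IRF, and a regularized (Wiener-style) FFT division recovers it. Real IRFs and TPSFs, and the robust technique referenced above, come from the instrument and the paper, not from this sketch.

        import numpy as np

        t = np.arange(256)
        dtof = np.where(t >= 60, np.exp(-(t - 60) / 25.0), 0.0)   # idealized DTOF
        irf = np.exp(-0.5 * ((t - 20) / 4.0) ** 2)                # synthetic IRF
        irf /= irf.sum()

        rng = np.random.default_rng(6)
        tpsf = np.convolve(dtof, irf)[:256] + rng.normal(0, 1e-3, 256)

        # Regularized FFT deconvolution: the division is stabilized by eps so
        # noise-dominated high frequencies are not amplified.
        D, H = np.fft.rfft(tpsf), np.fft.rfft(irf)
        eps = 1e-2 * np.max(np.abs(H)) ** 2
        recovered = np.fft.irfft(D * np.conj(H) / (np.abs(H) ** 2 + eps), n=256)
        print(np.argmax(dtof), np.argmax(recovered))  # peak positions should agree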

  13. Enhancing coronary Wave Intensity Analysis robustness by high order central finite differences

    PubMed Central

    Rivolo, Simone; Asrress, Kaleab N.; Chiribiri, Amedeo; Sammut, Eva; Wesolowski, Roman; Bloch, Lars Ø.; Grøndal, Anne K.; Hønge, Jesper L.; Kim, Won Y.; Marber, Michael; Redwood, Simon; Nagel, Eike; Smith, Nicolas P.; Lee, Jack

    2014-01-01

    Background Coronary Wave Intensity Analysis (cWIA) is a technique capable of separating the effects of proximal arterial haemodynamics from cardiac mechanics. Studies have identified WIA-derived indices that are closely correlated with several disease processes and predictive of functional recovery following myocardial infarction. The cWIA clinical application has, however, been limited by technical challenges including a lack of standardization across different studies and the derived indices' sensitivity to the processing parameters. Specifically, a critical step in WIA is the noise removal for evaluation of derivatives of the acquired signals, typically performed by applying a Savitzky–Golay filter, to reduce the high frequency acquisition noise. Methods The impact of the filter parameter selection on cWIA output, and on the derived clinical metrics (integral areas and peaks of the major waves), is first analysed. The sensitivity analysis is performed either by using the filter as a differentiator to calculate the signals' time derivative or by applying the filter to smooth the ensemble-averaged waveforms. Furthermore, the power-spectrum of the ensemble-averaged waveforms contains little high-frequency components, which motivated us to propose an alternative approach to compute the time derivatives of the acquired waveforms using a central finite difference scheme. Results and Conclusion The cWIA output and consequently the derived clinical metrics are significantly affected by the filter parameters, irrespective of its use as a smoothing filter or a differentiator. The proposed approach is parameter-free and, when applied to the 10 in-vivo human datasets and the 50 in-vivo animal datasets, enhances the cWIA robustness by significantly reducing the outcome variability (by 60%). PMID:25187852
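
    The contrast between the two differentiation routes is easy to reproduce on a synthetic waveform: a Savitzky-Golay differentiator needs a window length and polynomial order, while a high-order central difference has no free parameters. The waveform below is invented, not clinical data.

        import numpy as np
        from scipy.signal import savgol_filter

        fs = 1000.0
        t = np.arange(0, 1, 1 / fs)
        p = np.sin(2 * np.pi * 2 * t) + 0.3 * np.sin(2 * np.pi * 8 * t)
        exact = (2 * np.pi * 2 * np.cos(2 * np.pi * 2 * t)
                 + 0.3 * 2 * np.pi * 8 * np.cos(2 * np.pi * 8 * t))

        # Savitzky-Golay as a differentiator: the result depends on the
        # window length and polynomial order chosen.
        dp_sg = savgol_filter(p, window_length=51, polyorder=3, deriv=1, delta=1 / fs)

        # Parameter-free 4th-order central finite difference.
        dp_fd = (-np.roll(p, -2) + 8 * np.roll(p, -1)
                 - 8 * np.roll(p, 1) + np.roll(p, 2)) * fs / 12

        inner = slice(2, -2)   # np.roll wraps around; ignore the edges
        print(np.max(np.abs(dp_sg[inner] - exact[inner])),
              np.max(np.abs(dp_fd[inner] - exact[inner])))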

  14. Sensitivity analysis of the DRAINWAT model applied to an agricultural watershed in the lower coastal plain, North Carolina, USA

    Treesearch

    Hyunwoo Kim; Devendra M. Amatya; Stephen W. Broome; Dean L. Hesterberg; Minha Choi

    2011-01-01

    DRAINWAT, a DRAINMOD-based watershed-scale model, was selected for hydrological modelling to obtain water table depths and drainage outflows at Open Grounds Farm in Carteret County, North Carolina, USA. Six simulated storm events from the study period were compared with the measured data and analysed. Simulation results from the whole study period and selected rainfall...

  15. Thermal loading in the laser holography nondestructive testing of a composite structure

    NASA Technical Reports Server (NTRS)

    Liu, H. K.; Kurtz, R. L.

    1975-01-01

    A laser holographic interferometry method that has variable sensitivity to surface deformation was applied to the investigation of composite test samples under thermal loading. A successful attempt was made to detect debonds in a fiberglass-epoxy-ceramic plate. Experimental results are presented along with the mathematical analysis of the physical model of the thermal loading and current conduction in the composite material.

  16. Sensitive zone parameters and curvature radius evaluation for polymer optical fiber curvature sensors

    NASA Astrophysics Data System (ADS)

    Leal-Junior, Arnaldo G.; Frizera, Anselmo; José Pontes, Maria

    2018-03-01

    Polymer optical fibers (POFs) are suitable for sensing applications such as curvature, strain, temperature and liquid level, among others. However, to enhance sensitivity, many intensity-variation-based POF curvature sensors require a lateral section. The lateral section length, depth, and surface roughness have a great influence on the sensor's sensitivity, hysteresis, and linearity. Moreover, the sensor's curvature radius affects the stress on the fiber, which leads to variation in the sensor's behavior. This paper presents an analysis relating the curvature radius and the lateral section length, depth and surface roughness to the sensitivity, hysteresis and linearity of a POF curvature sensor. Results show a strong correlation between these design parameters and the performance of sensor applications based on intensity variation. Furthermore, there is a trade-off between the sensitive zone length, depth, surface roughness, and curvature radius on one hand and the desired sensor performance parameters on the other: minimum hysteresis, maximum sensitivity, and maximum linearity. Optimization of these parameters yields a sensor with a sensitivity of 20.9 mV/°, linearity of 0.9992 and hysteresis below 1%, a better performance than that of the sensor without the optimization.

  17. Application of the adverse outcome pathway (AOP) concept to structure the available in vivo and in vitro mechanistic data for allergic sensitization to food proteins.

    PubMed

    van Bilsen, Jolanda H M; Sienkiewicz-Szłapka, Edyta; Lozano-Ojalvo, Daniel; Willemsen, Linette E M; Antunes, Celia M; Molina, Elena; Smit, Joost J; Wróblewska, Barbara; Wichers, Harry J; Knol, Edward F; Ladics, Gregory S; Pieters, Raymond H H; Denery-Papini, Sandra; Vissers, Yvonne M; Bavaro, Simona L; Larré, Colette; Verhoeckx, Kitty C M; Roggen, Erwin L

    2017-01-01

    The introduction of whole new foods in a population may lead to sensitization and food allergy. This constitutes a potential public health problem and a challenge to risk assessors and managers, as the existing understanding of the pathophysiological processes and the currently available biological tools for predicting the risk of food allergy development and the severity of the reaction are not sufficient. There is a substantial body of in vivo and in vitro data describing molecular and cellular events potentially involved in food sensitization. However, these events have not been organized in a sequence of related events that is plausible to result in sensitization, and useful to challenge current hypotheses. The aim of this manuscript was to collect and structure the current mechanistic understanding of sensitization induction to food proteins by applying the concept of adverse outcome pathway (AOP). The proposed AOP for food sensitization is based on information on molecular and cellular mechanisms and pathways evidenced to be involved in sensitization by food and food proteins, and uses the AOPs for chemical skin sensitization and respiratory sensitization induction as templates. Available mechanistic data on protein respiratory sensitization were included to fill gaps in the understanding of how proteins may affect cells, cell-cell interactions and tissue homeostasis. Analysis revealed several key events (KEs) and biomarkers that may have potential use in the testing and assessment of proteins for their sensitizing potential. The application of the AOP concept to structure mechanistic in vivo and in vitro knowledge has made it possible to identify a number of methods, each addressing a specific KE, that provide information about the food allergenic potential of new proteins. When applied in the context of an integrated strategy, these methods may reduce, if not replace, current animal testing approaches. The proposed AOP will be shared on the www.aopwiki.org platform to expand the mechanistic data, improve the confidence in each of the proposed KEs and key event relationships (KERs), and allow for the identification of new, or refinement of established, KEs and KERs.

  18. The National Map: Benefits at what cost?

    USGS Publications Warehouse

    Halsing, D.L.; Theissen, K.M.; Bernknopf, R.L.

    2004-01-01

    The U.S. Geological Survey has conducted a cost-benefit analysis of The National Map and determined that, over its 30-year projected lifespan, the project will likely bring a net present value of benefits to society of $2.05 billion. The National Map enhances the United States' ability to access, integrate, and apply geospatial data at global, national, and local scales. This paper gives an overview of the underlying economic model for evaluating program benefits and presents the primary findings, together with a sensitivity analysis assessing the robustness of the results.

  19. Semi-micro high-performance liquid chromatographic analysis of tiropramide in human plasma using column-switching.

    PubMed

    Baek, Soo Kyoung; Lee, Seung Seok; Park, Eun Jeon; Sohn, Dong Hwan; Lee, Hye Suk

    2003-02-05

    A rapid and sensitive column-switching semi-micro high-performance liquid chromatography method was developed for the direct analysis of tiropramide in human plasma. The plasma sample (100 µl) was directly injected onto a Capcell Pak MF Ph-1 precolumn, where deproteinization and analyte fractionation occurred. Tiropramide was then eluted into an enrichment column (Capcell Pak UG C(18)) using acetonitrile-potassium phosphate (pH 7.0, 50 mM) (12:88, v/v) and was analyzed on a semi-micro C(18) analytical column using acetonitrile-potassium phosphate (pH 7.0, 10 mM) (50:50, v/v). The method showed excellent sensitivity (limit of quantification 5 ng/ml) and good precision (C.V.

  20. Relative performance of academic departments using DEA with sensitivity analysis.

    PubMed

    Tyagi, Preeti; Yadav, Shiv Prasad; Singh, S P

    2009-05-01

    The process of liberalization and globalization of the Indian economy has brought new opportunities and challenges in all areas of human endeavor, including education. Educational institutions have to adopt new strategies to make the best use of the opportunities and counter the challenges. One of these challenges is how to assess the performance of academic programs based on multiple criteria. Keeping this in view, this paper attempts to evaluate the performance efficiencies of 19 academic departments of IIT Roorkee (India) through the data envelopment analysis (DEA) technique. The technique has been used to assess the performance of academic institutions in a number of countries, such as the USA, UK and Australia, but to the best of our knowledge this is its first use in the Indian context. Applying DEA models, we calculate technical, pure technical and scale efficiencies and identify the reference sets for inefficient departments. Input and output projections are also suggested for inefficient departments to reach the frontier. Overall performance, research performance and teaching performance are assessed separately using sensitivity analysis.

  1. Analysis of imazaquin in soybeans by solid-phase extraction and high-performance liquid chromatography.

    PubMed

    Guo, C; Hu, J-Y; Chen, X-Y; Li, J-Z

    2008-02-01

    An analytical method for the determination of imazaquin residues in soybeans was developed. The developed liquid/liquid partition and strong anion exchange solid-phase extraction procedures provide effective cleanup, removing the greatest number of sample matrix interferences. By optimizing the pH of the water/acetonitrile mobile phase with phosphoric acid, using a C-18 reverse-phase chromatographic column and employing ultraviolet detection, excellent peak resolution was achieved. The combined cleanup and chromatographic method steps reported herein were sensitive and reliable for determining imazaquin residues in soybean samples. This method is characterized by recovery >88.4%, precision <6.7% CV, and a sensitivity of 0.005 ppm, in agreement with directives for method validation in residue analysis. Imazaquin residues in soybeans were further confirmed by high-performance liquid chromatography-mass spectrometry (LC-MS). The proposed method was successfully applied to the analysis of imazaquin residues in soybean samples grown in an experimental field after treatments with an imazaquin formulation.

  2. A methodology for global-sensitivity analysis of time-dependent outputs in systems biology modelling.

    PubMed

    Sumner, T; Shephard, E; Bogle, I D L

    2012-09-07

    One of the main challenges in the development of mathematical and computational models of biological systems is the precise estimation of parameter values. Understanding the effects of uncertainties in parameter values on model behaviour is crucial to the successful use of these models. Global sensitivity analysis (SA) can be used to quantify the variability in model predictions resulting from the uncertainty in multiple parameters and to shed light on the biological mechanisms driving system behaviour. We present a new methodology for global SA in systems biology which is computationally efficient and can be used to identify the key parameters and their interactions which drive the dynamic behaviour of a complex biological model. The approach combines functional principal component analysis with established global SA techniques. The methodology is applied to a model of the insulin signalling pathway, defects of which are a major cause of type 2 diabetes, and a number of key features of the system are identified.

  3. Further analysis of subtypes of automatically reinforced SIB: a replication and quantitative analysis of published datasets

    PubMed Central

    Hagopian, Louis P.; Rooker, Griffin W.; Zarcone, Jennifer R.; Bonner, Andrew C.; Arevalo, Alexander R.

    2017-01-01

    Hagopian, Rooker, and Zarcone (2015) evaluated a model for subtyping automatically reinforced self-injurious behavior (SIB) based on its sensitivity to changes in functional analysis conditions and the presence of self-restraint. The current study tested the generality of the model by applying it to all datasets of automatically reinforced SIB published from 1982 to 2015. We identified 49 datasets that included sufficient data to permit subtyping. Similar to the original study, Subtype-1 SIB was generally amenable to treatment using reinforcement alone, whereas Subtype-2 SIB was not. Conclusions could not be drawn about Subtype-3 SIB due to the small number of datasets. Nevertheless, the findings support the generality of the model and suggest that sensitivity of SIB to disruption by alternative reinforcement is an important dimension of automatically reinforced SIB. Findings also suggest that automatically reinforced SIB should no longer be considered a single category and that additional research is needed to better understand and treat Subtype-2 SIB. PMID:28032344

  4. Forest Disturbance Analysis with LANDSAT-8 Oli Data Related to a Parametric Wind Field: a Case Study for Typhoon Rammasun (201409)

    NASA Astrophysics Data System (ADS)

    Tan, C.; Fang, W.

    2018-04-01

    Forest disturbance induced by tropical cyclones often has significant and profound effects on the structure and function of forest ecosystems. Detection and analysis of post-disaster forest disturbance based on remote sensing technology has been widely applied. At present, further quantitative analysis of the magnitude of forest disturbance in relation to typhoon intensity is needed. In this study, taking super typhoon Rammasun (201409) as a case, we analysed the sensitivity of four commonly used remote sensing indices and explored the relationship between remote sensing index and corresponding wind speed based on pre- and post-event Landsat-8 OLI (Operational Land Imager) images and a parameterized wind field model. The results showed that NBR is the most sensitive index for detecting forest disturbance induced by Typhoon Rammasun and that the variation of NBR has a significant linear relationship with the simulated 3-second gust wind speed.

  5. Utilization of the ex vivo LLNA: BrdU-ELISA to distinguish the sensitizers from irritants in respect of 3 end points-lymphocyte proliferation, ear swelling, and cytokine profiles.

    PubMed

    Arancioglu, Seren; Ulker, Ozge Cemiloglu; Karakaya, Asuman

    2015-01-01

    Dermal exposure to chemicals may result in allergic or irritant contact dermatitis. In this study, we performed the ex vivo local lymph node assay: bromodeoxyuridine-enzyme-linked immunosorbent assay (LLNA: BrdU-ELISA) to compare the irritation and sensitization potency of selected chemicals with respect to 3 end points: lymphocyte proliferation, cytokine profiles (interleukin 2 [IL-2], interferon-γ [IFN-γ], IL-4, IL-5, IL-1, and tumor necrosis factor α [TNF-α]), and ear swelling. Different concentrations of the following well-known sensitizers and irritants were applied to mice: dinitrochlorobenzene, eugenol, isoeugenol, sodium lauryl sulfate (SLS), and croton oil. According to the lymph node results, the auricular lymph node weights and lymph node cell counts increased after application of both sensitizers and irritants at high concentrations. On the other hand, according to the lymph node cell proliferation results, there was a 3-fold increase in the proliferation of lymph node cells (stimulation index) for the sensitizer chemicals and SLS at the applied concentrations, whereas there was no 3-fold increase for croton oil or the negative control. SLS gave a false-positive response. Cytokine analysis demonstrated that 4 cytokines, IL-2, IFN-γ, IL-4, and IL-5, were released in lymph node cell cultures with a clear dose trend for sensitizers, whereas only TNF-α was released in response to irritants. Taken together, our results suggest that the ex vivo LLNA: BrdU-ELISA method can be useful for discriminating irritants and allergens. © The Author(s) 2015.

  6. Sensitivity analysis in practice: providing an uncertainty budget when applying supplement 1 to the GUM

    NASA Astrophysics Data System (ADS)

    Allard, Alexandre; Fischer, Nicolas

    2018-06-01

    Sensitivity analysis associated with the evaluation of measurement uncertainty is a very important tool for the metrologist, enabling them to provide an uncertainty budget and to gain a better understanding of the measurand and the underlying measurement process. Using the GUM uncertainty framework, the contribution of an input quantity to the variance of the output quantity is obtained through so-called ‘sensitivity coefficients’. In contrast, such coefficients are no longer computed in cases where a Monte-Carlo method is used. In such a case, supplement 1 to the GUM suggests varying the input quantities one at a time, which is not an efficient method and may provide incorrect contributions to the variance in cases where significant interactions arise. This paper proposes different methods for the elaboration of the uncertainty budget associated with a Monte Carlo method. An application to the mass calibration example described in supplement 1 to the GUM is performed with the corresponding R code for implementation. Finally, guidance is given for choosing a method, including suggestions for a future revision of supplement 1 to the GUM.
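
    As one concrete way to build such a budget, the sketch below estimates first-order variance contributions with a Saltelli-style pick-freeze estimator; the three-input model is a hypothetical stand-in, not the GUM-S1 mass calibration model, and this is only one of the budgeting approaches the paper discusses:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def model(x):
        # Hypothetical measurement model y = f(x1, x2, x3), standing in
        # for e.g. the GUM-S1 mass calibration model.
        return x[:, 0] + 2.0 * x[:, 1] + x[:, 0] * x[:, 2]

    N, k = 100_000, 3
    A = rng.normal(0.0, 1.0, (N, k))   # first independent input sample
    B = rng.normal(0.0, 1.0, (N, k))   # second independent input sample
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]))

    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]            # replace only input i
        # Saltelli-style first-order index: share of Var(Y) due to x_i alone
        S_i = np.mean(yB * (model(ABi) - yA)) / var_y
        print(f"S_{i+1} ~ {S_i:.3f}")
    ```

    For this toy model the indices converge to about 0.17, 0.67 and 0, with the remaining variance share coming from the x1-x3 interaction, which is exactly the kind of contribution a one-at-a-time variation would misattribute.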

  7. A GC-MS method for the detection and quantitation of ten major drugs of abuse in human hair samples.

    PubMed

    Orfanidis, A; Mastrogianni, O; Koukou, A; Psarros, G; Gika, H; Theodoridis, G; Raikos, N

    2017-03-15

    A sensitive analytical method has been developed to identify and quantify major drugs of abuse (DOA), namely morphine, codeine, 6-monoacetylmorphine, cocaine, ecgonine methyl ester, benzoylecgonine, amphetamine, methamphetamine, methylenedioxymethamphetamine and methylenedioxyamphetamine, in human hair. Hair samples were extracted with methanol under ultrasonication at 50°C after a three-step rinsing process to remove external contamination and dirt from the hair. Derivatization with BSTFA was selected to increase the detection sensitivity of the GC/MS analysis. Optimization of the derivatization was based on experiments for the selection of derivatization time, temperature and volume of derivatizing agent. Validation of the method included evaluation of linearity, which ranged from 2 to 350 ng/mg of hair for all DOA, as well as sensitivity, accuracy, precision and repeatability. Limits of detection ranged from 0.05 to 0.46 ng/mg of hair. The method was applied to hair samples obtained from three human subjects, which were found positive for cocaine and opiates. Published by Elsevier B.V.

  8. A general method for handling missing binary outcome data in randomized controlled trials

    PubMed Central

    Jackson, Dan; White, Ian R; Mason, Dan; Sutton, Stephen

    2014-01-01

    Aims: The analysis of randomized controlled trials with incomplete binary outcome data is challenging. We develop a general method for exploring the impact of missing data in such trials, with a focus on abstinence outcomes. Design: We propose a sensitivity analysis where standard analyses, which could include 'missing = smoking' and 'last observation carried forward', are embedded in a wider class of models. Setting: We apply our general method to data from two smoking cessation trials. Participants: A total of 489 and 1758 participants from two smoking cessation trials. Measurements: The abstinence outcomes were obtained using telephone interviews. Findings: The estimated intervention effects from both trials depend on the sensitivity parameters used. The findings differ considerably in magnitude and statistical significance under quite extreme assumptions about the missing data, but are reasonably consistent under more moderate assumptions. Conclusions: A new method for undertaking sensitivity analyses when handling missing data in trials with binary outcomes allows a wide range of assumptions about the missing data to be assessed. In two smoking cessation trials the results were insensitive to all but extreme assumptions. PMID:25171441
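
    The flavour of embedding standard analyses in a wider class can be sketched with a single informative-missingness parameter; the simple ratio parametrisation below is a stand-in for the authors' more general model:

    ```python
    import numpy as np

    def estimate_abstinence(outcomes, delta):
        """Abstinence rate under an informative-missingness assumption.

        outcomes : array with 1 = abstinent, 0 = smoking, np.nan = missing
        delta    : assumed ratio of the abstinence probability among
                   non-responders to that among responders;
                   delta = 0 reproduces 'missing = smoking',
                   delta = 1 assumes missingness is ignorable.
        """
        outcomes = np.asarray(outcomes, dtype=float)
        observed = outcomes[~np.isnan(outcomes)]
        p_obs = observed.mean()
        p_mis = min(1.0, delta * p_obs)       # assumed rate among missing
        w_mis = np.isnan(outcomes).mean()
        return (1 - w_mis) * p_obs + w_mis * p_mis

    data = np.array([1, 0, 0, 1, np.nan, 0, 1, np.nan, 0, 0])
    for delta in (0.0, 0.5, 1.0):             # sensitivity parameter sweep
        print(delta, round(estimate_abstinence(data, delta), 3))
    ```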

  9. Elastic critical moment for bisymmetric steel profiles and its sensitivity by the finite difference method

    NASA Astrophysics Data System (ADS)

    Kamiński, M.; Supeł, Ł.

    2016-02-01

    It is widely known that lateral-torsional buckling of members under bending, together with the warping restraints of their cross-sections, is crucial for estimating the safety and durability of steel structures. Although engineering codes for steel and aluminum structures support the designer with additional analytical expressions that depend on the boundary conditions and internal force diagrams, one may alternatively apply the traditional Finite Element or Finite Difference Methods (FEM, FDM) to determine the so-called critical moment representing this phenomenon. The principal purpose of this work is to compare three different ways of determining the critical moment, also in the context of structural sensitivity analysis with respect to the structural element length. Sensitivity gradients are determined here both analytically and by the central finite difference scheme, and are contrasted for the analytical, FEM and FDM approaches. A computational study is provided for the entire family of steel I- and H-beams available to practitioners in this area, and forms a basis for further stochastic reliability analysis as well as durability prediction including possible corrosion progress.
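
    The central finite difference scheme used for the sensitivity gradients is easy to sketch; the toy critical-moment function below is a hypothetical stand-in for the FEM/FDM solver:

    ```python
    def central_diff_sensitivity(f, x, h=1e-6):
        """Central-difference estimate of df/dx; here f would wrap a
        critical-moment computation and x would be the element length."""
        return (f(x + h) - f(x - h)) / (2.0 * h)

    # Toy stand-in with a 1/L-type decay (the real f(L) is a numerical model):
    M_cr = lambda L: 250.0 / L + 40.0 / L**3
    print(central_diff_sensitivity(M_cr, 6.0))   # dM_cr/dL at L = 6 m
    ```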

  10. Regionalising MUSLE factors for application to a data-scarce catchment

    NASA Astrophysics Data System (ADS)

    Gwapedza, David; Slaughter, Andrew; Hughes, Denis; Mantel, Sukhmani

    2018-04-01

    The estimation of soil loss and sediment transport is important for effective management of catchments. A model for semi-arid catchments in southern Africa has been developed; however, simplification of the model parameters and further testing are required. Soil loss is calculated through the Modified Universal Soil Loss Equation (MUSLE). The aims of the current study were to: (1) regionalise the MUSLE erodibility factors; and (2) perform a sensitivity analysis and validate the soil loss outputs against independently estimated measures. The regionalisation was developed using Geographic Information Systems (GIS) coverages. The model was applied to a high-erosion semi-arid region in the Eastern Cape, South Africa. Sensitivity analysis indicated that model outputs were most sensitive to the vegetation cover factor. The simulated soil loss estimates of 40 t ha-1 yr-1 were within the range of estimates from previous studies. The outcome of the present research is a framework for parameter estimation for the MUSLE through regionalisation, as part of the ongoing development of a model that can estimate soil loss and sediment delivery at broad spatial and temporal scales.
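
    For reference, a sketch of the event soil-loss computation in the Williams (1975) MUSLE form popularised by SWAT; whether the study's model includes, for example, the coarse-fragment factor is an assumption here:

    ```python
    def musle_soil_loss(q_surf_mm, q_peak_m3s, area_ha, K, LS, C, P, cfrg=1.0):
        """Event soil loss (metric tons) via MUSLE in the Williams (1975)
        form popularised by SWAT:
            sed = 11.8 * (Q_surf * q_peak * A)**0.56 * K * LS * C * P * CFRG
        K, LS, C and P are the USLE erodibility, slope length-steepness,
        cover and practice factors that the study regionalises from GIS."""
        runoff_energy = (q_surf_mm * q_peak_m3s * area_ha) ** 0.56
        return 11.8 * runoff_energy * K * LS * C * P * cfrg
    ```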

  11. A pattern-mixture model approach for handling missing continuous outcome data in longitudinal cluster randomized trials.

    PubMed

    Fiero, Mallorie H; Hsu, Chiu-Hsieh; Bell, Melanie L

    2017-11-20

    We extend the pattern-mixture approach to handle missing continuous outcome data in longitudinal cluster randomized trials, which randomize groups of individuals to treatment arms, rather than the individuals themselves. Individuals who drop out at the same time point are grouped into the same dropout pattern. We approach extrapolation of the pattern-mixture model by applying multilevel multiple imputation, which imputes missing values while appropriately accounting for the hierarchical data structure found in cluster randomized trials. To assess parameters of interest under various missing data assumptions, imputed values are multiplied by a sensitivity parameter, k, which increases or decreases imputed values. Using simulated data, we show that estimates of parameters of interest can vary widely under differing missing data assumptions. We conduct a sensitivity analysis using real data from a cluster randomized trial by increasing k until the treatment effect inference changes. By performing a sensitivity analysis for missing data, researchers can assess whether certain missing data assumptions are reasonable for their cluster randomized trial. Copyright © 2017 John Wiley & Sons, Ltd.
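
    A minimal sketch of the k-sweep tipping-point idea follows; the pooling is deliberately naive (Rubin's rules are omitted) and the function names are hypothetical:

    ```python
    import numpy as np

    def tipping_point(imputed_sets, k_grid, fit):
        """Sweep the sensitivity parameter k, rescaling only the imputed
        values in each multiply-imputed data set, and report the first k
        at which the treatment-effect inference changes.

        imputed_sets : list of (y, is_imputed, arm) tuples, one per imputation
        fit          : function (y, arm) -> (estimate, significant_bool)
        """
        for k in k_grid:
            estimates, significant = [], []
            for y, is_imputed, arm in imputed_sets:
                y_k = np.where(is_imputed, k * y, y)  # adjust imputed values only
                est, sig = fit(y_k, arm)
                estimates.append(est)
                significant.append(sig)
            if not all(significant):                  # inference has changed
                return k, float(np.mean(estimates))   # naive pooling for a sketch
        return None, float(np.mean(estimates))
    ```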

  12. Multi-scale Modeling of the Impact Response of a Strain Rate Sensitive High-Manganese Austenitic Steel

    NASA Astrophysics Data System (ADS)

    Önal, Orkun; Ozmenci, Cemre; Canadinc, Demircan

    2014-09-01

    A multi-scale modeling approach was applied to predict the impact response of a strain rate sensitive high-manganese austenitic steel. The roles of texture, geometry and strain rate sensitivity were successfully taken into account all at once by coupling crystal plasticity and finite element (FE) analysis. Specifically, crystal plasticity was utilized to obtain the multi-axial flow rule at different strain rates based on the experimental deformation response under uniaxial tensile loading. The equivalent stress - equivalent strain response was then incorporated into the FE model for the sake of a more representative hardening rule under impact loading. The current results demonstrate that reliable predictions can be obtained by proper coupling of crystal plasticity and FE analysis even if the experimental flow rule of the material is acquired under uniaxial loading and at moderate strain rates that are significantly slower than those attained during impact loading. Furthermore, the current findings also demonstrate the need for an experiment-based multi-scale modeling approach for the sake of reliable predictions of the impact response.

  13. Emulation and Sobol' sensitivity analysis of an atmospheric dispersion model applied to the Fukushima nuclear accident

    NASA Astrophysics Data System (ADS)

    Girard, Sylvain; Mallet, Vivien; Korsakissok, Irène; Mathieu, Anne

    2016-04-01

    Simulations of the atmospheric dispersion of radionuclides involve large uncertainties originating from the limited knowledge of meteorological input data, composition, amount and timing of emissions, and some model parameters. The estimation of these uncertainties is an essential complement to modeling for decision making in case of an accidental release. We have studied the relative influence of a set of uncertain inputs on several outputs from the Eulerian model Polyphemus/Polair3D on the Fukushima case. We chose to use the variance-based sensitivity analysis method of Sobol'. This method requires a large number of model evaluations which was not achievable directly due to the high computational cost of Polyphemus/Polair3D. To circumvent this issue, we built a mathematical approximation of the model using Gaussian process emulation. We observed that aggregated outputs are mainly driven by the amount of emitted radionuclides, while local outputs are mostly sensitive to wind perturbations. The release height is notably influential, but only in the vicinity of the source. Finally, averaging either spatially or temporally tends to cancel out interactions between uncertain inputs.
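
    The emulation step can be sketched with an off-the-shelf Gaussian process; the design size, kernel and stand-in output below are illustrative, the real study emulating Polyphemus/Polair3D outputs instead:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    # Fit a cheap surrogate to a handful of expensive runs, then spend the
    # Sobol' sample budget on the emulator instead of the dispersion model.
    rng = np.random.default_rng(0)
    X_design = rng.uniform(0, 1, (60, 4))           # 60 "expensive" model runs
    y_design = np.sin(3 * X_design[:, 0]) + X_design[:, 1] ** 2  # stand-in output

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X_design, y_design)

    X_mc = rng.uniform(0, 1, (100_000, 4))          # large Monte Carlo sample
    y_mc, y_sd = gp.predict(X_mc, return_std=True)  # near-free emulator calls
    ```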

  14. High order statistical signatures from source-driven measurements of subcritical fissile systems

    NASA Astrophysics Data System (ADS)

    Mattingly, John Kelly

    1998-11-01

    This research focuses on the development and application of high order statistical analyses applied to measurements performed with subcritical fissile systems driven by an introduced neutron source. The signatures presented are derived from counting statistics of the introduced source and radiation detectors that observe the response of the fissile system. It is demonstrated that successively higher order counting statistics possess progressively higher sensitivity to reactivity. Consequently, these signatures are more sensitive to changes in the composition, fissile mass, and configuration of the fissile assembly. Furthermore, it is shown that these techniques are capable of distinguishing the response of the fissile system to the introduced source from its response to any internal or inherent sources. This ability combined with the enhanced sensitivity of higher order signatures indicates that these techniques will be of significant utility in a variety of applications. Potential applications include enhanced radiation signature identification of weapons components for nuclear disarmament and safeguards applications and augmented nondestructive analysis of spent nuclear fuel. In general, these techniques expand present capabilities in the analysis of subcritical measurements.

  15. Sensitivity of Rabbit Ventricular Action Potential and Ca2+ Dynamics to Small Variations in Membrane Currents and Ion Diffusion Coefficients

    PubMed Central

    Lo, Yuan Hung; Peachey, Tom; Abramson, David; McCulloch, Andrew

    2013-01-01

    Little is known about how small variations in ionic currents and Ca2+ and Na+ diffusion coefficients impact action potential and Ca2+ dynamics in rabbit ventricular myocytes. We applied sensitivity analysis to quantify the sensitivity of the Shannon et al. model (Biophys. J., 2004) to 5%–10% changes in current conductances, channel distributions, and ion diffusion in rabbit ventricular cells. We found that action potential duration and Ca2+ peaks are highly sensitive to a 10% increase in L-type Ca2+ current; moderately influenced by 10% increases in the Na+-Ca2+ exchanger, Na+-K+ pump, rapid delayed and slow transient outward K+ currents, and Cl− background current; and insensitive to 10% increases in all other ionic currents and sarcoplasmic reticulum Ca2+ fluxes. Cell electrical activity is strongly affected by a 5% shift of L-type Ca2+ channels and the Na+-Ca2+ exchanger between junctional and submembrane spaces, while redistribution of Ca2+-activated Cl− channels has a modest effect. Small changes in submembrane and cytosolic diffusion coefficients for Ca2+, but not in Na+ transfer, may notably alter myocyte contraction. Our studies highlight the need for more precise measurements and for further extension and testing of the Shannon et al. model. Our results demonstrate the usefulness of sensitivity analysis to identify specific knowledge gaps and controversies related to ventricular cell electrophysiology and Ca2+ signaling. PMID:24222910
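
    Protocol-wise this is a one-at-a-time perturbation scan; a generic sketch, with the model function and parameter dictionary as placeholders for a myocyte model such as Shannon et al.:

    ```python
    def oat_ranking(model, params, rel_change=0.10):
        """One-at-a-time scan: perturb each parameter by a fixed percentage
        and record the relative change in a scalar output such as action
        potential duration. `model` maps a parameter dict to that output."""
        base = model(params)
        effects = {}
        for name, value in params.items():
            bumped = dict(params, **{name: value * (1.0 + rel_change)})
            effects[name] = (model(bumped) - base) / base
        # Rank parameters by the magnitude of their effect on the output
        return sorted(effects.items(), key=lambda kv: abs(kv[1]), reverse=True)
    ```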

  16. Sensitive analytical method for simultaneous analysis of some vasoconstrictors with highly overlapped analytical signals

    NASA Astrophysics Data System (ADS)

    Nikolić, G. S.; Žerajić, S.; Cakić, M.

    2011-10-01

    Multivariate calibration is a powerful mathematical tool that can be applied in analytical chemistry when analytical signals are highly overlapped. A method using partial least squares regression is proposed for the simultaneous spectrophotometric determination of adrenergic vasoconstrictors in a decongestive solution containing two active components: phenylephrine hydrochloride and trimazoline hydrochloride. These sympathomimetic agents are frequently associated in pharmaceutical formulations against the common cold. The proposed method, which is simple and rapid, offers the advantages of sensitivity and a wide range of determinations without the need for extraction of the vasoconstrictors. To minimize the number of factors necessary to obtain the calibration matrix by multivariate calibration, different parameters were evaluated. The adequate selection of spectral regions proved to be important for the number of factors. In order to simultaneously quantify both hydrochlorides among excipients, the spectral region between 250 and 290 nm was selected. Recovery for the vasoconstrictors was 98-101%. The developed method was applied to the assay of two decongestive pharmaceutical preparations.
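
    The calibration step can be sketched with a standard PLS implementation; the spectra and concentration arrays below are random placeholders for real calibration standards, and the number of latent factors would be tuned as the paper describes:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # X: absorbance spectra of calibration mixtures over 250-290 nm;
    # Y: known concentrations of the two hydrochlorides in each mixture.
    rng = np.random.default_rng(0)
    X = rng.random((30, 41))        # placeholder spectra (30 standards, 41 wavelengths)
    Y = rng.random((30, 2))         # placeholder concentration pairs

    pls = PLSRegression(n_components=3)   # number of latent factors to optimise
    pls.fit(X, Y)
    conc_unknown = pls.predict(X[:1])     # predict both analytes at once
    ```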

  17. Applications of automatic differentiation in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Carle, A.; Bischof, C.; Haigler, Kara J.; Newman, Perry A.

    1994-01-01

    Automatic differentiation (AD) is a powerful computational method for computing exact sensitivity derivatives (SD) from existing computer programs for multidisciplinary design optimization (MDO) or sensitivity analysis. A pre-compiler AD tool for FORTRAN programs called ADIFOR has been developed. The ADIFOR tool has been easily and quickly applied by NASA Langley researchers to assess the feasibility and computational impact of AD in MDO with several different FORTRAN programs, including a state-of-the-art three-dimensional multigrid Navier-Stokes flow solver for wings or aircraft configurations in transonic turbulent flow. With ADIFOR the user specifies sets of independent and dependent variables within an existing computer code. ADIFOR then traces the dependency path throughout the code, applies the chain rule to formulate derivative expressions, and generates new code to compute the required SD matrix. The resulting codes have been verified to compute exact non-geometric and geometric SD for a variety of cases, in less time than is required to compute the SD matrix using centered divided differences.
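
    ADIFOR is a source-to-source Fortran tool, but the chain-rule propagation it mechanises can be illustrated with a tiny forward-mode sketch using dual numbers (an illustration of the principle, not ADIFOR's implementation):

    ```python
    class Dual:
        """Forward-mode AD value x + eps*dx with eps**2 = 0."""
        def __init__(self, x, dx=0.0):
            self.x, self.dx = x, dx
        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.x + other.x, self.dx + other.dx)
        __radd__ = __add__
        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.x * other.x,
                        self.x * other.dx + self.dx * other.x)  # product rule
        __rmul__ = __mul__

    def f(u):
        # Any composition of supported operations; f'(u) = 6*u + 1
        return 3 * u * u + u

    print(f(Dual(2.0, 1.0)).dx)   # exact derivative at u = 2: 13.0
    ```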

  18. HERBICIDE SENSITIVITY OF ECHINOCHLOA CRUS-GALLI POPULATIONS: A COMPARISON BETWEEN CROPPING SYSTEMS.

    PubMed

    Claerhout, S; De Cauwer, B; Reheul, D

    2014-01-01

    Echinochloa crus-galli populations exhibit high morphological variability, and their response to herbicides varies from field to field. This differential response could reflect differences in selection pressure caused by years of cropping-system-related herbicide usage. This study investigates the relation between the herbicide sensitivity of Echinochloa crus-galli populations and the cropping system to which they were subjected. Herbicide sensitivity was evaluated for populations collected from 18 fields representing three cropping systems: (1) a long-term organic cropping system, (2) a conventional cropping system with corn in crop rotation, and (3) a conventional cropping system with long-term monoculture of corn. Each cropping system was represented by 6 E. crus-galli populations. All fields were located on sandy soils. Dose-response pot experiments were conducted in the greenhouse to assess the effectiveness of three foliar-applied corn herbicides, nicosulfuron (ALS inhibitor), cycloxydim (ACCase inhibitor) and topramezone (HPPD inhibitor), and two soil-applied corn herbicides, S-metolachlor and dimethenamid-P (both VLCFA inhibitors). Foliar-applied herbicides were tested at a quarter, half and the full recommended dose and were applied at the three-true-leaf stage. Soil-applied herbicides were tested within a dose range of 0-22.5 g a.i. ha(-1) for S-metolachlor and 0-45 g a.i. ha(-1) for dimethenamid-P, and were applied immediately after sowing the radicle-emerged seeds. All experiments were performed twice. The foliage dry weight per pot was determined four weeks after treatment, and plant responses to herbicides were expressed as biomass reduction (%, relative to the untreated control). Sensitivity to foliar-applied herbicides varied among cropping systems. Compared to populations from monoculture corn fields, populations originating from organic fields were significantly more sensitive to cycloxydim, topramezone and nicosulfuron (by 5.3%, 5.9% and 12.3%, respectively). Populations from the conventional crop rotation system showed intermediate sensitivity levels. In contrast to foliar-applied herbicides, the effectiveness of soil-applied herbicides was not affected by cropping system. Integrated weed management may be necessary to preserve herbicide efficacy in the long term.

  19. Second-Order Sensitivity Analysis of Uncollided Particle Contributions to Radiation Detector Responses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cacuci, Dan G.; Favorite, Jeffrey A.

    This work presents an application of Cacuci’s Second-Order Adjoint Sensitivity Analysis Methodology (2nd-ASAM) to the simplified Boltzmann equation that models the transport of uncollided particles through a medium to compute efficiently and exactly all of the first- and second-order derivatives (sensitivities) of a detector’s response with respect to the system’s isotopic number densities, microscopic cross sections, source emission rates, and detector response function. The off-the-shelf PARTISN multigroup discrete ordinates code is employed to solve the equations underlying the 2nd-ASAM. The accuracy of the results produced using PARTISN is verified by using the results of three test configurations: (1) a homogeneous sphere, for which the response is the exactly known total uncollided leakage, (2) a multiregion two-dimensional (r-z) cylinder, and (3) a two-region sphere for which the response is a reaction rate. For the homogeneous sphere, results for the total leakage as well as for the respective first- and second-order sensitivities are in excellent agreement with the exact benchmark values. For the nonanalytic problems, the results obtained by applying the 2nd-ASAM to compute sensitivities are in excellent agreement with central-difference estimates. The efficiency of the 2nd-ASAM is underscored by the fact that, for the cylinder, only 12 adjoint PARTISN computations were required by the 2nd-ASAM to compute all of the benchmark’s 18 first-order sensitivities and 224 second-order sensitivities, in contrast to the 877 PARTISN calculations needed to compute the respective sensitivities using central finite differences, and this number does not include the additional calculations that were required to find appropriate values of the perturbations to use for the central differences.

  20. Using sensitivity analysis in model calibration efforts

    USGS Publications Warehouse

    Tiedeman, Claire; Hill, Mary C.

    2003-01-01

    In models of natural and engineered systems, sensitivity analysis can be used to assess relations among system state observations, model parameters, and model predictions. The model itself links these three entities, and model sensitivities can be used to quantify the links. Sensitivities are defined as the derivatives of simulated quantities (such as simulated equivalents of observations, or model predictions) with respect to model parameters. We present four measures calculated from model sensitivities that quantify the observation-parameter-prediction links and that are especially useful during the calibration and prediction phases of modeling. These four measures are composite scaled sensitivities (CSS), prediction scaled sensitivities (PSS), the value of improved information (VOII) statistic, and the observation prediction (OPR) statistic. These measures can be used to help guide initial calibration of models, collection of field data beneficial to model predictions, and recalibration of models updated with new field information. Once model sensitivities have been calculated, each of the four measures requires minimal computational effort. We apply the four measures to a three-layer MODFLOW-2000 (Harbaugh et al., 2000; Hill et al., 2000) model of the Death Valley regional ground-water flow system (DVRFS), located in southern Nevada and California. D’Agnese et al. (1997, 1999) developed and calibrated the model using nonlinear regression methods. Figure 1 shows some of the observations, parameters, and predictions for the DVRFS model. Observed quantities include hydraulic heads and spring flows. The 23 defined model parameters include hydraulic conductivities, vertical anisotropies, recharge rates, evapotranspiration rates, and pumpage. Predictions of interest for this regional-scale model are advective transport paths from potential contamination sites underlying the Nevada Test Site and Yucca Mountain.
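
    A sketch of the composite scaled sensitivity computation, assuming the usual Hill-and-Tiedeman scaling of the Jacobian by parameter values and observation weights (the exact conventions of the DVRFS application may differ):

    ```python
    import numpy as np

    def composite_scaled_sensitivity(J, b, w):
        """Composite scaled sensitivity per parameter (after Hill and
        Tiedeman): dss_ij = (dy_i/db_j) * b_j * sqrt(w_i), and
        css_j = sqrt(mean_i(dss_ij**2)).

        J : (n_obs, n_par) sensitivity matrix dy_i/db_j from the model
        b : (n_par,) parameter values; w : (n_obs,) observation weights
        """
        dss = J * b[np.newaxis, :] * np.sqrt(w)[:, np.newaxis]
        return np.sqrt(np.mean(dss ** 2, axis=0))
    ```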

  1. Second-Order Sensitivity Analysis of Uncollided Particle Contributions to Radiation Detector Responses

    DOE PAGES

    Cacuci, Dan G.; Favorite, Jeffrey A.

    2018-04-06

    This work presents an application of Cacuci’s Second-Order Adjoint Sensitivity Analysis Methodology (2nd-ASAM) to the simplified Boltzmann equation that models the transport of uncollided particles through a medium to compute efficiently and exactly all of the first- and second-order derivatives (sensitivities) of a detector’s response with respect to the system’s isotopic number densities, microscopic cross sections, source emission rates, and detector response function. The off-the-shelf PARTISN multigroup discrete ordinates code is employed to solve the equations underlying the 2nd-ASAM. The accuracy of the results produced using PARTISN is verified by using the results of three test configurations: (1) a homogeneous sphere, for which the response is the exactly known total uncollided leakage, (2) a multiregion two-dimensional (r-z) cylinder, and (3) a two-region sphere for which the response is a reaction rate. For the homogeneous sphere, results for the total leakage as well as for the respective first- and second-order sensitivities are in excellent agreement with the exact benchmark values. For the nonanalytic problems, the results obtained by applying the 2nd-ASAM to compute sensitivities are in excellent agreement with central-difference estimates. The efficiency of the 2nd-ASAM is underscored by the fact that, for the cylinder, only 12 adjoint PARTISN computations were required by the 2nd-ASAM to compute all of the benchmark’s 18 first-order sensitivities and 224 second-order sensitivities, in contrast to the 877 PARTISN calculations needed to compute the respective sensitivities using central finite differences, and this number does not include the additional calculations that were required to find appropriate values of the perturbations to use for the central differences.

  2. Using Real-time Event Tracking Sensitivity Analysis to Overcome Sensor Measurement Uncertainties of Geo-Information Management in Drilling Disasters

    NASA Astrophysics Data System (ADS)

    Tavakoli, S.; Poslad, S.; Fruhwirth, R.; Winter, M.

    2012-04-01

    This paper introduces an application of a novel EventTracker platform for instantaneous sensitivity analysis (SA) of large-scale real-time geo-information. Earth disaster management systems demand high-quality information to aid a quick and timely response to their evolving environments. The idea behind the proposed EventTracker platform is the assumption that modern information management systems are able to capture data in real time and have the technological flexibility to adjust their services to work with specific sources of data/information. To assure this adaptation in real time, however, the online data should be collected, interpreted, and translated into corrective actions in a concise and timely manner. This can hardly be handled by existing sensitivity analysis methods, because they rely on historical data and lazy processing algorithms. In event-driven systems, the effect of system inputs on the system state is of value, as events could cause this state to change. This 'event triggering' situation underpins the logic of the proposed approach. The event-tracking sensitivity analysis method describes the system variables and states as a collection of events. The higher the occurrence of an input variable during the triggering of an event, the greater its potential impact on the final analysis of the system state. Experiments were designed to compare the proposed event-tracking sensitivity analysis with an existing entropy-based sensitivity analysis method. The results showed a 10% improvement in computational efficiency with no compromise in accuracy, and the computational time to perform the sensitivity analysis was 0.5% of that required by the entropy-based method. The proposed method has been applied to real-world data in the context of preventing emerging crises at drilling rigs. One of the major purposes of such rigs is to drill boreholes to explore oil or gas reservoirs, with the final aim of recovering their contents, both onshore and offshore. Drilling a well is always guided by technical, economic and security constraints to protect crew, equipment and the environment from injury, damage and pollution. Although risk assessment and local practice provide a high degree of security, uncertainty arises from the behaviour of the formation, which may cause crucial situations at the rig. To overcome such uncertainties, real-time sensor measurements form a basis for predicting and thus preventing such crises; the proposed method supports the identification of the data necessary for that purpose.

  3. Generalized Linear Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Carpenter, James R.; Markley, F. Landis

    2014-01-01

    This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  4. Quantitative sensory testing response patterns to capsaicin- and ultraviolet-B–induced local skin hypersensitization in healthy subjects: a machine-learned analysis

    PubMed Central

    Lötsch, Jörn; Geisslinger, Gerd; Heinemann, Sarah; Lerch, Florian; Oertel, Bruno G.; Ultsch, Alfred

    2018-01-01

    The comprehensive assessment of pain-related human phenotypes requires combinations of nociceptive measures that produce complex high-dimensional data, posing challenges to bioinformatic analysis. In this study, we assessed established experimental models of heat hyperalgesia of the skin, consisting of local ultraviolet-B (UV-B) irradiation or capsaicin application, in 82 healthy subjects using a variety of noxious stimuli. We extended the original heat stimulation by applying cold and mechanical stimuli and assessing the hypersensitization effects with a clinically established quantitative sensory testing (QST) battery (German Research Network on Neuropathic Pain). This study provided a 246 × 10-sized data matrix (82 subjects assessed at baseline, following UV-B application, and following capsaicin application) with respect to 10 QST parameters, which we analyzed using machine-learning techniques. We observed statistically significant effects of the hypersensitization treatments in 9 different QST parameters. Supervised machine-learned analysis implemented as random forests followed by ABC analysis pointed to heat pain thresholds as the most relevantly affected QST parameter. However, decision tree analysis indicated that UV-B additionally modulated sensitivity to cold. Unsupervised machine-learning techniques, implemented as emergent self-organizing maps, hinted at subgroups responding to topical application of capsaicin. The distinction among subgroups was based on sensitivity to pressure pain, which could be attributed to sex differences, with women being more sensitive than men. Thus, while UV-B and capsaicin share a major component of heat pain sensitization, they differ in their effects on QST parameter patterns in healthy subjects, suggesting a lack of redundancy between these models. PMID:28700537

  5. Towards simplification of hydrologic modeling: Identification of dominant processes

    USGS Publications Warehouse

    Markstrom, Steven; Hay, Lauren E.; Clark, Martyn P.

    2016-01-01

    The Precipitation–Runoff Modeling System (PRMS), a distributed-parameter hydrologic model, has been applied to the conterminous US (CONUS). Parameter sensitivity analysis was used to identify: (1) the sensitive input parameters and (2) particular model output variables that could be associated with the dominant hydrologic process(es). Sensitivity values of 35 PRMS calibration parameters were computed using the Fourier amplitude sensitivity test procedure on 110 000 independent hydrologically based spatial modeling units covering the CONUS, and then summarized by process (snowmelt, surface runoff, infiltration, soil moisture, evapotranspiration, interflow, baseflow, and runoff) and model performance statistic (mean, coefficient of variation, and autoregressive lag 1). The identified parameters and processes provide insight into model performance at the location of each unit and allow the modeler to identify the most dominant process on the basis of which processes are associated with the most sensitive parameters. The results of this study indicate that: (1) the choice of performance statistic and output variables has a strong influence on parameter sensitivity, (2) the apparent model complexity to the modeler can be reduced by focusing on those processes that are associated with sensitive parameters and disregarding those that are not, (3) different processes require different numbers of parameters for simulation, and (4) some sensitive parameters influence only one hydrologic process, while others may influence many.

  6. Quantitative relationship between the local lymph node assay and human skin sensitization assays.

    PubMed

    Schneider, K; Akkan, Z

    2004-06-01

    The local lymph node assay (LLNA) is a new test method which allows for the quantitative assessment of sensitizing potency in the mouse. Here, we investigate the quantitative correlation between results from the LLNA and two human sensitization tests, specifically human repeat insult patch tests (HRIPTs) and human maximization tests (HMTs). Data for 57 substances were evaluated, of which 46 showed skin sensitizing properties in human tests, whereas 11 yielded negative results in humans. For better comparability, data from mouse and human tests were transformed to applied doses per skin area, which ranged over four orders of magnitude for the substances considered. Regression analysis for the 46 human sensitizing substances revealed a significant positive correlation between the LLNA and the human tests. The correlation was better between LLNA and HRIPT data (n=23; r=0.77) than between LLNA and HMT data (n=38; r=0.65). The observed scattering of the data points is related to various uncertainties, in part associated with insufficiencies of data from older HMT studies. Predominantly negative results in the LLNA for the 11 substances which showed no skin sensitizing activity in human maximization tests further corroborate the correspondence between the LLNA and human tests. Based on this analysis, the LLNA can be considered a reliable basis for relative potency assessments for skin sensitizers. Proposals are made for the regulatory exploitation of the LLNA: four potency groups can be established, and assignment of substances to these groups according to the outcome of the LLNA can be used to characterize skin sensitizing potency in substance-specific assessments. Moreover, based on these potency groups, a more adequate consideration of sensitizing substances in preparations becomes possible. It is proposed to replace the current single concentration limit for skin sensitizers in preparations, which leads to an all-or-nothing classification of a preparation as sensitizing to skin ("R43") in the European Union, with differentiated concentration limits derived from the limits for the four potency groups.

  7. Component Analysis of Remanent Magnetization Curves: A Revisit with a New Model Distribution

    NASA Astrophysics Data System (ADS)

    Zhao, X.; Suganuma, Y.; Fujii, M.

    2017-12-01

    Geological samples often consist of several magnetic components that have distinct origins. As magnetic components are often indicative of underlying geological and environmental processes, it is desirable to identify the individual components to extract the associated information. This component analysis can be achieved using the so-called unmixing method, which fits a mixture of end-member model distributions to the measured remanent magnetization curve. The lognormal, skew generalized Gaussian and skewed Gaussian distributions have been used as end-member model distributions in previous studies, with the fitting performed on the gradient of the remanent magnetization curve. However, gradient curves are sensitive to measurement noise, as differentiation of the measured curve amplifies noise, which can deteriorate the component analysis. Though smoothing or filtering can be applied to reduce the noise before differentiation, their biasing effect on the component analysis has rarely been addressed. In this study, we investigated a new model function that can be applied directly to remanent magnetization curves, thereby avoiding the differentiation. The new model function provides a more flexible shape than the lognormal distribution, which is a merit when modeling the coercivity distribution of complex magnetic components. We applied the unmixing method to both model and measured data, and compared the results with those obtained using other model distributions to better understand their interchangeability, applicability and limitations. The analyses on model data suggest that unmixing methods are inherently sensitive to noise, especially when the number of components exceeds two. It is therefore recommended to verify the reliability of a component analysis by running multiple analyses with synthetic noise. Marine sediments and seafloor rocks were analyzed with the new model distribution. Given the same number of components, the new model distribution provides closer fits than the lognormal distribution, as evidenced by reduced residuals. Moreover, the new unmixing protocol is automated, so that users are freed from the labor of providing initial guesses for the parameters, which also helps reduce the subjectivity of the component analysis.
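
    The direct-fitting idea can be sketched by fitting cumulative end-member components to the acquisition curve itself, avoiding differentiation; the cumulative-Gaussian (in log field) end-member below is a stand-in for the paper's new model distribution:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    def component(logB, mu, sigma, m):
        # One end-member: cumulative acquisition curve, i.e. the integral
        # of a Gaussian coercivity distribution in log10(field).
        return m * norm.cdf(logB, loc=mu, scale=sigma)

    def two_component(logB, mu1, s1, m1, mu2, s2, m2):
        return component(logB, mu1, s1, m1) + component(logB, mu2, s2, m2)

    logB = np.linspace(0.5, 3.0, 60)                 # log10 field (mT)
    truth = two_component(logB, 1.2, 0.25, 0.6, 2.1, 0.3, 0.4)
    data = truth + np.random.default_rng(2).normal(0, 0.005, logB.size)

    popt, _ = curve_fit(two_component, logB, data,
                        p0=(1.0, 0.3, 0.5, 2.0, 0.3, 0.5))
    ```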

  8. A design automation framework for computational bioenergetics in biological networks.

    PubMed

    Angione, Claudio; Costanza, Jole; Carapezza, Giovanni; Lió, Pietro; Nicosia, Giuseppe

    2013-10-01

    The bioenergetic activity of mitochondria can be thoroughly investigated by using computational methods. In particular, in our work we focus on ATP and NADH, namely the metabolites representing the production of energy in the cell. We develop a computational framework to perform an exhaustive investigation at the level of species, reactions, genes and metabolic pathways. The framework integrates several methods implementing the state-of-the-art algorithms for many-objective optimization, sensitivity, and identifiability analysis applied to biological systems. We use this computational framework to analyze three case studies related to the human mitochondria and the algal metabolism of Chlamydomonas reinhardtii, formally described with algebraic differential equations or flux balance analysis. Integrating the results of our framework applied to interacting organelles would provide a general-purpose method for assessing the production of energy in a biological network.

  9. GIS-Based Suitability Model for Assessment of Forest Biomass Energy Potential in a Region of Portugal

    NASA Astrophysics Data System (ADS)

    Quinta-Nova, Luis; Fernandez, Paulo; Pedro, Nuno

    2017-12-01

    This work focuses on the development of a decision support system based on multicriteria spatial analysis to assess the potential for generating biomass residues from forestry sources in a region of Portugal (Beira Baixa). A set of environmental, economic and social criteria was defined, evaluated and weighted in the context of Saaty's analytic hierarchies. The best alternatives were obtained after applying the Analytic Hierarchy Process (AHP). The model was applied to the central region of Portugal, where forest and agriculture are the most representative land uses. Finally, a sensitivity analysis of the set of factors and their associated weights was performed to test the robustness of the model. The proposed evaluation model provides a valuable reference for decision makers in establishing a standardized means of selecting the optimal location for new biomass plants.
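
    The AHP weighting step reduces to extracting the principal eigenvector of a pairwise comparison matrix plus a consistency check; the comparison values below are hypothetical:

    ```python
    import numpy as np

    # Saaty pairwise comparison matrix for three hypothetical criteria
    # (environmental, economic, social): a_ij = importance of i relative to j.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3., 1.0, 2.0],
                  [1/5., 1/2., 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    i = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, i].real)
    w /= w.sum()                         # criterion weights (principal eigenvector)

    CI = (eigvals.real[i] - len(A)) / (len(A) - 1)
    CR = CI / 0.58                       # Saaty's random index for n = 3
    print(w, CR)                         # CR < 0.1 => acceptably consistent
    ```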

  10. Array-on-a-disk? How Blu-ray technology can be applied to molecular diagnostics.

    PubMed

    Morais, Sergi; Tortajada-Genaro, Luis; Maquieira, Angel

    2014-09-01

    This editorial comments on the balance and perspectives of compact disk technology applied to molecular diagnostics. The development of sensitive, rapid and multiplex assays using Blu-ray technology for the determination of biomarkers, drug allergens and pathogens and the detection of infections would have a direct impact on diagnostics. Effective tests for use in clinical, environmental and food applications require versatile and low-cost platforms as well as cost-effective detectors. Blu-ray technology meets those requirements and advances the concept of high-density arrays for massive screening to meet the demands of point-of-care or in situ analysis.

  11. Crowdsourcing and Automated Retinal Image Analysis for Diabetic Retinopathy.

    PubMed

    Mudie, Lucy I; Wang, Xueyang; Friedman, David S; Brady, Christopher J

    2017-09-23

    As the number of people with diabetic retinopathy (DR) in the USA is expected to increase threefold by 2050, the need to reduce health care costs associated with screening for this treatable disease is ever present. Crowdsourcing and automated retinal image analysis (ARIA) are two areas where new technology has been applied to reduce costs in screening for DR. This paper reviews the current literature surrounding these new technologies. Crowdsourcing has high sensitivity for normal vs abnormal images; however, when multiple categories for severity of DR are added, specificity is reduced. ARIAs have higher sensitivity and specificity, and some commercial ARIA programs are already in use. Deep learning enhanced ARIAs appear to offer even more improvement in ARIA grading accuracy. The utilization of crowdsourcing and ARIAs may be a key to reducing the time and cost burden of processing images from DR screening.

  12. Simultaneous Determination of Eight Hypotensive Drugs of Various Chemical Groups in Pharmaceutical Preparations by HPLC-DAD.

    PubMed

    Stolarczyk, Mariusz; Hubicka, Urszula; Żuromska-Witek, Barbara; Krzek, Jan

    2015-01-01

    A new sensitive, simple, rapid, and precise HPLC method with diode array detection has been developed for the separation and simultaneous determination of hydrochlorothiazide, furosemide, torasemide, losartan, quinapril, valsartan, spironolactone, and canrenone in combined pharmaceutical dosage forms. The chromatographic analysis of the tested drugs was performed on an ACE C18, 100 Å, 250×4.6 mm, 5 μm particle size column with a 0.05 M phosphate buffer (pH=3.00)-acetonitrile-methanol (30+20+50 v/v/v) mobile phase at a flow rate of 1.0 mL/min. The column was thermostatted at 25°C. UV detection was performed at 230 nm. The analysis time was 10 min. The elaborated method meets the acceptance criteria for specificity, linearity, sensitivity, accuracy, and precision. The proposed method was successfully applied to the determination of the studied drugs in the selected combined dosage forms.

  13. Economic assessments of small-scale drinking-water interventions in pursuit of MDG target 7C.

    PubMed

    Cameron, John; Jagals, Paul; Hunter, Paul R; Pedley, Steve; Pond, Katherine

    2011-12-01

    This paper uses an applied rural case study of a safer water intervention in South Africa to illustrate how three levels of economic assessment can be used to understand the impact of the intervention on people's well-being. It is set in the context of Millennium Development Goal 7 which sets a target (7C) for safe drinking-water provision and the challenges of reaching people in remote rural areas with relatively small-scale schemes. The assessment moves from cost efficiency to cost effectiveness to a full social cost-benefit analysis (SCBA) with an associated sensitivity test. In addition to demonstrating techniques of analysis, the paper brings out many of the challenges in understanding how safer drinking-water impacts on people's livelihoods. The SCBA shows the case study intervention is justified economically, though the sensitivity test suggests 'downside' vulnerability. Copyright © 2011 Elsevier B.V. All rights reserved.
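
    At its core the SCBA discounts benefit and cost streams and then sweeps the uncertain inputs; the figures below are hypothetical, not the South African case-study values:

    ```python
    def npv(benefits, costs, rate):
        """Net present value of annual benefit/cost streams, the core of a
        social cost-benefit analysis; sweeping `rate` (and the benefit
        estimates) is the usual sensitivity test."""
        return sum((b - c) / (1 + rate) ** t
                   for t, (b, c) in enumerate(zip(benefits, costs)))

    benefits = [0, 120, 130, 130, 130]     # hypothetical annual benefits
    costs    = [400, 20, 20, 20, 20]       # capital cost in year 0, then O&M
    for rate in (0.03, 0.08, 0.12):        # discount-rate sensitivity sweep
        print(rate, round(npv(benefits, costs, rate), 1))
    ```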

  14. National and International Security Applications of Cryogenic Detectors—Mostly Nuclear Safeguards

    NASA Astrophysics Data System (ADS)

    Rabin, Michael W.

    2009-12-01

    As with science, so with security—in both arenas, the extraordinary sensitivity of cryogenic sensors enables high-confidence detection and high-precision measurement even of the faintest signals. Science applications are more mature, but several national and international security applications have been identified where cryogenic detectors have high potential payoff. International safeguards and nuclear forensics are areas needing new technology and methods to boost speed, sensitivity, precision and accuracy. Successfully applied, improved nuclear materials analysis will help constrain nuclear materials diversion pathways and contribute to treaty verification. Cryogenic microcalorimeter detectors for X-ray, gamma-ray, neutron, and alpha-particle spectrometry are under development with these aims in mind. In each case the unsurpassed energy resolution of microcalorimeters reveals previously invisible spectral features of nuclear materials. Preliminary results of quantitative analysis indicate substantial improvements are still possible, but significant work will be required to fully understand the ultimate performance limits.

  15. Forensic applications of desorption electrospray ionisation mass spectrometry (DESI-MS).

    PubMed

    Morelato, Marie; Beavis, Alison; Kirkbride, Paul; Roux, Claude

    2013-03-10

    Desorption electrospray ionisation mass spectrometry (DESI-MS) is an emerging analytical technique that enables in situ mass spectrometric analysis of specimens under ambient conditions. It has been successfully applied to a large range of forensically relevant materials. This review assesses and highlights forensic applications of DESI-MS including the analysis and detection of illicit drugs, explosives, chemical warfare agents, inks and documents, fingermarks, gunshot residues and drugs of abuse in urine and plasma specimens. The minimal specimen preparation required for analysis and the sensitivity of detection achieved offer great advantages, especially in the field of forensic science. Crown Copyright © 2013. Published by Elsevier Ireland Ltd. All rights reserved.

  16. SASS wind ambiguity removal by direct minimization. II - Use of smoothness and dynamical constraints

    NASA Technical Reports Server (NTRS)

    Hoffman, R. N.

    1984-01-01

    A variational analysis method (VAM) is used to remove the ambiguity of the Seasat-A Satellite Scatterometer (SASS) winds. The VAM yields the best fit to the data by minimizing an objective function S which is a measure of the lack of fit. The SASS data are described and the function S and the analysis procedure are defined. Analyses of a single ship report which are analogous to Green's functions are presented. The analysis procedure is tuned and its sensitivity is described using the QE II storm. The procedure is then applied to a case study of September 6, 1978, south of Japan.

  17. Laser-induced fluorescence microscopic system using an optical parametric oscillator for tunable detection in microchip analysis.

    PubMed

    Kumemura, Momoko; Odake, Tamao; Korenaga, Takashi

    2005-06-01

    A laser-induced fluorescence microscopic system based on optical parametric oscillation has been constructed as a tunable detector for microchip analysis. The detection limit of sulforhodamine B (Ex. 520 nm, Em. 570 nm) was 0.2 μmol, which was approximately eight orders of magnitude better than with a conventional fluorophotometer. The system was applied to the determination of fluorescence-labeled DNA (Ex. 494 nm, Em. 519 nm) in a microchannel and the detection limit reached a single molecule. These results showed the feasibility of this system as a highly sensitive and tunable fluorescence detector for microchip analysis.

  18. An efficient method of reducing glass dispersion tolerance sensitivity

    NASA Astrophysics Data System (ADS)

    Sparrold, Scott W.; Shepard, R. Hamilton

    2014-12-01

    Constraining the Seidel aberrations of optical surfaces is a common technique for relaxing tolerance sensitivities in the optimization process. We offer an observation that a lens's Abbe number tolerance is directly related to the magnitude by which its longitudinal and transverse color are permitted to vary in production. Based on this observation, we propose a computationally efficient and easy-to-use merit function constraint for relaxing dispersion tolerance sensitivity. Using the relationship between an element's chromatic aberration and dispersion sensitivity, we derive a fundamental limit for lens scale and power that is capable of achieving high production yield for a given performance specification, which provides insight on the point at which lens splitting or melt fitting becomes necessary. The theory is validated by comparing its predictions to a formal tolerance analysis of a Cooke Triplet, and then applied to the design of a 1.5x visible linescan lens to illustrate optimization for reduced dispersion sensitivity. A selection of lenses in high volume production is then used to corroborate the proposed method of dispersion tolerance allocation.
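
    The observation connects to first-order chromatic theory: for a thin lens in air the axial colour between the F and C lines equals the power divided by the Abbe number, so under this standard thin-lens reading (our gloss, not necessarily the authors' exact constraint) the tolerable Abbe-number error scales as V^2/phi:

    ```latex
    \Delta\phi_{FC} = \frac{\phi}{V}, \qquad
    \frac{\partial\,\Delta\phi_{FC}}{\partial V} = -\frac{\phi}{V^{2}}
    \quad\Longrightarrow\quad
    |\delta V| \lesssim \frac{V^{2}}{|\phi|}\,
    \bigl|\delta(\Delta\phi_{FC})\bigr|_{\max}
    ```

    Weak, low-power elements thus tolerate far larger dispersion variation, consistent with the paper's link between permitted colour variation and the Abbe-number tolerance.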

  19. Determination of Pain Phenotypes in Knee Osteoarthritis: A Latent Class Analysis using Data from the Osteoarthritis Initiative Study

    PubMed Central

    Kittelson, Andrew J.; Stevens-Lapsley, Jennifer E.; Schmiege, Sarah J.

    2017-01-01

    Objective: Knee osteoarthritis (OA) is a broadly applied diagnosis that may encompass multiple subtypes of pain. The purpose of this study was to identify phenotypes of knee OA, using measures from the following pain-related domains: 1) knee OA pathology, 2) psychological distress, and 3) altered pain neurophysiology. Methods: Data were selected from a total of 3494 participants at Visit #6 of the Osteoarthritis Initiative (OAI) study. Latent Class Analysis was applied to the following variables: radiographic OA severity, quadriceps strength, Body Mass Index (BMI), Charlson Comorbidity Index (CCI), Center for Epidemiologic Studies Depression subscale (CES-D), Coping Strategies Questionnaire-Catastrophizing subscale (CSQ-Cat), number of bodily pain sites, and knee joint tenderness at 4 sites. Resulting classes were compared on the following demographic and clinical factors: age, sex, pain severity, disability, walking speed, and use of arthritis-related healthcare. Results: A four-class model was identified. Class 1 (4% of the study population) had higher CCI scores. Class 2 (24%) had higher knee joint sensitivity. Class 3 (10%) had greater psychological distress. Class 4 (62%) had lesser radiographic OA, little psychological involvement, greater strength, and less pain sensitivity. Additionally, Class 1 was the oldest, on average. Class 4 was the youngest, had the lowest disability, and least pain. Class 3 had the worst disability and most pain. Conclusions: Four distinct pain phenotypes of knee OA were identified. Psychological factors, comorbidity status, and joint sensitivity appear to be important in defining phenotypes of knee OA-related pain. PMID:26414884

  20. Determination of Pain Phenotypes in Knee Osteoarthritis: A Latent Class Analysis Using Data From the Osteoarthritis Initiative.

    PubMed

    Kittelson, Andrew J; Stevens-Lapsley, Jennifer E; Schmiege, Sarah J

    2016-05-01

    Knee osteoarthritis (OA) is a broadly applied diagnosis that may describe multiple subtypes of pain. The purpose of this study was to identify phenotypes of knee OA, using measures from the following pain-related domains: 1) knee OA pathology, 2) psychological distress, and 3) altered pain neurophysiology. Data were selected from a total of 3,494 participants at visit 6 of the Osteoarthritis Initiative study. Latent class analysis was applied to the following variables: radiographic OA severity, quadriceps strength, body mass index, the Charlson Comorbidity Index (CCI), the Center for Epidemiologic Studies Depression Scale, the Coping Strategies Questionnaire-Catastrophizing subscale, number of bodily pain sites, and knee joint tenderness at 4 sites. The resulting classes were compared on the following demographic and clinical factors: age, sex, pain severity, disability, walking speed, and use of arthritis-related health care. A 4-class model was identified. Class 1 (4% of the study population) had higher CCI scores. Class 2 (24%) had higher knee joint sensitivity. Class 3 (10%) had greater psychological distress. Class 4 (62%) had lesser radiographic OA, little psychological involvement, greater strength, and less pain sensitivity. Additionally, class 1 was the oldest, on average. Class 4 was the youngest, had the lowest disability, and least pain. Class 3 had the worst disability and most pain. Four distinct pain phenotypes of knee OA were identified. Psychological factors, comorbidity status, and joint sensitivity appear to be important in defining phenotypes of knee OA-related pain. © 2016, American College of Rheumatology.

  1. Immune biosensors based on the SPR and TIRE: efficiency of their application for bacteria determination

    NASA Astrophysics Data System (ADS)

    Starodub, N. F.; Ogorodniichuk, J.; Lebedeva, T.; Shpylovyy, P.

    2013-11-01

    In this work we have designed highly specific biosensors for Salmonella typhimurium detection based on surface plasmon resonance (SPR) and total internal reflection ellipsometry (TIRE). High selectivity and sensitivity of the analysis were demonstrated. As the registering part of our experiments, the Spreeta (USA) and "Plasmonotest" (Ukraine) SPR devices with a flow cell were applied. Previous research confirmed the efficiency of using SPR biosensors for detecting specific antigen-antibody interactions; therefore, this type of reaction, with some prior preparation of the surface binding layer, was used as the reactive part. It was found that with the Spreeta device the sensitivity was at the level of 10^3-10^7 cells/ml, while the other SPR-based biosensor showed a sensitivity within 10^1-10^6 cells/ml. The maximal sensitivity, at the level of several cells in 10 ml (in fact, fewer than 5 cells), was obtained using the biosensor based on TIRE.

  2. Proteomic Signatures of the Zebrafish (Danio rerio) Embryo: Sensitivity and Specificity in Toxicity Assessment of Chemicals.

    PubMed

    Hanisch, Karen; Küster, Eberhard; Altenburger, Rolf; Gündel, Ulrike

    2010-01-01

    Studies using embryos of the zebrafish Danio rerio (DarT) instead of adult fish for characterising the (eco-)toxic potential of chemicals have been proposed as animal-replacing methods. Effect analysis at the molecular level might enhance the sensitivity, specificity, and predictive value of the embryonal studies. The present paper aimed to test the potential of toxicoproteomics with zebrafish eleutheroembryos for sensitive and specific toxicity assessment. 2-DE-based toxicoproteomics was performed applying low-dose (EC(10)) exposure for 48 h with three model substances: Rotenone, 4,6-dinitro-o-cresol (DNOC) and Diclofenac. By multivariate "pattern-only" PCA and univariate statistical analyses, alterations in the embryonal proteome were detectable even in visibly intact organisms, and treatment with the three substances was distinguishable at the molecular level. Toxicoproteomics thus enhanced the sensitivity and specificity of the embryonal toxicity assay and has the potential to identify protein markers serving as general stress indicators and enabling early diagnosis of toxic stress.

  3. ACOSS Eight (Active Control of Space Structures), Phase 2

    DTIC Science & Technology

    1981-09-01

    A sensitivity analysis technique for selecting critical system parameters is applied to the Draper tetrahedral truss structure (see Section 4-2); in the nominal model considered here, the equipment section and solar panels are omitted. The precision section is mounted on isolators to an inertially fixed rigid support.

  4. Thermal and structural alternations in CuAlMnNi shape memory alloy by the effect of different pressure applications

    NASA Astrophysics Data System (ADS)

    Canbay, Canan Aksu; Polat, Tercan

    2017-09-01

    In this work the effects of the applied pressure on the characteristic transformation temperatures, the high-temperature order-disorder phase transitions, the variation in diffraction peaks, and the surface morphology of a CuAlMnNi shape memory alloy were investigated. The evolution of the transformation temperatures was studied by differential scanning calorimetry (DSC) at different heating and cooling rates. Differential thermal analysis measurements were performed to capture the ordered-disordered phase transformations from room temperature to 900 °C. The characteristic transformation temperatures and the thermodynamic parameters were highly sensitive to variations in the applied pressure. The activation energy of the sample at each applied pressure was calculated by the Kissinger method. The structural changes of the samples were studied by X-ray diffraction (XRD) measurements and by optical microscope observations at room temperature.
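
    For context, the Kissinger method estimates an activation energy from the shift of the DSC peak temperature Tp with heating rate β, using the linear relation ln(β/Tp^2) = const − Ea/(R·Tp). A minimal sketch with invented peak data, not the paper's measurements:

```python
# Kissinger plot: slope of ln(beta/Tp^2) versus 1/Tp equals -Ea/R.
import numpy as np

R = 8.314                                    # J/(mol K)
beta = np.array([5.0, 10.0, 15.0, 20.0])     # heating rates, K/min (hypothetical)
Tp = np.array([520.0, 527.0, 532.0, 536.0])  # DSC peak temperatures, K (hypothetical)

slope, _ = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
Ea = -slope * R
print(f"activation energy ~ {Ea / 1000:.0f} kJ/mol")
```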

  5. A modeling study examining the impact of nutrient boundaries ...

    EPA Pesticide Factsheets

    A mass balance eutrophication model, the Gulf of Mexico Dissolved Oxygen Model (GoMDOM), has been developed and applied to describe nitrogen, phosphorus and primary production in the Louisiana shelf of the Gulf of Mexico. Features of this model include bi-directional boundary exchanges, an empirical site-specific light attenuation equation, and estimates of 56 river loads and atmospheric loads. The model was calibrated for 2006 by comparing model output to observations in zones that represent different locations in the Gulf. The model exhibited reasonable skill in simulating the phosphorus and nitrogen field data and primary production observations. The model was applied to generate a nitrogen mass balance estimate, to perform sensitivity analysis comparing the importance of the nutrient boundary concentrations versus the river loads on nutrient concentrations and primary production within the shelf, and to provide insight into the relative importance of different limitation factors on primary production. The mass budget showed the importance of the rivers as the major external nitrogen source, while the atmospheric load contributed approximately 2% of the total external load. Sensitivity analysis showed the importance of accurate estimates of boundary nitrogen concentrations on the nitrogen levels on the shelf, especially at regions further away from the river influences. The boundary nitrogen concentrations impacted primary production less than they impacted nitrogen concentrations.

  6. New methods for time-resolved fluorescence spectroscopy data analysis based on the Laguerre expansion technique--applications in tissue diagnosis.

    PubMed

    Jo, J A; Marcu, L; Fang, Q; Papaioannou, T; Qiao, J H; Fishbein, M C; Beseth, B; Dorafshar, A H; Reil, T; Baker, D; Freischlag, J

    2007-01-01

    A new deconvolution method for the analysis of time-resolved laser-induced fluorescence spectroscopy (TR-LIFS) data is introduced and applied for tissue diagnosis. The intrinsic TR-LIFS decays are expanded on a Laguerre basis, and the computed Laguerre expansion coefficients (LEC) are used to characterize the sample fluorescence emission. The method was applied for the diagnosis of atherosclerotic vulnerable plaques. In a first stage, using a rabbit atherosclerotic model, 73 TR-LIFS in-vivo measurements from the normal and atherosclerotic aorta segments of eight rabbits were taken. The Laguerre deconvolution technique was able to accurately deconvolve the TR-LIFS measurements. More interestingly, the LEC reflected the changes in the arterial biochemical composition and provided discrimination of lesions rich in macrophages/foam-cells with high sensitivity (> 85%) and specificity (> 95%). In a second stage, 348 TR-LIFS measurements were obtained from the explanted carotid arteries of 30 patients. Lesions with significant inflammatory cells (macrophages/foam-cells and lymphocytes) were detected with high sensitivity (> 80%) and specificity (> 90%), using LEC-based classifiers. This study has demonstrated the potential of using TR-LIFS information by means of LEC for in vivo tissue diagnosis, and specifically for detecting inflammation in atherosclerotic lesions, a key marker of plaque vulnerability.
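
    The core of the Laguerre approach: expand the intrinsic decay on an orthonormal Laguerre basis and obtain the expansion coefficients by linear least squares. A simplified sketch that skips the instrument-response convolution; the time axis, basis order, scale parameter, and synthetic decay are all illustrative:

```python
# Fit a fluorescence decay as a linear combination of Laguerre basis functions.
import numpy as np
from numpy.polynomial.laguerre import lagval

t = np.linspace(0.0, 25.0, 500)      # time, ns
tau, order = 4.0, 5                  # Laguerre scale parameter and basis order

# Basis functions phi_j(t) = exp(-t/(2*tau)) * L_j(t/tau)
basis = np.stack([np.exp(-t / (2 * tau)) * lagval(t / tau, np.eye(order)[j])
                  for j in range(order)], axis=1)

decay = 0.7 * np.exp(-t / 2.0) + 0.3 * np.exp(-t / 6.0)   # synthetic TR-LIFS decay
coef, *_ = np.linalg.lstsq(basis, decay, rcond=None)      # expansion coefficients (LEC)
print(np.round(coef, 3))             # these coefficients feed the classifier
```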

  7. Incidence of tuberculous meningitis in France, 2000: a capture-recapture analysis.

    PubMed

    Cailhol, J; Che, D; Jarlier, V; Decludt, B; Robert, J

    2005-07-01

    To estimate the incidence of culture-positive and culture-negative tuberculous meningitis (TBM) in France in 2000. Capture-recapture method using two unrelated sources of data: the tuberculosis (TB) mandatory notification system (MNTB), recording patients treated by anti-tuberculosis drugs, and a survey by the National Reference Centre (NRC) for mycobacterial drug resistance, recording culture-positive TBM. Of 112 cases of TBM reported to the MNTB, 28 culture-positive and 34 culture-negative meningitis cases were validated (17 duplicates, 3 cases from outside France, 21 false notifications, and 9 lost records were excluded). The NRC recorded 31 culture-positive cases, including 21 known by the MNTB. When the capture-recapture method was applied to the reported culture-positive meningitis cases, the estimated number of meningitis cases was 41 and the incidence was 0.7 cases per million. Sensitivity was 75.6% for the NRC, 68.3% for the MNTB, and 92.7% for both systems together. When sensitivity of the MNTB for culture-positive cases was applied to culture-negative meningitis, the total estimated number of culture-negative meningitis cases was 50 and the incidence was 0.85 cases per million. TBM is underestimated in France. Capture-recapture analysis using different sources to better estimate its incidence is of great interest.
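
    With two sources, the capture-recapture estimate is the Lincoln-Petersen formula N = n1·n2/m, where m is the number of cases found by both systems; plugging in the culture-positive counts from the abstract reproduces the reported figures:

```python
# Two-source capture-recapture applied to the culture-positive TBM counts.
n_nrc, n_mntb, overlap = 31, 28, 21
N = round(n_nrc * n_mntb / overlap)                 # Lincoln-Petersen estimate: 41
print(N,
      f"NRC sensitivity {n_nrc / N:.1%},",          # 75.6%
      f"MNTB sensitivity {n_mntb / N:.1%},",        # 68.3%
      f"both {(n_nrc + n_mntb - overlap) / N:.1%}") # 92.7%
```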

  8. Analysis of Serum Total and Free PSA Using Immunoaffinity Depletion Coupled to SRM: Correlation with Clinical Immunoassay Tests

    PubMed Central

    Liu, Tao; Hossain, Mahmud; Schepmoes, Athena A.; Fillmore, Thomas L.; Sokoll, Lori J.; Kronewitter, Scott R.; Izmirlian, Grant; Shi, Tujin; Qian, Wei-Jun; Leach, Robin J.; Thompson, Ian M.; Chan, Daniel W.; Smith, Richard D.; Kagan, Jacob; Srivastava, Sudhir; Rodland, Karin D.; Camp, David G.

    2012-01-01

    Recently, selected reaction monitoring mass spectrometry (SRM-MS) has been more frequently applied to measure low abundance biomarker candidates in tissues and biofluids, owing to its high sensitivity and specificity, simplicity of assay configuration, and exceptional multiplexing capability. In this study, we report for the first time the development of immunoaffinity depletion-based workflows and SRM-MS assays that enable sensitive and accurate quantification of total and free prostate-specific antigen (PSA) in serum without the requirement for specific PSA antibodies. Low ng/mL level detection of both total and free PSA was consistently achieved in both PSA-spiked female serum samples and actual patient serum samples. Moreover, comparison of the results obtained when SRM PSA assays and conventional immunoassays were applied to the same samples showed good correlation in several independent clinical serum sample sets. These results demonstrate that the workflows and SRM assays developed here provide an attractive alternative for reliably measuring candidate biomarkers in human blood, without the need to develop affinity reagents. Furthermore, the simultaneous measurement of multiple biomarkers, including the free and bound forms of PSA, can be performed in a single multiplexed analysis using high-resolution liquid chromatographic separation coupled with SRM-MS. PMID:22846433

  9. Experimental study on the crack detection with optimized spatial wavelet analysis and windowing

    NASA Astrophysics Data System (ADS)

    Ghanbari Mardasi, Amir; Wu, Nan; Wu, Christine

    2018-05-01

    In this paper, highly sensitive crack detection is experimentally realized and presented on a beam under a certain deflection by optimizing spatial wavelet analysis. Due to the presence of a crack in the beam structure, a perturbation/slope singularity is induced in the deflection profile. The spatial wavelet transformation works as a magnifier, amplifying the small perturbation signal at the crack location so that the damage can be detected and localized. The profile of a deflected aluminum cantilever beam is obtained for both intact and cracked beams by a high-resolution laser profile sensor. The Gabor wavelet transformation is applied to the difference between the intact and cracked data sets. To improve detection sensitivity, the scale factor in the spatial wavelet transformation and the number of transformation repetitions are optimized. Furthermore, to detect possible cracks close to the measurement boundaries, the wavelet transformation edge effect, which induces large wavelet coefficients around the measurement boundaries, is efficiently reduced by introducing different windowing functions. The results show that a small crack with a depth of less than 10% of the beam height can be localized with a clear perturbation. Moreover, the perturbation caused by a crack 0.85 mm away from one end of the measurement range, which is covered by the wavelet transform edge effect, emerges when proper window functions are applied.
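
    A sketch of the detection idea, with PyWavelets' Gaussian-derivative wavelet standing in for the Gabor wavelet used in the paper; the profile, crack position, and window choice are invented for illustration:

```python
# CWT of a windowed intact-vs-cracked deflection difference flags the kink.
import numpy as np
import pywt  # PyWavelets

x = np.linspace(0.0, 1.0, 1000)                  # normalized beam coordinate
crack = 0.75                                     # hypothetical crack location
intact = x**2                                    # smooth deflection profile
cracked = intact + 2e-4 * np.clip(x - crack, 0.0, None)  # tiny slope singularity

diff = (cracked - intact) * np.hanning(x.size)   # windowing curbs edge effects
coef, _ = pywt.cwt(diff, scales=np.arange(1, 32), wavelet="gaus2")
print(f"largest response near x = {x[np.argmax(np.abs(coef[10]))]:.2f}")
```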

  10. Agrochemical fate models applied in agricultural areas from Colombia

    NASA Astrophysics Data System (ADS)

    Garcia-Santos, Glenda; Yang, Jing; Andreoli, Romano; Binder, Claudia

    2010-05-01

    The misuse of pesticides in mainly agricultural catchments can lead to severe problems for humans and the environment. Models are needed for decision making and hot-spot identification, especially in developing countries, where agrochemicals are often overused and water quality monitoring at local and regional levels is incipient or lacking. However, the complexity of the water cycle contrasts strongly with the scarce data availability, limiting the number of analyses, techniques, and models available to researchers. There is therefore a strong need for model simplification that keeps complexity appropriate while still representing the processes. We have developed a new model, Westpa-Pest, to improve water quality management of an agricultural catchment located in the highlands of Colombia. Westpa-Pest is based on the fully distributed hydrologic model Wetspa and a pesticide fate module. We applied a multi-criteria analysis for model selection under the conditions and data availability found in the region and compared the result with the newly developed Westpa-Pest model. Furthermore, both models were empirically calibrated and validated. The following questions were addressed: i) what are the strengths and weaknesses of the models?, ii) which are the most sensitive parameters of each model?, iii) what happens with uncertainties in soil parameters?, and iv) how sensitive are the transfer coefficients?

  11. Sensitivity analysis and uncertainty estimation in ash concentration simulations and tephra deposit daily forecasted at Mt. Etna, in Italy

    NASA Astrophysics Data System (ADS)

    Prestifilippo, Michele; Scollo, Simona; Tarantola, Stefano

    2015-04-01

    The uncertainty in volcanic ash forecasts depends on our knowledge of the model input parameters and on our capability to represent the dynamics of an incoming eruption. Forecasts help governments reduce the risks associated with volcanic eruptions, and for this reason analyses that quantify the effect of each input parameter on model outputs are necessary. We present an iterative approach based on the sequential combination of sensitivity analysis, a parameter estimation procedure, and Monte Carlo-based uncertainty analysis, applied to the Lagrangian volcanic ash dispersal model PUFF. We vary the main input parameters, such as the total mass, the total grain-size distribution, the plume thickness, the shape of the eruption column, the sedimentation models, and the diffusion coefficient, perform thousands of simulations, and analyze the results. The study is carried out on two different Etna scenarios: the sub-Plinian eruption of 22 July 1998, which formed an eruption column rising 12 km above sea level and lasted some minutes, and a lava fountain eruption with features similar to the 2011-2013 events, which produced eruption columns up to several kilometers above sea level and lasted some hours. The sensitivity analyses and uncertainty estimates help identify the measurements that volcanologists should perform during a volcanic crisis to reduce model uncertainty.
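
    The sampling-and-analysis loop described above can be prototyped with the SALib package; the toy model below stands in for a PUFF run, and the parameter names and bounds are invented:

```python
# Saltelli sampling plus Sobol variance-based sensitivity indices via SALib.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["total_mass", "column_height", "diffusion_coeff"],
    "bounds": [[1e9, 1e11], [5e3, 12e3], [10.0, 1000.0]],
}

def toy_model(p):                      # placeholder for an ash-dispersal run
    mass, height, diff = p
    return np.log10(mass) * height / (1.0 + np.sqrt(diff))

X = saltelli.sample(problem, 1024)     # thousands of input combinations
Y = np.apply_along_axis(toy_model, 1, X)
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], np.round(Si["S1"], 2))))  # first-order indices
```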

  12. Systematic analysis of Ca2+ homeostasis in Saccharomyces cerevisiae based on chemical-genetic interaction profiles

    PubMed Central

    Ghanegolmohammadi, Farzan; Yoshida, Mitsunori; Ohnuki, Shinsuke; Sukegawa, Yuko; Okada, Hiroki; Obara, Keisuke; Kihara, Akio; Suzuki, Kuninori; Kojima, Tetsuya; Yachie, Nozomu; Hirata, Dai; Ohya, Yoshikazu

    2017-01-01

    We investigated the global landscape of Ca2+ homeostasis in budding yeast based on high-dimensional chemical-genetic interaction profiles. The morphological responses of 62 Ca2+-sensitive (cls) mutants were quantitatively analyzed with the image processing program CalMorph after exposure to a high concentration of Ca2+. After a generalized linear model was applied, an analysis of covariance model was used to detect significant Ca2+–cls interactions. We found that high-dimensional, morphological Ca2+–cls interactions were mixed with positive (86%) and negative (14%) chemical-genetic interactions, whereas one-dimensional fitness Ca2+–cls interactions were all negative in principle. Clustering analysis with the interaction profiles revealed nine distinct gene groups, six of which were functionally associated. In addition, characterization of Ca2+–cls interactions revealed that morphology-based negative interactions are unique signatures of sensitized cellular processes and pathways. Principal component analysis was used to discriminate between suppression and enhancement of the Ca2+-sensitive phenotypes triggered by inactivation of calcineurin, a Ca2+-dependent phosphatase. Finally, similarity of the interaction profiles was used to reveal a connected network among the Ca2+ homeostasis units acting in different cellular compartments. Our analyses of high-dimensional chemical-genetic interaction profiles provide novel insights into the intracellular network of yeast Ca2+ homeostasis. PMID:28566553

  13. Emulation and Sensitivity Analysis of the Community Multiscale Air Quality Model for a UK Ozone Pollution Episode.

    PubMed

    Beddows, Andrew V; Kitwiroon, Nutthida; Williams, Martin L; Beevers, Sean D

    2017-06-06

    Gaussian process emulation techniques have been used with the Community Multiscale Air Quality model, simulating the effects of input uncertainties on ozone and NO2 output, to allow robust global sensitivity analysis (SA). A screening process ranked the effect of perturbations in 223 inputs, isolating the 30 most influential from emissions, boundary conditions (BCs), and reaction rates. Community Multiscale Air Quality (CMAQ) simulations of a July 2006 ozone pollution episode in the UK were made with input values for these variables plus ozone dry deposition velocity chosen according to a 576 point Latin hypercube design. Emulators trained on the output of these runs were used in variance-based SA of the model output to input uncertainties. Performing these analyses for every hour of a 21 day period spanning the episode and several days on either side allowed the results to be presented as a time series of sensitivity coefficients, showing how the influence of different input uncertainties changed during the episode. This is one of the most complex models to which these methods have been applied, and here, they reveal detailed spatiotemporal patterns of model sensitivities, with NO and isoprene emissions, NO2 photolysis, ozone BCs, and deposition velocity being among the most influential input uncertainties.
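
    The emulation step in compact form: fit a Gaussian process to model output at a Latin hypercube design of the uncertain inputs, then query the cheap emulator in place of the full model for variance-based work. The response function below is a stand-in, not CMAQ:

```python
# Gaussian-process emulator trained on a Latin hypercube design.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_model(x):                # placeholder for a CMAQ ozone output
    return 40 + 12 * x[:, 0] - 8 * x[:, 0] * x[:, 1] + 5 * x[:, 2] ** 2

design = qmc.LatinHypercube(d=3, seed=1).random(n=576)   # 576 points, as in the paper
y = expensive_model(design)

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(design, y)

mc = np.random.default_rng(2).random((10_000, 3))        # cheap emulator queries
pred = gp.predict(mc)                                    # feeds variance-based SA
print(f"mean {pred.mean():.2f}, variance {pred.var():.2f}")
```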

  14. Sensitivity Analysis Applied to Atomic Data Used for X-ray Spectrum Synthesis

    NASA Technical Reports Server (NTRS)

    Kallman, Tim

    2006-01-01

    A great deal of work has been devoted to the accumulation of accurate quantities describing atomic processes for use in analysis of astrophysical spectra. But in many situations of interest the interpretation of an observed quantity, such as a line flux, depends on the results of a modeling or spectrum synthesis code. The results of such a code depend in turn on many atomic rates or cross sections, and the sensitivity of the observable quantity to the various rates and cross sections may be non-linear, in which case it cannot easily be derived analytically. In such cases the most practical approach to understanding the sensitivity of observables to atomic cross sections is to perform numerical experiments, by calculating models with various rates perturbed by random (but known) factors. In addition, it is useful to compare the results of such experiments with some sample observations, in order to focus attention on the rates which are of the greatest relevance to real observations. In this paper I will present some attempts to carry out this program, focussing on two sample datasets taken with the Chandra HETG. I will discuss the sensitivity of synthetic spectra to atomic data affecting ionization balance, temperature, and line opacity or emissivity, and discuss the implications for the ultimate goal of inferring astrophysical parameters.
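
    The numerical experiment described here can be mocked up directly: perturb every rate by a known random log-factor, recompute the observable, and regress the observable on the log-factors to recover the sensitivities. The model and its sensitivity pattern below are invented:

```python
# Recover rate sensitivities from runs with randomly perturbed rates.
import numpy as np

rng = np.random.default_rng(0)
n_rates, n_runs = 10, 200
true_sens = np.zeros(n_rates)
true_sens[[2, 7]] = [0.8, -0.3]        # pretend only two rates matter

def observable(log_factors):           # stand-in for a spectrum synthesis code
    return true_sens @ log_factors + 0.01 * rng.normal()

F = rng.normal(scale=0.2, size=(n_runs, n_rates))   # known log-perturbations
obs = np.array([observable(f) for f in F])          # e.g. a line flux
sens, *_ = np.linalg.lstsq(F, obs, rcond=None)      # recovered sensitivities
print(np.round(sens, 2))
```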

  15. Hydrologic sensitivity of headwater catchments to climate and landscape variability

    NASA Astrophysics Data System (ADS)

    Kelleher, Christa; Wagener, Thorsten; McGlynn, Brian; Nippgen, Fabian; Jencso, Kelsey

    2013-04-01

    Headwater streams cumulatively represent an extensive portion of the United States stream network, yet remain largely unmonitored and unmapped. As such, we have limited understanding of how these systems will respond to change, knowledge that is important for preserving these unique ecosystems, the services they provide, and the biodiversity they support. We compare responses across five adjacent headwater catchments located in Tenderfoot Creek Experimental Forest in Montana, USA, to understand how local differences may affect the sensitivity of headwaters to change. We utilize global, variance-based sensitivity analysis to understand which aspects of the physical system (e.g., vegetation, topography, geology) control the variability in hydrologic behavior across these basins, and how this varies as a function of time (and therefore climate). Basin fluxes and storages, including evapotranspiration, snow water equivalent and melt, soil moisture and streamflow, are simulated using the Distributed Hydrology-Vegetation-Soil Model (DHSVM). Sensitivity analysis is applied to quantify the importance of different physical parameters to the spatial and temporal variability of different water balance components, allowing us to map similarities and differences in these controls through space and time. Our results show how catchment influences on fluxes vary across seasons (thus providing insight into transferability of knowledge in time), and how they vary across catchments with different physical characteristics (providing insight into transferability in space).

  16. Accuracy and sensitivity analysis on seismic anisotropy parameter estimation

    NASA Astrophysics Data System (ADS)

    Yan, Fuyong; Han, De-Hua

    2018-04-01

    There is significant uncertainty in measuring Thomsen's parameter δ in the laboratory even when the dimensions and orientations of the rock samples are known, and still greater challenges are expected when estimating seismic anisotropy parameters from field seismic data. Based on Monte Carlo simulation of a vertical transversely isotropic layer-cake model, using a database of laboratory anisotropy measurements from the literature, we apply the commonly used quartic non-hyperbolic reflection moveout equation to estimate the seismic anisotropy parameters and test its accuracy and sensitivity to source-receiver offset, vertical interval velocity error, and time-picking error. The results show that the methodology works perfectly for noise-free synthetic data with short spread lengths. However, the method is extremely sensitive to the time-picking error caused by even mild random noise, and it requires the spread length to be greater than the depth of the reflection event. The uncertainties increase rapidly for deeper layers, and the estimated anisotropy parameters can be very unreliable for a layer with more than five overlying layers. It is possible for an isotropic formation to be misinterpreted as a strongly anisotropic formation. The sensitivity analysis should provide useful guidance on how to group the reflection events and build a suitable geological model for anisotropy parameter inversion.
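
    As a sketch of the workflow, the quartic nonhyperbolic moveout equation in its commonly used Alkhalifah-Tsvankin form can be fit to synthetic travel-time picks to show how quickly mild picking noise destabilizes the η estimate; the velocity, depth, and noise levels are illustrative:

```python
# Effect of time-picking noise on eta estimated from nonhyperbolic moveout.
import numpy as np
from scipy.optimize import curve_fit

t0, V = 1.0, 2000.0                        # zero-offset time (s), NMO velocity (m/s)

def moveout(x, eta):                       # Alkhalifah-Tsvankin quartic form
    t2 = (t0**2 + x**2 / V**2
          - 2 * eta * x**4 / (V**2 * (t0**2 * V**2 + (1 + 2 * eta) * x**2)))
    return np.sqrt(t2)

x = np.linspace(100.0, 3000.0, 60)         # offsets; spread exceeds reflector depth
rng = np.random.default_rng(3)
for noise in (0.0, 1e-3, 2e-3):            # picking-error std, seconds
    picks = moveout(x, 0.1) + rng.normal(scale=noise, size=x.size)
    eta_hat, _ = curve_fit(moveout, x, picks, p0=[0.05])
    print(f"noise {noise * 1e3:.0f} ms -> eta = {eta_hat[0]:.3f}  (true 0.100)")
```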

  17. Bacteriophage-based assays for the rapid detection of rifampicin resistance in Mycobacterium tuberculosis: a meta-analysis.

    PubMed

    Pai, Madhukar; Kalantri, Shriprakash; Pascopella, Lisa; Riley, Lee W; Reingold, Arthur L

    2005-10-01

    To summarize, using meta-analysis, the accuracy of bacteriophage-based assays for the detection of rifampicin resistance in Mycobacterium tuberculosis. By searching multiple databases and sources we identified a total of 21 studies eligible for meta-analysis. Of these, 14 studies used phage amplification assays (including eight studies on the commercial FASTPlaque-TB kits), and seven used luciferase reporter phage (LRP) assays. Sensitivity, specificity, and agreement between phage assay and reference standard (e.g. agar proportion method or BACTEC 460) results were the main outcomes of interest. When performed on culture isolates (N=19 studies), phage assays appear to have relatively high sensitivity and specificity. Eleven of 19 (58%) studies reported sensitivity and specificity estimates > or =95%, and 13 of 19 (68%) studies reported > or =95% agreement with reference standard results. Specificity estimates were slightly lower and more variable than sensitivity; 5 of 19 (26%) studies reported specificity <90%. Only two studies performed phage assays directly on sputum specimens; although one study reported sensitivity and specificity of 100 and 99%, respectively, another reported sensitivity of 86% and specificity of 73%. Current evidence is largely restricted to the use of phage assays for the detection of rifampicin resistance in culture isolates. When used on culture isolates, these assays appear to have high sensitivity, but variable and slightly lower specificity. In contrast, evidence is lacking on the accuracy of these assays when they are directly applied to sputum specimens. If phage-based assays can be directly used on clinical specimens and if they are shown to have high accuracy, they have the potential to improve the diagnosis of MDR-TB. However, before phage assays can be successfully used in routine practice, several concerns have to be addressed, including unexplained false positives in some studies, potential for contamination and indeterminate results.

  18. In vivo diagnosis of mammary adenocarcinoma using Raman spectroscopy: an animal model study

    NASA Astrophysics Data System (ADS)

    Bitar, R. A.; Ribeiro, D. G.; dos Santos, E. A. P.; Ramalho, L. N. Z.; Ramalho, F. S.; Martin, A. A.; Martinho, H. S.

    2010-02-01

    Breast cancer is the most frequent cancer type in women worldwide. The sensitivity and specificity of clinical breast examination have been estimated from clinical trials to be approximately 54% and 94%, respectively. Further, approximately 95% of all positive breast cancer screenings turn out to be false-positive. The optimal method for early detection should be both highly sensitive, to ensure that all cancers are detected, and highly specific, to avoid the humanistic and economic costs associated with false-positive results. In vivo optical spectroscopy techniques, Raman in particular, have been pointed out as promising tools to improve the accuracy of screening mammography. The aim of the present study was to apply FT-Raman spectroscopy to discriminate normal and adenocarcinoma breast tissues of Sprague-Dawley female rats. The study was performed on 32 rats divided into control (N=5) and experimental (N=27) groups. Histological analysis indicated that mammary hyperplasia and cribriform, papillary, and solid adenocarcinomas were found in the experimental group. Spectra were collected using a commercial FT-Raman spectrometer (Bruker RFS100) equipped with a fiber-optic probe (RamProbe), and the spectral region between 900 and 1800 cm-1 was analyzed. Principal components analysis, cluster analysis, and linear discriminant analysis with cross-validation were applied as the spectral classification algorithm. In conclusion, discrimination between normal and adenocarcinoma tissues was possible (the correct classification proportion was 80.80% for the transcutaneous collection mode and 91.70% for the "open sky" mode); however, a conclusive diagnosis among the four lesion subtypes was not possible.
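
    The PCA-plus-LDA classification with cross-validation maps directly onto a short scikit-learn pipeline; the spectra below are random placeholders rather than FT-Raman data:

```python
# PCA followed by LDA, evaluated with cross-validation.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 450))     # 32 spectra over 900-1800 cm-1 (placeholder)
y = rng.integers(0, 2, size=32)    # 0 = normal, 1 = adenocarcinoma

clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy
```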

  1. Identification of material constants for piezoelectric transformers by three-dimensional, finite-element method and a design-sensitivity method.

    PubMed

    Joo, Hyun-Woo; Lee, Chang-Hwan; Rho, Jong-Seok; Jung, Hyun-Kyo

    2003-08-01

    In this paper, an inversion scheme for the piezoelectric constants of piezoelectric transformers is proposed. The impedance of piezoelectric transducers is calculated using a three-dimensional finite element method, and the validity of this calculation is confirmed experimentally. The effects of the material coefficients on piezoelectric transformers are investigated numerically. Six material coefficient variables for piezoelectric transformers were selected, and a design-sensitivity method was adopted as the inversion scheme. The validity of the proposed method was confirmed by step-up ratio calculations. The proposed method is applied to the analysis of a sample piezoelectric transformer, and its resonance characteristics are obtained by a numerically combined equivalent-circuit method.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smed, T.

    Traditional eigenvalue sensitivity for power systems requires the formulation of the system matrix, which lacks sparsity. In this paper, a new sensitivity analysis, derived for a sparse formulation, is presented. Variables that are computed as intermediate results in established eigenvalue programs for power systems, but not used further, are given a new interpretation. The effect of virtually any control action can be assessed based on a single eigenvalue-eigenvector calculation. In particular, the effect of active and reactive power modulation can be found as a multiplication of two or three complex numbers. The method is illustrated in an example for a large power system when applied to the control design for an HVDC link.
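
    The single eigenvalue-eigenvector calculation mentioned above rests on first-order perturbation theory: with right eigenvector φ and left eigenvector ψ of the state matrix, dλ/dA_jk = ψ̄_j φ_k / (ψ̄ᵀφ). A minimal sketch on a toy state matrix, not a power-system model:

```python
# Eigenvalue sensitivities to every matrix entry from one eigensolution.
import numpy as np
from scipy.linalg import eig

A = np.array([[-0.5,  1.0,  0.0],
              [-1.0, -0.2,  0.3],
              [ 0.0,  0.4, -1.0]])

w, vl, vr = eig(A, left=True, right=True)
i = np.argmax(w.real)                       # least-damped mode
phi, psi = vr[:, i], vl[:, i]               # right and left eigenvectors
S = np.outer(psi.conj(), phi) / (psi.conj() @ phi)  # S[j,k] = dlambda/dA[j,k]
print(np.round(S.real, 3))
```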

  3. Digital imaging biomarkers feed machine learning for melanoma screening.

    PubMed

    Gareau, Daniel S; Correa da Rosa, Joel; Yagerman, Sarah; Carucci, John A; Gulati, Nicholas; Hueto, Ferran; DeFazio, Jennifer L; Suárez-Fariñas, Mayte; Marghoob, Ashfaq; Krueger, James G

    2017-07-01

    We developed an automated approach for generating quantitative image analysis metrics (imaging biomarkers) that are then analysed with a set of 13 machine learning algorithms to generate an overall risk score that is called a Q-score. These methods were applied to a set of 120 "difficult" dermoscopy images of dysplastic nevi and melanomas that were subsequently excised/classified. This approach yielded 98% sensitivity and 36% specificity for melanoma detection, approaching sensitivity/specificity of expert lesion evaluation. Importantly, we found strong spectral dependence of many imaging biomarkers in blue or red colour channels, suggesting the need to optimize spectral evaluation of pigmented lesions. © 2016 The Authors. Experimental Dermatology Published by John Wiley & Sons Ltd.

  4. [LC-MS/MS analysis of determination of strychnine and brucine in formaldehyde fixed tissue].

    PubMed

    Zhan, Lan-fen; Liu, Ming-dong; Yan, You-yi; Ye, Yi; Wang, Wei; Wang, Zhi-hui; Zhao, Jun-hong; Liao, Lin-chuan

    2012-10-01

    To establish a method for the determination of strychnine and brucine in formaldehyde-fixed tissue by LC-MS/MS analysis. The samples were pretreated by solid phase extraction using SCX cartridges and separated on an SB-C18 column with a mobile phase of 0.1% formic acid : 0.1% formic acid-acetonitrile (75:25). An electrospray ionization (ESI) source was operated in positive ion mode, and multiple reaction monitoring (MRM) mode was applied. The external standard method was used for quantitation. Chromatographic separation of strychnine and brucine in formaldehyde-fixed renal and hepatic tissues was achieved. The standard curves were linear in the range of 0.002-2.0 microg/g for strychnine and brucine in formaldehyde-fixed tissues, with correlation coefficients greater than 0.996. The limits of detection (LOD) of strychnine and brucine in renal tissue were 0.06 ng/g and 0.03 ng/g, respectively; in hepatic tissue the LOD of both compounds was 0.3 ng/g. The extraction recovery rate was more than 74.5%, and the intra-day and inter-day precisions were both less than 8.2%. Strychnine and brucine can thus be sensitively determined in formaldehyde-fixed tissue by LC-MS/MS analysis, and the method can be applied in forensic toxicological analysis.

  5. Water quality modeling for urban reach of Yamuna river, India (1999-2009), using QUAL2Kw

    NASA Astrophysics Data System (ADS)

    Sharma, Deepshikha; Kansal, Arun; Pelletier, Greg

    2017-06-01

    The aim of the study was to characterize and understand the water quality of the river Yamuna in Delhi (India) prior to an efficient restoration plan. A combination of monitored data collection, mathematical modeling, and sensitivity and uncertainty analysis was carried out using QUAL2Kw, a river quality model. The model was applied to simulate DO, BOD, total coliform, and total nitrogen at four monitoring stations, namely Palla, Old Delhi Railway Bridge, Nizamuddin, and Okhla, for 10 years (October 1999-June 2009), excluding the monsoon seasons (July-September). The study period was divided into two parts: monthly average data from October 1999-June 2004 (45 months) were used to calibrate the model, and monthly average data from October 2005-June 2009 (45 months) were used to validate it. The R2 for CBODf and TN lay within the ranges of 0.53-0.75 and 0.68-0.83, respectively, showing that the model gave satisfactory results in terms of R2 for CBODf, TN, and TC. Sensitivity analysis showed that the DO, CBODf, TN, and TC predictions are highly sensitive to headwater flow and to point source flow and quality. Uncertainty analysis using Monte Carlo showed that the input data were simulated in accordance with the prevalent river conditions.

  6. Health economic comparison of SLIT allergen and SCIT allergoid immunotherapy in patients with seasonal grass-allergic rhinoconjunctivitis in Germany.

    PubMed

    Verheggen, Bram G; Westerhout, Kirsten Y; Schreder, Carl H; Augustin, Matthias

    2015-01-01

    Allergoids are chemically modified allergen extracts administered to reduce allergenicity and to maintain immunogenicity. Oralair® (the 5-grass tablet) is a sublingual native grass allergen tablet for pre- and co-seasonal treatment. Based on a literature review, meta-analysis, and cost-effectiveness analysis the relative effects and costs of the 5-grass tablet versus a mix of subcutaneous allergoid compounds for grass pollen allergic rhinoconjunctivitis were assessed. A Markov model with a time horizon of nine years was used to assess the costs and effects of three-year immunotherapy treatment. Relative efficacy expressed as standardized mean differences was estimated using an indirect comparison on symptom scores extracted from available clinical trials. The Rhinitis Symptom Utility Index (RSUI) was applied as a proxy to estimate utility values for symptom scores. Drug acquisition and other medical costs were derived from published sources as well as estimates for resource use, immunotherapy persistence, and occurrence of asthma. The analysis was executed from the German payer's perspective, which includes payments of the Statutory Health Insurance (SHI) and additional payments by insurants. Comprehensive deterministic and probabilistic sensitivity analyses and different scenarios were performed to test the uncertainty concerning the incremental model outcomes. The applied model predicted a cost-utility ratio of the 5-grass tablet versus a market mix of injectable allergoid products of € 12,593 per QALY in the base case analysis. Predicted incremental costs and QALYs were € 458 (95% confidence interval, CI: € 220; € 739) and 0.036 (95% CI: 0.002; 0.078), respectively. Compared to the allergoid mix the probability of the 5-grass tablet being the most cost-effective treatment option was predicted to be 76% at a willingness-to-pay threshold of € 20,000. The results were most sensitive to changes in efficacy estimates, duration of the pollen season, and immunotherapy persistence rates. This analysis suggests the sublingual native 5-grass tablet to be cost-effective relative to a mix of subcutaneous allergoid compounds. The robustness of these statements has been confirmed in extensive sensitivity and scenario analyses.
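
    Cost-utility analyses of this kind typically rest on a Markov cohort model; below is a deliberately simplified three-state sketch of how discounted costs and QALYs combine into an incremental cost-effectiveness ratio. All transition probabilities, costs, and utilities are invented, not taken from the study:

```python
# Three-state Markov cohort model producing an ICER (illustrative numbers).
import numpy as np

P_control = np.array([[0.85, 0.12, 0.03],   # rows: controlled/symptomatic/asthma
                      [0.25, 0.70, 0.05],
                      [0.00, 0.10, 0.90]])
P_treat = np.array([[0.92, 0.06, 0.02],     # treatment improves transitions
                    [0.35, 0.61, 0.04],
                    [0.00, 0.12, 0.88]])
utility = np.array([0.95, 0.80, 0.70])      # QALY weight per state
cost = np.array([150.0, 450.0, 900.0])      # annual cost per state, EUR

def run(P, drug_cost=0.0, horizon=9, r=0.03):
    state, qaly, spend = np.array([1.0, 0.0, 0.0]), 0.0, 0.0
    for year in range(horizon):
        d = (1 + r) ** -year                # discounting
        qaly += d * state @ utility
        spend += d * (state @ cost + (drug_cost if year < 3 else 0.0))
        state = state @ P                   # advance the cohort one cycle
    return qaly, spend

q1, c1 = run(P_treat, drug_cost=600.0)      # three-year immunotherapy course
q0, c0 = run(P_control)
print(f"ICER = {(c1 - c0) / (q1 - q0):,.0f} EUR per QALY")
```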

  7. Aeroelastic optimization methodology for viscous and turbulent flows

    NASA Astrophysics Data System (ADS)

    Barcelos Junior, Manuel Nascimento Dias

    2007-12-01

    In recent years, the development of faster computers and parallel processing allowed the application of high-fidelity analysis methods to the aeroelastic design of aircraft. However, these methods are restricted to the final design verification, mainly due to the computational cost involved in iterative design processes. Therefore, this work is concerned with the creation of a robust and efficient aeroelastic optimization methodology for inviscid, viscous and turbulent flows by using high-fidelity analysis and sensitivity analysis techniques. Most of the research in aeroelastic optimization, for practical reasons, treat the aeroelastic system as a quasi-static inviscid problem. In this work, as a first step toward the creation of a more complete aeroelastic optimization methodology for realistic problems, an analytical sensitivity computation technique was developed and tested for quasi-static aeroelastic viscous and turbulent flow configurations. Viscous and turbulent effects are included by using an averaged discretization of the Navier-Stokes equations, coupled with an eddy viscosity turbulence model. For quasi-static aeroelastic problems, the traditional staggered solution strategy has unsatisfactory performance when applied to cases where there is a strong fluid-structure coupling. Consequently, this work also proposes a solution methodology for aeroelastic and sensitivity analyses of quasi-static problems, which is based on the fixed point of an iterative nonlinear block Gauss-Seidel scheme. The methodology can also be interpreted as the solution of the Schur complement of the aeroelastic and sensitivity analyses linearized systems of equations. The methodologies developed in this work are tested and verified by using realistic aeroelastic systems.
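
    The proposed fixed-point strategy can be conveyed with a toy two-field system: alternately evaluate a surrogate fluid solve and a surrogate structure solve until the interface state stops changing. Both maps below are invented stand-ins for the discretized equations:

```python
# Nonlinear block Gauss-Seidel iteration to the aeroelastic fixed point.
def fluid_load(w):                 # surrogate fluid solve: load from displacement
    return 1.0 / (1.0 + w**2)

def structure_disp(u):             # surrogate structure solve: displacement from load
    return 0.8 * u

w = 0.0                            # interface displacement
for it in range(100):
    u = fluid_load(w)              # fluid solved with the structure frozen
    w_new = structure_disp(u)      # structure solved with the updated load
    if abs(w_new - w) < 1e-12:     # converged to the coupled fixed point
        break
    w = w_new
print(it, round(u, 6), round(w_new, 6))
```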

  8. The Relative Importance of the Vadose Zone in Multimedia Risk Assessment Modeling Applied at a National Scale: An Analysis of Benzene Using 3MRA

    NASA Astrophysics Data System (ADS)

    Babendreier, J. E.

    2002-05-01

    Evaluating uncertainty and parameter sensitivity in environmental models can be a difficult task, even for low-order, single-media constructs driven by a unique set of site-specific data. The challenge of examining ever more complex, integrated, higher-order models is a formidable one, particularly in regulatory settings applied on a national scale. Quantitative assessment of uncertainty and sensitivity within integrated, multimedia models that simulate hundreds of sites, spanning multiple geographical and ecological regions, will ultimately require a systematic, comparative approach coupled with sufficient computational power. The Multimedia, Multipathway, and Multireceptor Risk Assessment Model (3MRA) is an important code being developed by the United States Environmental Protection Agency for use in site-scale risk assessment (e.g. hazardous waste management facilities). The model currently entails over 700 variables, 185 of which are explicitly stochastic. The 3MRA can start with a chemical concentration in a waste management unit (WMU). It estimates the release and transport of the chemical throughout the environment, and predicts associated exposure and risk. The 3MRA simulates multimedia (air, water, soil, sediments), pollutant fate and transport, multipathway exposure routes (food ingestion, water ingestion, soil ingestion, air inhalation, etc.), multireceptor exposures (resident, gardener, farmer, fisher, ecological habitats and populations), and resulting risk (human cancer and non-cancer effects, ecological population and community effects). The 3MRA collates the output for an overall national risk assessment, offering a probabilistic strategy as a basis for regulatory decisions. To facilitate model execution of 3MRA for purposes of conducting uncertainty and sensitivity analysis, a PC-based supercomputer cluster was constructed. The design of SuperMUSE, a 125 GHz Windows-based supercomputer for Model Uncertainty and Sensitivity Evaluation, is described, along with the conceptual layout of an accompanying Java-based parallelization software toolset. Preliminary work is also reported for a scenario involving benzene disposal that describes the relative importance of the vadose zone in driving risk levels for ecological receptors and human health. Incorporating landfills, waste piles, aerated tanks, surface impoundments, and land application units, the site-based data used in the analysis included 201 national facilities representing 419 site-WMU combinations.

  9. Application of acetone acetals as water scavengers and derivatization agents prior to the gas chromatographic analysis of polar residual solvents in aqueous samples.

    PubMed

    van Boxtel, Niels; Wolfs, Kris; Van Schepdael, Ann; Adams, Erwin

    2015-12-18

    The sensitivity of gas chromatography (GC) combined with the full evaporation technique (FET) for the analysis of aqueous samples is limited by the maximum tolerable sample volume in a headspace vial. Using an acetone acetal as a water scavenger prior to FET-GC analysis proved to be a useful and versatile tool for the analysis of high-boiling analytes in aqueous samples. 2,2-Dimethoxypropane (DMP) was used in this case, yielding methanol and acetone as the reaction products with water. These solvents are relatively volatile and were easily removed by evaporation, enabling sample enrichment and leading to a 10-fold improvement in sensitivity compared to the standard 10 μL FET sample volume for a selection of typical high-boiling polar residual solvents in water. This could be improved even further if more sample is used. The method was applied for the determination of residual NMP in an aqueous solution of a cefotaxime analogue and proved to be considerably better than conventional static headspace (sHS) and the standard FET approach. The methodology was also applied to determine trace amounts of ethylene glycol (EG) in aqueous samples like contact lens fluids, where scavenging of the water avoids laborious extraction prior to derivatization. During these experiments it was revealed that DMP reacts quantitatively with EG to form 2,2-dimethyl-1,3-dioxolane (2,2-DD) under the proposed reaction conditions. The relatively high volatility (bp 93 °C) of 2,2-DD makes it possible to perform analysis of EG using the sHS methodology, making additional derivatization reactions superfluous. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Rectilinear accelerometer possesses self- calibration feature

    NASA Technical Reports Server (NTRS)

    Henderson, R. B.

    1966-01-01

    Rectilinear accelerometer operates from an ac source with a phase-sensitive ac voltage output proportional to the applied accelerations. The unit includes an independent circuit for self-test which provides a sensor output simulating an acceleration applied to the sensitive axis of the accelerometer.

  11. Ranking metrics in gene set enrichment analysis: do they matter?

    PubMed

    Zyla, Joanna; Marczyk, Michal; Weiner, January; Polanska, Joanna

    2017-05-12

    There exist many methods for describing the complex relation between changes of gene expression in molecular pathways or gene ontologies under different experimental conditions. Among them, Gene Set Enrichment Analysis seems to be one of the most commonly used (over 10,000 citations). An important parameter, which could affect the final result, is the choice of a metric for the ranking of genes. Applying a default ranking metric may lead to poor results. In this work 28 benchmark data sets were used to evaluate the sensitivity and false positive rate of gene set analysis for 16 different ranking metrics including new proposals. Furthermore, the robustness of the chosen methods to sample size was tested. Using k-means clustering algorithm a group of four metrics with the highest performance in terms of overall sensitivity, overall false positive rate and computational load was established i.e. absolute value of Moderated Welch Test statistic, Minimum Significant Difference, absolute value of Signal-To-Noise ratio and Baumgartner-Weiss-Schindler test statistic. In case of false positive rate estimation, all selected ranking metrics were robust with respect to sample size. In case of sensitivity, the absolute value of Moderated Welch Test statistic and absolute value of Signal-To-Noise ratio gave stable results, while Baumgartner-Weiss-Schindler and Minimum Significant Difference showed better results for larger sample size. Finally, the Gene Set Enrichment Analysis method with all tested ranking metrics was parallelised and implemented in MATLAB, and is available at https://github.com/ZAEDPolSl/MrGSEA . Choosing a ranking metric in Gene Set Enrichment Analysis has critical impact on results of pathway enrichment analysis. The absolute value of Moderated Welch Test has the best overall sensitivity and Minimum Significant Difference has the best overall specificity of gene set analysis. When the number of non-normally distributed genes is high, using Baumgartner-Weiss-Schindler test statistic gives better outcomes. Also, it finds more enriched pathways than other tested metrics, which may induce new biological discoveries.
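
    To make the role of the ranking metric concrete, here is a sketch that computes a signal-to-noise ranking and a GSEA-style weighted running enrichment score; the expression matrix and gene set are synthetic:

```python
# Signal-to-noise ranking metric and a weighted running enrichment score.
import numpy as np

rng = np.random.default_rng(0)
expr = rng.normal(size=(1000, 20))              # 1000 genes x 20 samples
group = np.array([0] * 10 + [1] * 10)           # two phenotypes
gene_set = rng.choice(1000, size=25, replace=False)

a, b = expr[:, group == 0], expr[:, group == 1]
s2n = (a.mean(axis=1) - b.mean(axis=1)) / (a.std(axis=1) + b.std(axis=1))

order = np.argsort(-np.abs(s2n))                # rank genes by |metric|
hits = np.isin(order, gene_set)                 # set membership along the ranking
w = np.abs(s2n[order])
running = np.cumsum(w * hits) / (w * hits).sum() - np.cumsum(~hits) / (~hits).sum()
print(round(running[np.argmax(np.abs(running))], 3))   # enrichment score
```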

  12. Effectiveness of urine fibronectin as a non-invasive diagnostic biomarker in bladder cancer patients: a systematic review and meta-analysis.

    PubMed

    Dong, Fan; Shen, Yifan; Xu, Tianyuan; Wang, Xianjin; Gao, Fengbin; Zhong, Shan; Chen, Shanwen; Shen, Zhoujun

    2018-03-21

    Previous studies indicated that the measurement of urine fibronectin (Fn) could be a potential diagnostic test for bladder cancer (BCa). We conducted this meta-analysis to fully assess the diagnostic value of urine Fn for BCa detection. A systematic literature search in PubMed, ISI Web of Science, EMBASE, Cochrane library, and CBM was carried out to identify eligible studies evaluating urine Fn in diagnosing BCa. Pooled sensitivity, specificity, and diagnostic odds ratio (DOR) with their 95% confidence intervals (CIs) were calculated, and summary receiver operating characteristic (SROC) curves were established. We applied the STATA 13.0, Meta-Disc 1.4, and RevMan 5.3 software to the meta-analysis. Eight separate studies with 744 bladder cancer patients were enrolled in this meta-analysis. The pooled sensitivity, specificity, and DOR were 0.80 (95%CI = 0.77-0.83), 0.79 (95%CI = 0.73-0.84), and 15.18 (95%CI = 10.07-22.87), respectively, and the area under the curve (AUC) of SROC was 0.83 (95%CI = 0.79-0.86). The diagnostic power of a combined method (urine Fn combined with urine cytology) was also evaluated, and its sensitivity and AUC were significantly higher (0.86 (95%CI = 0.82-0.90) and 0.89 (95%CI = 0.86-0.92), respectively). Meta-regression along with subgroup analysis based on various covariates revealed the potential sources of the heterogeneity and the detailed diagnostic value of each subgroup. Sensitivity analysis supported the robustness of the results. No threshold effect or publication bias was found in this meta-analysis. Urine Fn may become a promising non-invasive biomarker for bladder cancer with a relatively satisfactory diagnostic power. The combination of urine Fn with cytology could be an alternative option for detecting BCa in clinical practice. The potential value of urine Fn still needs to be validated in large, multi-center, and prospective studies.
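
    Pooling diagnostic odds ratios is usually done on the log scale with inverse-variance (Woolf) weights; a fixed-effect sketch with invented 2x2 counts rather than the eight studies analyzed here:

```python
# Fixed-effect pooling of log diagnostic odds ratios with Woolf weights.
import numpy as np

# Per study: true positives, false negatives, false positives, true negatives.
studies = np.array([[40, 10, 12, 45],
                    [55, 15,  9, 60],
                    [30,  8, 10, 35]], dtype=float)
tp, fn, fp, tn = studies.T

log_dor = np.log(tp * tn / (fp * fn))
w = 1.0 / (1 / tp + 1 / fn + 1 / fp + 1 / tn)   # inverse Woolf variances
pooled = (w * log_dor).sum() / w.sum()
se = np.sqrt(1.0 / w.sum())
lo, hi = np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)
print(f"pooled DOR = {np.exp(pooled):.1f} (95% CI {lo:.1f}-{hi:.1f})")
```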

  13. Near-infrared confocal micro-Raman spectroscopy combined with PCA-LDA multivariate analysis for detection of esophageal cancer

    NASA Astrophysics Data System (ADS)

    Chen, Long; Wang, Yue; Liu, Nenrong; Lin, Duo; Weng, Cuncheng; Zhang, Jixue; Zhu, Lihuan; Chen, Weisheng; Chen, Rong; Feng, Shangyuan

    2013-06-01

    The diagnostic capability of using tissue-intrinsic micro-Raman signals to obtain biochemical information from human esophageal tissue is presented in this paper. Near-infrared micro-Raman spectroscopy combined with multivariate analysis was applied for discrimination of esophageal cancer tissue from normal tissue samples. Micro-Raman spectroscopy measurements were performed on 54 esophageal cancer tissues and 55 normal tissues in the 400-1750 cm-1 range. The mean Raman spectra showed significant differences between the two groups. Tentative assignments of the Raman bands in the measured tissue spectra suggested some changes in protein structure, a decrease in the relative amount of lactose, and increases in the percentages of tryptophan, collagen and phenylalanine content in esophageal cancer tissue as compared to those of a normal subject. The diagnostic algorithms based on principal component analysis (PCA) and linear discriminant analysis (LDA) achieved a diagnostic sensitivity of 87.0% and specificity of 70.9% for separating cancer from normal esophageal tissue samples. The results demonstrated that near-infrared micro-Raman spectroscopy combined with PCA-LDA analysis could be an effective and sensitive tool for identification of esophageal cancer.

  14. Direct Analysis of Low-Volatile Molecular Marker Extract from Airborne Particulate Matter Using Sensitivity Correction Method

    PubMed Central

    Irei, Satoshi

    2016-01-01

    Molecular marker analysis of environmental samples often requires time-consuming preseparation steps. Here, analysis of low-volatile nonpolar molecular markers (5-6 ring polycyclic aromatic hydrocarbons or PAHs, hopanoids, and n-alkanes) without the preseparation procedure is presented. Analysis of artificial sample extracts was conducted directly by gas chromatography-mass spectrometry (GC-MS). After every sample injection, a standard mixture was also analyzed to correct for the variation in instrumental sensitivity caused by the unfavorable matrix contained in the extract. The method was further validated for the PAHs using the NIST standard reference materials (SRMs) and then applied to airborne particulate matter samples. Tests with the SRMs showed that overall our methodology was validated with an uncertainty of ~30%. The measurement results for airborne particulate matter (PM) filter samples showed a strong correlation between the PAHs, implying contributions from the same emission source. Analysis of size-segregated PM filter samples showed that the markers were concentrated in the PM smaller than 0.4 μm aerodynamic diameter. The observations were consistent with our expectation of their possible sources. Thus, the method was found to be useful for molecular marker studies. PMID:27127511

  15. Retrieval of complex χ(2) parts for quantitative analysis of sum-frequency generation intensity spectra

    PubMed Central

    Hofmann, Matthias J.; Koelsch, Patrick

    2015-01-01

    Vibrational sum-frequency generation (SFG) spectroscopy has become an established technique for in situ surface analysis. While spectral recording procedures and hardware have been optimized, unique data analysis routines have yet to be established. The SFG intensity is related to probing geometries and properties of the system under investigation such as the absolute square of the second-order susceptibility, |χ(2)|^2. A conventional SFG intensity measurement does not grant access to the complex parts of χ(2) unless further assumptions have been made. It is therefore difficult, sometimes impossible, to establish a unique fitting solution for SFG intensity spectra. Recently, interferometric phase-sensitive SFG or heterodyne detection methods have been introduced to measure real and imaginary parts of χ(2) experimentally. Here, we demonstrate that iterative phase-matching between complex spectra retrieved from maximum entropy method analysis and fitting of intensity SFG spectra (iMEMfit) leads to a unique solution for the complex parts of χ(2) and enables quantitative analysis of SFG intensity spectra. A comparison between complex parts retrieved by iMEMfit applied to intensity spectra and phase sensitive experimental data shows excellent agreement between the two methods. PMID:26450297

  16. Pressure-Sensitive Paint: Effect of Substrate

    PubMed Central

    Quinn, Mark Kenneth; Yang, Leichao; Kontis, Konstantinos

    2011-01-01

    There are numerous ways in which pressure-sensitive paint can be applied to a surface. The choice of substrate and application method can greatly affect the results obtained. The current study examines the different methods of applying pressure-sensitive paint to a surface. One polymer-based and two porous substrates (anodized aluminum and thin-layer chromatography plates) are investigated and compared for luminescent output, pressure sensitivity, temperature sensitivity and photodegradation. Two luminophores [tris-Bathophenanthroline Ruthenium(II) Perchlorate and Platinum-tetrakis (pentafluorophenyl) Porphyrin] will also be compared in all three of the substrates. The results show the applicability of the different substrates and luminophores to different testing environments. PMID:22247685

  17. The role of diagnostic laboratories in support of animal disease surveillance systems.

    PubMed

    Zepeda, C

    2007-01-01

    Diagnostic laboratories are an essential component of animal disease surveillance systems. To understand the occurrence of disease in populations, surveillance systems rely on random or targeted surveys using three approaches: clinical, serological and virological surveillance. Clinical surveillance is the basis for early detection of disease and is usually centered on the detection of syndromes and clinical findings requiring confirmation by diagnostic laboratories. Although most of the tests applied usually perform to an acceptable standard, several have not been properly validated in terms of their diagnostic sensitivity and specificity. Sensitivity and specificity estimates can vary according to local conditions and, ideally, should be determined by national laboratories where the tests are to be applied. The importance of sensitivity and specificity estimates in the design and interpretation of statistically based surveys and risk analysis is fundamental to establish appropriate disease control and prevention strategies. The World Organisation for Animal Health's (OIE) network of reference laboratories acts as centers of expertise for the diagnosis of OIE listed diseases and have a role in promoting the validation of OIE prescribed tests for international trade. This paper discusses the importance of the epidemiological evaluation of diagnostic tests and the role of the OIE Reference Laboratories and Collaborating Centres in this process.

  18. A binary search approach to whole-genome data analysis.

    PubMed

    Brodsky, Leonid; Kogan, Simon; Benjacob, Eshel; Nevo, Eviatar

    2010-09-28

    A sequence analysis-oriented, binary search-like algorithm was transformed into a sensitive and accurate analysis tool for processing whole-genome data. The advantage of the algorithm over previous methods is its ability to detect the margins of both short and long genome fragments enriched by up-regulated signals at equal accuracy. The score of an enriched genome fragment reflects the difference between the actual concentration of up-regulated signals in the fragment and the chromosome signal baseline. The "divide-and-conquer"-type algorithm detects a series of nonintersecting fragments of various lengths with locally optimal scores. The procedure is applied to detected fragments in a nested manner by recalculating the lower-than-baseline signals in the chromosome. The algorithm was applied to simulated whole-genome data, and its sensitivity/specificity were compared with those of several alternative algorithms. The algorithm was also tested with four biological tiling array datasets: Arabidopsis (i) expression and (ii) histone 3 lysine 27 trimethylation ChIP-on-chip datasets, and Saccharomyces cerevisiae (iii) spliced intron data and (iv) chromatin remodeling factor binding sites. The results demonstrate the power of the algorithm in identifying both short up-regulated fragments (such as exons and transcription factor binding sites) and long, even moderately up-regulated, zones at their precise genome margins. The algorithm generates an accurate whole-genome landscape that could be used for cross-comparison of signals across the same genome in evolutionary and general genomic studies.
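
    The divide-and-conquer detection of locally optimal, non-intersecting fragments can be sketched as a recursive maximum-scoring-segment search over a baseline-subtracted signal; the threshold and data are illustrative, and the published algorithm differs in its nested rescoring details:

```python
# Recursively extract non-intersecting maximum-scoring fragments.
import numpy as np

def best_fragment(score):
    """Maximum-sum contiguous fragment (Kadane): returns (start, end, total)."""
    best, cur_start, cur = (0, 0, score[0]), 0, 0.0
    for i, s in enumerate(score):
        cur += s
        if cur > best[2]:
            best = (cur_start, i, cur)
        if cur < 0:
            cur_start, cur = i + 1, 0.0
    return best

def detect(score, lo, hi, out, min_score=5.0):
    if hi - lo < 1:
        return
    i, j, total = best_fragment(score[lo:hi])
    if total < min_score:
        return
    out.append((lo + i, lo + j, total))
    detect(score, lo, lo + i, out, min_score)      # recurse left of the fragment
    detect(score, lo + j + 1, hi, out, min_score)  # recurse right of the fragment

rng = np.random.default_rng(0)
signal = rng.normal(-0.2, 1.0, 5000)    # baseline-subtracted whole-genome signal
signal[1200:1400] += 1.0                # one genuinely enriched fragment
frags = []
detect(signal, 0, len(signal), frags)
print(sorted(frags, key=lambda f: -f[2])[:3])   # top-scoring fragments
```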

  19. Mitochondrial sequence analysis for forensic identification using pyrosequencing technology.

    PubMed

    Andréasson, H; Asp, A; Alderborn, A; Gyllensten, U; Allen, M

    2002-01-01

    Over recent years, requests for mtDNA analysis in the field of forensic medicine have notably increased, and the results of such analyses have proved to be very useful in forensic cases where nuclear DNA analysis cannot be performed. Traditionally, mtDNA has been analyzed by DNA sequencing of the two hypervariable regions, HVI and HVII, in the D-loop. DNA sequence analysis using conventional Sanger sequencing is very robust but time-consuming and labor-intensive. By contrast, mtDNA analysis based on pyrosequencing technology provides fast and accurate results from the human mtDNA present in many types of evidence materials in forensic casework. The assay has been developed to determine polymorphic sites in the mitochondrial D-loop as well as the coding region, to further increase the discrimination power of mtDNA analysis. The pyrosequencing technology for analysis of mtDNA polymorphisms has been tested with regard to sensitivity, reproducibility, and success rate when applied to control samples and actual casework materials. The results show that the method is very accurate and sensitive; the results are easily interpreted and provide a high success rate on casework samples. The panel of pyrosequencing reactions for the mtDNA polymorphisms was chosen to provide optimal discrimination power in relation to the number of bases determined.

  20. Simultaneous determination of benznidazole and itraconazole using spectrophotometry applied to the analysis of mixture: A tool for quality control in the development of formulations

    NASA Astrophysics Data System (ADS)

    Pinho, Ludmila A. G.; Sá-Barreto, Lívia C. L.; Infante, Carlos M. C.; Cunha-Filho, Marcílio S. S.

    2016-04-01

    The aim of this work was the development of an analytical procedure using spectrophotometry for the simultaneous determination of benznidazole (BNZ) and itraconazole (ITZ) in a medicine used for the treatment of Chagas disease. To achieve this goal, the analysis of mixtures was performed by applying the Lambert-Beer law to the absorbances of BNZ and ITZ at wavelengths of 259 and 321 nm, respectively. Diverse tests were carried out for the development and validation of the method, which proved to be selective, robust, linear, and precise. The low limits of detection and quantification demonstrate its sensitivity for quantifying small amounts of analytes, enabling its application for various analytical purposes, such as dissolution tests and routine assays. In short, the quantification of BNZ and ITZ by analysis of mixtures proved to be an efficient and cost-effective alternative for the determination of these drugs in a pharmaceutical dosage form.
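
    The mixture analysis amounts to solving a small linear system: by the Lambert-Beer law, the absorbance at each wavelength is the sum of the two components' contributions, A(lambda) = eps_BNZ(lambda)*c_BNZ + eps_ITZ(lambda)*c_ITZ. A minimal sketch with invented calibration values (not the paper's data):

```python
# Hedged sketch: classical two-component Beer-Lambert mixture analysis.
# The absorptivity values below are placeholders, not the paper's data.
import numpy as np

# rows: wavelengths (259 nm, 321 nm); columns: components (BNZ, ITZ)
# entries are epsilon * path length (absorbance per unit concentration)
E = np.array([[0.052, 0.008],
              [0.004, 0.061]])

A = np.array([0.48, 0.35])      # measured absorbances of the mixture

c = np.linalg.solve(E, A)       # concentrations, in the calibration units
print(f"BNZ = {c[0]:.2f}, ITZ = {c[1]:.2f}")
```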

  1. Population analysis of the cingulum bundle using the tubular surface model for schizophrenia detection

    NASA Astrophysics Data System (ADS)

    Mohan, Vandana; Sundaramoorthi, Ganesh; Kubicki, Marek; Terry, Douglas; Tannenbaum, Allen

    2010-03-01

    We propose a novel framework for population analysis of DW-MRI data using the Tubular Surface Model. We focus on the Cingulum Bundle (CB), a major tract of the limbic system and the main connection of the cingulate gyrus, which has been associated with several aspects of schizophrenia symptomatology. The Tubular Surface Model represents a tubular surface as a center-line with an associated radius function. It provides a natural way to sample statistics along the length of the fiber bundle and reduces the registration of fiber bundle surfaces to that of 4D curves. We apply our framework to a population of 20 subjects (10 normal, 10 schizophrenic) and obtain excellent results with neural-network-based classification (90% sensitivity, 95% specificity) as well as unsupervised clustering (k-means). Further, we apply statistical analysis to the feature data and characterize the discrimination ability of local regions of the CB, as a step towards localizing the CB regions most relevant to schizophrenia.

  2. Thermal stress analysis of symmetric shells subjected to asymmetric thermal loads

    NASA Technical Reports Server (NTRS)

    Negaard, G. R.

    1980-01-01

    The performance of the NASTRAN level 16.0 axisymmetric solid elements when subjected to both symmetric and asymmetric thermal loading was investigated. A ceramic radome was modeled using both the CTRAPRG and the CTRAPAX elements. The thermal loading applied contained severe gradients through the thickness of the shell. Both elements were found to be more sensitive to the effect of the thermal gradient than to the aspect ratio of the elements. Analysis using the CTRAPAX element predicted much higher thermal stresses than the analysis using the CTRAPRG element, prompting studies of models for which theoretical solutions could be calculated. It was found that the CTRAPRG element solutions were satisfactory, but that the CTRAPAX element was very geometry dependent. This element produced erroneous results if the geometry was allowed to vary from a rectangular cross-section. The most satisfactory solution found for this type of problem was to model a small segment of a symmetric structure with isoparametric solid elements and apply the cyclic symmetry option in NASTRAN.

  3. Simultaneous determination of benznidazole and itraconazole using spectrophotometry applied to the analysis of mixture: A tool for quality control in the development of formulations.

    PubMed

    Pinho, Ludmila A G; Sá-Barreto, Lívia C L; Infante, Carlos M C; Cunha-Filho, Marcílio S S

    2016-04-15

    The aim of this work was the development of an analytical procedure using spectrophotometry for the simultaneous determination of benznidazole (BNZ) and itraconazole (ITZ) in a medicine used for the treatment of Chagas disease. To achieve this goal, the analysis of mixtures was performed by applying the Lambert-Beer law to the absorbances of BNZ and ITZ at wavelengths of 259 and 321 nm, respectively. Diverse tests were carried out for the development and validation of the method, which proved to be selective, robust, linear, and precise. The low limits of detection and quantification demonstrate its sensitivity for quantifying small amounts of analytes, enabling its application for various analytical purposes, such as dissolution tests and routine assays. In short, the quantification of BNZ and ITZ by analysis of mixtures proved to be an efficient and cost-effective alternative for the determination of these drugs in a pharmaceutical dosage form. Copyright © 2016. Published by Elsevier B.V.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ogura, Toshihiko, E-mail: t-ogura@aist.go.jp

    Highlights: • We developed a high-sensitivity frequency transmission electric-field (FTE) system. • The output signal was highly enhanced by applying voltage to a metal layer on SiN. • The spatial resolution of the new FTE method is 41 nm. • The new FTE system enables observation of intact bacteria and viruses in water. - Abstract: The high-resolution structural analysis of biological specimens by scanning electron microscopy (SEM) presents several advantages. Until now, wet bacterial specimens have been examined using atmospheric sample holders. However, images of unstained specimens in water obtained using these holders exhibit very poor contrast and heavy radiation damage. Recently, we developed the frequency transmission electric-field (FTE) method, which facilitates the SEM observation of biological specimens in water without radiation damage. However, the original signal detection system had low sensitivity: a high electron beam (EB) current was required to generate clear images, which reduced spatial resolution and induced thermal damage in the samples. Here, a high-sensitivity detection system is developed for the FTE method, which enhances the output signal amplitude a hundredfold. The detection signal was strongly enhanced when voltage was applied to the metal layer on the silicon nitride thin film. This enhancement reduced the EB current and improved the spatial resolution as well as the signal-to-noise ratio. The spatial resolution of the high-sensitivity FTE system is 41 nm, considerably better than that of the previous FTE system. The new FTE system can easily be utilised to examine various unstained biological specimens in water, such as living bacteria and viruses.

  5. Sensitivity of decomposition rates of soil organic matter with respect to simultaneous changes in temperature and moisture

    NASA Astrophysics Data System (ADS)

    Sierra, Carlos A.; Trumbore, Susan E.; Davidson, Eric A.; Vicca, Sara; Janssens, I.

    2015-03-01

    The sensitivity of soil organic matter decomposition to global environmental change is a topic of prominent relevance for the global carbon cycle. Decomposition depends on multiple factors that are being altered simultaneously as a result of global environmental change; therefore, it is important to study the sensitivity of the rates of soil organic matter decomposition with respect to multiple and interacting drivers. In this manuscript, we present an analysis of the potential response of decomposition rates to simultaneous changes in temperature and moisture. To address this problem, we first present a theoretical framework to study the sensitivity of soil organic matter decomposition when multiple driving factors change simultaneously. We then apply this framework to models and data at different levels of abstraction: (1) to a mechanistic model that addresses the limitation of enzyme activity by simultaneous effects of temperature and soil water content, the latter controlling substrate supply and oxygen concentration for microbial activity; (2) to different mathematical functions used to represent temperature and moisture effects on decomposition in biogeochemical models. To contrast model predictions at these two levels of organization, we compiled different data sets of observed responses in field and laboratory studies. Then we applied our conceptual framework to: (3) observations of heterotrophic respiration at the ecosystem level; (4) laboratory experiments looking at the response of heterotrophic respiration to independent changes in moisture and temperature; and (5) ecosystem-level experiments manipulating soil temperature and water content simultaneously.
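
    The core of such a framework can be illustrated numerically. Below is a minimal sketch, not the authors' model: it takes a generic separable rate k(T, W) with an Arrhenius temperature response and a parabolic moisture response (both invented stand-ins) and compares the first-order sensitivity prediction for a joint temperature-moisture perturbation with the actual rate change.

```python
# Hedged illustration (not the authors' model): local sensitivity of a
# decomposition rate k(T, W) = f(T) * g(W) to simultaneous changes in
# temperature T (K) and moisture W (fraction of saturation).
import numpy as np

R, Ea = 8.314, 60e3                      # gas constant, activation energy (J/mol)

def k(T, W):
    f = np.exp(-Ea / (R * T))            # Arrhenius temperature response
    g = 4.0 * W * (1.0 - W)              # generic moisture optimum at W = 0.5
    return f * g

T0, W0, h = 288.0, 0.35, 1e-4
dk_dT = (k(T0 + h, W0) - k(T0 - h, W0)) / (2 * h)
dk_dW = (k(T0, W0 + h) - k(T0, W0 - h)) / (2 * h)

# first-order relative response to a joint perturbation (dT, dW)
dT, dW = 2.0, 0.05
rel = (dk_dT * dT + dk_dW * dW) / k(T0, W0)
print(f"predicted relative change in k: {rel:+.1%}")
print(f"actual:                         {(k(T0+dT, W0+dW)/k(T0, W0) - 1):+.1%}")
```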

  6. Simultaneous Aerodynamic and Structural Design Optimization (SASDO) for a 3-D Wing

    NASA Technical Reports Server (NTRS)

    Gumbert, Clyde R.; Hou, Gene J.-W.; Newman, Perry A.

    2001-01-01

    The formulation and implementation of an optimization method called Simultaneous Aerodynamic and Structural Design Optimization (SASDO) is shown as an extension of the Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) method. It is extended by the inclusion of structure element sizing parameters as design variables and Finite Element Method (FEM) analysis responses as constraints. The method aims to reduce the computational expense incurred in performing shape and sizing optimization using state-of-the-art Computational Fluid Dynamics (CFD) flow analysis, FEM structural analysis and sensitivity analysis tools. SASDO is applied to a simple, isolated, 3-D wing in inviscid flow. Results show that the method finds the same local optimum as a conventional optimization method, with some reduction in the computational cost and without significant modifications to the analysis tools.

  7. A conceptualisation framework for building consensus on environmental sensitivity.

    PubMed

    González Del Campo, Ainhoa

    2017-09-15

    Examination of the intrinsic attributes of a system that render it more or less sensitive to potential stressors provides further insight into the baseline environment. In impact assessment, sensitivity of environmental receptors can be conceptualised on the basis of their: a) quality status according to statutory indicators and associated thresholds or targets; b) statutory protection; or c) inherent risk. Where none of these considerations are pertinent, subjective value judgments can be applied to determine sensitivity. This pragmatic conceptual framework formed the basis of a stakeholder consultation process for harmonising degrees of sensitivity of a number of environmental criteria. Harmonisation was sought to facilitate their comparative and combined analysis. Overall, full or wide agreement was reached on relative sensitivity values for the large majority of the reviewed criteria. Consensus was easier to reach on some themes (e.g. biodiversity, water and cultural heritage) than others (e.g. population and soils). As anticipated, existing statutory measures shaped the outcomes but, ultimately, knowledge-based values prevailed. The agreed relative sensitivities warrant extensive consultation but the conceptual framework provides a basis for increasing stakeholder consensus and objectivity of baseline assessments. This, in turn, can contribute to improving the evidence-base for characterising the significance of potential impacts. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Intracavity optogalvanic spectroscopy. An analytical technique for 14C analysis with subattomole sensitivity.

    PubMed

    Murnick, Daniel E; Dogru, Ozgur; Ilkmen, Erhan

    2008-07-01

    We show a new ultrasensitive laser-based analytical technique, intracavity optogalvanic spectroscopy, allowing extremely high sensitivity for detection of (14)C-labeled carbon dioxide. Capable of replacing large accelerator mass spectrometers, the technique quantifies attomoles of (14)C in submicrogram samples. Based on the specificity of narrow laser resonances coupled with the sensitivity provided by standing waves in an optical cavity and detection via impedance variations, limits of detection near 10(-15) (14)C/(12)C ratios are obtained. Using a 15-W (14)CO2 laser, a linear calibration with samples from 10(-15) to >1.5 x 10(-12) in (14)C/(12)C ratios, as determined by accelerator mass spectrometry, is demonstrated. Possible applications include microdosing studies in drug development, individualized subtherapeutic tests of drug metabolism, carbon dating, and real-time monitoring of atmospheric radiocarbon. The method can also be applied to the detection of other trace entities.

  9. Accuracy of screening women at familial risk of breast cancer without a known gene mutation: Individual patient data meta-analysis.

    PubMed

    Phi, Xuan-Anh; Houssami, Nehmat; Hooning, Maartje J; Riedl, Christopher C; Leach, Martin O; Sardanelli, Francesco; Warner, Ellen; Trop, Isabelle; Saadatmand, Sepideh; Tilanus-Linthorst, Madeleine M A; Helbich, Thomas H; van den Heuvel, Edwin R; de Koning, Harry J; Obdeijn, Inge-Marie; de Bock, Geertruida H

    2017-11-01

    Women with a strong family history of breast cancer (BC) and without a known gene mutation have an increased risk of developing BC. We aimed to investigate the accuracy of screening using annual mammography with or without magnetic resonance imaging (MRI) for these women outside the general population screening program. An individual patient data (IPD) meta-analysis was conducted using IPD from six prospective screening trials that had included women at increased risk for BC: only women with a strong familial risk for BC and without a known gene mutation were included in this analysis. A generalised linear mixed model was applied to estimate and compare screening accuracy (sensitivity, specificity and predictive values) for annual mammography with or without MRI. There were 2226 women (median age: 41 years, interquartile range 35-47) with 7478 woman-years of follow-up and a BC rate of 12 (95% confidence interval 9.3-14) per 1000 woman-years. Mammography screening had a sensitivity of 55% (standard error of the mean [SE] 7.0) and a specificity of 94% (SE 1.3). Screening with MRI alone had a sensitivity of 89% (SE 4.6) and a specificity of 83% (SE 2.8). Adding MRI to mammography increased sensitivity to 98% (SE 1.8, P < 0.01 compared with mammography alone) but lowered specificity to 79% (SE 2.7, P < 0.01 compared with mammography alone). In this population of women with strong familial BC risk but without a known gene mutation, in whom BC incidence was high both before and after age 50, adding MRI to mammography substantially increased screening sensitivity but also decreased its specificity. Copyright © 2017 Elsevier Ltd. All rights reserved.
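
    The accuracy measures reported here all derive from a 2x2 table of test results against disease status. A minimal sketch with illustrative counts (not the meta-analysis data):

```python
# Hedged sketch: screening accuracy measures from a 2x2 table.
# Counts are illustrative, not taken from the meta-analysis.
def accuracy_measures(tp, fn, fp, tn):
    sens = tp / (tp + fn)        # P(test positive | disease)
    spec = tn / (tn + fp)        # P(test negative | no disease)
    ppv  = tp / (tp + fp)        # P(disease | test positive)
    npv  = tn / (tn + fn)        # P(no disease | test negative)
    return sens, spec, ppv, npv

# a mammography-alone-like pattern: moderate sensitivity, high specificity
print([round(x, 3) for x in accuracy_measures(tp=49, fn=40, fp=445, tn=6944)])
```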

  10. Analysis of the sensitivity of soils to the leaching of agricultural pesticides in Ohio

    USGS Publications Warehouse

    Schalk, C.W.

    1998-01-01

    Pesticides have not been found frequently in the ground waters of Ohio even though large amounts of agricultural pesticides are applied to fields in Ohio every year. State regulators, including representatives from the Ohio Environmental Protection Agency and the Departments of Agriculture, Health, and Natural Resources, are striving to keep the presence of pesticides in ground water to a minimum. A proposed pesticide management plan for the State aims to protect Ohio's ground water by assessing pesticide-leaching potential using geographic information system (GIS) technology and invoking a monitoring plan that targets the aquifers deemed most likely to be vulnerable to pesticide leaching. The U.S. Geological Survey, in cooperation with the Ohio Department of Agriculture, assessed the sensitivity of mapped soil units in Ohio to pesticide leaching. A soils database (STATSGO) compiled by the U.S. Department of Agriculture was used iteratively to classify soil units as being of high to low sensitivity on the basis of soil permeability, clay content, and organic-matter content. Although this analysis did not target aquifers directly, the results can be used as a first estimate of the areas most likely to be subject to pesticide contamination from normal agricultural practices. High-sensitivity soil units were found in lakefront areas and former lakefront beach ridges, buried valleys in several river basins, and parts of central and south-central Ohio. Medium-high-sensitivity soil units were found in other river basins, along Lake Erie in north-central Ohio, and in many of the upland areas of the Muskingum River Basin. Low-sensitivity map units dominated the northwestern quadrant of Ohio.

  11. Biased and less sensitive: A gamified approach to delay discounting in heroin addiction.

    PubMed

    Scherbaum, Stefan; Haber, Paul; Morley, Kirsten; Underhill, Dylan; Moustafa, Ahmed A

    2018-03-01

    People with addiction will continue to use drugs despite adverse long-term consequences. We hypothesized (a) that this deficit persists during substitution treatment, and (b) that this deficit might be related not only to a desire for immediate gratification, but also to a lower sensitivity for optimal decision making. We investigated how individuals with a history of heroin addiction perform (compared to healthy controls) in a virtual reality delay discounting task. This novel task adds to established measures of delay discounting an assessment of the optimality of decisions, especially the extent to which decisions are influenced by a general choice bias and/or a reduced sensitivity to the relative value of the two alternative rewards. We used this measure of optimality to apply diffusion model analysis to the behavioral data to analyze the interaction between decision optimality and reaction time. The addiction group consisted of 25 patients with a history of heroin dependency currently participating in a methadone maintenance program; the control group consisted of 25 healthy participants with no history of substance abuse, who were recruited from the Western Sydney community. The patient group demonstrated greater levels of delay discounting compared to the control group, which is broadly in line with previous observations. Diffusion model analysis yielded a reduced sensitivity for the optimality of a decision in the patient group compared to the control group. This reduced sensitivity was reflected in lower rates of information accumulation and higher decision criteria. Increased discounting in individuals with heroin addiction is related not only to a generally increased bias to immediate gratification, but also to reduced sensitivity for the optimality of a decision. This finding is in line with other findings about the sensitivity of addicts in distinguishing optimal from nonoptimal choice options.
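
    A toy simulation can make the diffusion-model vocabulary concrete. The sketch below simulates rather than fits the model: it shows how a lower drift rate (slower information accumulation) together with a higher decision boundary (stricter criterion) lengthens response times; all parameter values are invented for illustration.

```python
# Hedged sketch: simulating (not fitting) a drift-diffusion decision
# process. Lower drift = slower evidence accumulation; higher boundary =
# stricter decision criterion. All parameter values are invented.
import numpy as np

def simulate_ddm(drift, boundary, n_trials=500, dt=1e-3, noise=1.0, seed=0):
    rng = np.random.default_rng(seed)
    rts, hits = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:            # accumulate evidence to a bound
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t)
        hits.append(x > 0)                  # upper bound = optimal choice
    return np.mean(rts), np.mean(hits)

for label, v, a in [("control-like", 1.2, 1.0), ("patient-like", 0.6, 1.4)]:
    rt, p = simulate_ddm(v, a)
    print(f"{label}: mean RT = {rt:.2f} s, P(optimal choice) = {p:.2f}")
```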

  12. Analysis of the mechanics and deformation characteristics of optical fiber acceleration sensor

    NASA Astrophysics Data System (ADS)

    Liu, Zong-kai; Bo, Yu-ming; Zhou, Ben-mou; Wang, Jun; Huang, Ya-dong

    2016-10-01

    The optical fiber sensor holds many advantages, such as small volume, light weight, high sensitivity, and strong anti-interference ability. It can be applied to oil exploration to improve exploration efficiency, since the underground petroleum distribution can be obtained by detecting and analyzing the echo signals. In this paper, a cantilever beam optical fiber sensor is investigated. Specifically, the finite element method is applied to analyze how the elongation of the optical fiber rail slot on the surface of the PC material fiber winding plate varies with time and applied load under the action of a sinusoidal force. The analysis results show that, when the upper and lower mass blocks are under the action of a sinusoidal force, the cantilever beam optical fiber sensor structure deforms essentially in synchrony with the force. The optical fiber elongation has an approximately linear relationship with the sinusoidal force within the time ranges of 0.2-0.4 and 0.6-0.8, which is beneficial for subsequent signal acquisition and data processing.

  13. Nonlinear single-spin spectrum analyzer.

    PubMed

    Kotler, Shlomi; Akerman, Nitzan; Glickman, Yinnon; Ozeri, Roee

    2013-03-15

    Qubits have been used as linear spectrum analyzers of their environments. Here we solve the problem of nonlinear spectral analysis, required for discrete noise induced by a strongly coupled environment. Our nonperturbative analytical model shows a nonlinear signal dependence on noise power, resulting in a spectral resolution beyond the Fourier limit as well as frequency mixing. We develop a noise characterization scheme adapted to this nonlinearity. We then apply it using a single trapped ion as a sensitive probe of strong, non-Gaussian, discrete magnetic field noise. Finally, we experimentally compare the performance of equidistant and Uhrig modulation schemes for spectral analysis.

  14. Integrated Data Collection and Analysis Project: Friction Correlation Study

    DTIC Science & Technology

    2015-08-01

    methods authorized in AOP-7 include Pendulum Friction, Rotary Friction, Sliding Friction (ABL), BAM Friction and Steel/Fiber Shoe methods. ... sensitivity can be obtained by Pendulum Friction, Rotary Friction, Sliding Friction (such as the ABL), BAM Friction and Steel/Fiber Shoe methods. ... A variable compressive force is applied downward through the wheel hydraulically (50-1995 psi). The 5 kg pendulum impacts (8 ft/sec is the ...

  15. Mixture models in diagnostic meta-analyses--clustering summary receiver operating characteristic curves accounted for heterogeneity and correlation.

    PubMed

    Schlattmann, Peter; Verba, Maryna; Dewey, Marc; Walther, Mario

    2015-01-01

    Bivariate linear and generalized linear random-effects models are frequently used to perform diagnostic meta-analyses. The objective of this article was to apply a finite mixture model of bivariate normal distributions that can be used for the construction of componentwise summary receiver operating characteristic (sROC) curves. Bivariate linear random effects and a bivariate finite mixture model are used. The latter model is developed as an extension of a univariate finite mixture model. Two examples, computed tomography (CT) angiography for ruling out coronary artery disease and procalcitonin as a diagnostic marker for sepsis, are used to estimate mean sensitivity and mean specificity and to construct sROC curves. The suggested approach of a bivariate finite mixture model identifies two latent classes of diagnostic accuracy for the CT angiography example. Both classes show high sensitivity but mainly two different levels of specificity. For the procalcitonin example, the approach identifies three latent classes of diagnostic accuracy. Here, sensitivities and specificities differ considerably, such that sensitivity increases with decreasing specificity. Additionally, the model is used to construct componentwise sROC curves and to classify individual studies. The proposed method offers an alternative approach to modeling between-study heterogeneity in a diagnostic meta-analysis. Furthermore, it is possible to construct sROC curves even if a positive correlation between sensitivity and specificity is present. Copyright © 2015 Elsevier Inc. All rights reserved.
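
    The clustering idea can be sketched with an off-the-shelf Gaussian mixture on study-level accuracy pairs. Below, sklearn's GaussianMixture stands in for the paper's purpose-built bivariate finite mixture model, and the data are simulated to mimic a two-class pattern like the CT angiography example:

```python
# Hedged sketch: cluster study-level accuracy pairs with a bivariate
# normal mixture. sklearn's GaussianMixture is a stand-in for the paper's
# model; the data below are simulated, not the CT/procalcitonin datasets.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
logit = lambda p: np.log(p / (1 - p))
expit = lambda x: 1 / (1 + np.exp(-x))

# two latent classes: both sensitive, but two different specificity levels
class1 = rng.multivariate_normal([logit(.90), logit(.95)], 0.05*np.eye(2), 20)
class2 = rng.multivariate_normal([logit(.88), logit(.70)], 0.05*np.eye(2), 20)
X = np.vstack([class1, class2])        # columns: logit(sens), logit(spec)

gm = GaussianMixture(n_components=2, random_state=0).fit(X)
for w, m in zip(gm.weights_, gm.means_):
    print(f"class weight {w:.2f}: sens={expit(m[0]):.2f}, spec={expit(m[1]):.2f}")
```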

  16. Evaluation and construction of diagnostic criteria for inclusion body myositis

    PubMed Central

    Mammen, Andrew L.; Amato, Anthony A.; Weiss, Michael D.; Needham, Merrilee

    2014-01-01

    Objective: To use patient data to evaluate and construct diagnostic criteria for inclusion body myositis (IBM), a progressive disease of skeletal muscle. Methods: The literature was reviewed to identify all previously proposed IBM diagnostic criteria. These criteria were applied through medical records review to 200 patients diagnosed as having IBM and 171 patients diagnosed as having a muscle disease other than IBM by neuromuscular specialists at 2 institutions, and to a validating set of 66 additional patients with IBM from 2 other institutions. Machine learning techniques were used for unbiased construction of diagnostic criteria. Results: Twenty-four previously proposed IBM diagnostic categories were identified. Twelve categories all performed with high (≥97%) specificity but varied substantially in their sensitivities (11%–84%). The best performing category was European Neuromuscular Centre 2013 probable (sensitivity of 84%). Specialized pathologic features and newly introduced strength criteria (comparative knee extension/hip flexion strength) performed poorly. Unbiased data-directed analysis of 20 features in 371 patients resulted in construction of higher-performing data-derived diagnostic criteria (90% sensitivity and 96% specificity). Conclusions: Published expert consensus–derived IBM diagnostic categories have uniformly high specificity but wide-ranging sensitivities. High-performing IBM diagnostic category criteria can be developed directly from principled unbiased analysis of patient data. Classification of evidence: This study provides Class II evidence that published expert consensus–derived IBM diagnostic categories accurately distinguish IBM from other muscle disease with high specificity but wide-ranging sensitivities. PMID:24975859
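
    The unbiased, data-directed construction of criteria can be illustrated with a shallow decision tree, one common machine learning route to interpretable diagnostic rules. The sketch below runs on synthetic features and labels; it does not reproduce the study's feature set or its actual learning method:

```python
# Hedged sketch of data-derived diagnostic criteria: a shallow decision
# tree learned from patient features. Features and data are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
n = 371
# illustrative binary features (hypothetical, for demonstration only)
X = rng.integers(0, 2, size=(n, 3))
# synthetic rule with noise standing in for the clinical diagnosis
y = ((X[:, 0] & X[:, 1]) | (rng.random(n) < 0.05)).astype(int)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=[
    "finger_flexor_weakness", "quad_weakness", "age_over_50"]))
```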

  17. Evidence in obese children: contribution of hyperlipidemia, obesity-inflammation, and insulin sensitivity.

    PubMed

    Chang, Chi-Jen; Jian, Deng-Yuan; Lin, Ming-Wei; Zhao, Jun-Zhi; Ho, Low-Tone; Juan, Chi-Chang

    2015-01-01

    Evidence shows a high incidence of insulin resistance, inflammation and dyslipidemia in adult obesity. The aim of this study was to assess the relevance of inflammatory markers, circulating lipids, and insulin sensitivity in overweight/obese children. We enrolled 45 male children (aged 6 to 13 years, lean control = 16, obese = 19, overweight = 10) in this study. The plasma total cholesterol, HDL cholesterol, triglyceride, glucose and insulin levels, the circulating levels of inflammatory factors, such as TNF-α, IL-6, and MCP-1, and the high-sensitivity CRP level were determined using quantitative colorimetric sandwich ELISA kits. Compared with the lean control subjects, the obese subjects had obvious insulin resistance, abnormal lipid profiles, and low-grade inflammation. The overweight subjects only exhibited significant insulin resistance and low-grade inflammation. Both TNF-α and leptin levels were higher in the overweight/obese subjects. A concurrent correlation analysis showed that body mass index (BMI) percentile and fasting insulin were positively correlated with insulin resistance, lipid profiles, and inflammatory markers but negatively correlated with adiponectin. A factor analysis identified three domains that explained 74.08% of the total variance among the obese children (factor 1: lipid, 46.05%; factor 2: obesity-inflammation, 15.38%; factor 3: insulin sensitivity domains, 12.65%). Our findings suggest that lipid, obesity-inflammation, and insulin sensitivity domains predominantly exist among obese children. These factors might be applied to predict the outcomes of cardiovascular diseases in the future.

  18. Shock load analysis of rotor for rolling element bearings and gas foil bearings: A comparative study

    NASA Astrophysics Data System (ADS)

    Bhore, Skylab Paulas

    2018-04-01

    In this paper, a comparative study of the shock load response of a rotor supported by rolling element bearings and by gas foil journal bearings is presented. The rotor bearing system is modeled using the finite element method, with Timoshenko beam elements having 4 degrees of freedom at each node. The shock load is represented by a half-sine pulse applied to the base of the rotor bearing system. The stiffness and damping coefficients of the bearings are incorporated in the model. The generalized equation of motion of the rotor bearing system is solved by the Newmark beta method, and the responses of the rotor at the bearing positions are predicted. It is observed that the responses are sensitive to the direction of the applied excitation and to its magnitude and pulse duration. The response amplitudes of the rotor supported on gas foil bearings are significantly lower than those of the rotor on rolling element bearings.

  19. Nuclear quadrupole resonance lineshape analysis for different motional models: Stochastic Liouville approach

    NASA Astrophysics Data System (ADS)

    Kruk, D.; Earle, K. A.; Mielczarek, A.; Kubica, A.; Milewska, A.; Moscicki, J.

    2011-12-01

    A general theory of lineshapes in nuclear quadrupole resonance (NQR), based on the stochastic Liouville equation, is presented. The description is valid for arbitrary motional conditions (particularly beyond the valid range of perturbation approaches) and interaction strengths. It can be applied to the computation of NQR spectra for any spin quantum number and for any applied magnetic field. The treatment presented here is an adaptation of the "Swedish slow motion theory," [T. Nilsson and J. Kowalewski, J. Magn. Reson. 146, 345 (2000), 10.1006/jmre.2000.2125] originally formulated for paramagnetic systems, to NQR spectral analysis. The description is formulated for simple (Brownian) diffusion, free diffusion, and jump diffusion models. The two latter models account for molecular cooperativity effects in dense systems (such as liquids of high viscosity or molecular glasses). The sensitivity of NQR slow motion spectra to the mechanism of the motional processes modulating the nuclear quadrupole interaction is discussed.

  20. Temporal Fourier analysis applied to equilibrium radionuclide cineangiography. Importance in the study of global and regional left ventricular wall motion.

    PubMed

    Cardot, J C; Berthout, P; Verdenet, J; Bidet, A; Faivre, R; Bassand, J P; Bidet, R; Maurat, J P

    1982-01-01

    Regional and global left ventricular wall motion was assessed in 120 patients using radionuclide cineangiography (RCA) and contrast angiography. Functional imaging procedures based on a temporal Fourier analysis of dynamic image sequences were applied to the study of cardiac contractility. Two images were constructed by taking the phase and amplitude values of the first harmonic in the Fourier transform for each pixel. These two images aided in determining the perimeter of the left ventricle in order to calculate the global ejection fraction. Regional left ventricular wall motion was studied by analyzing the phase value and by examining the distribution histogram of these values. The accuracy of the global ejection fraction calculation was improved by the Fourier technique. This technique increased the sensitivity of RCA for detecting segmental abnormalities, especially in the left anterior oblique (LAO) view.
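
    The pixel-wise first-harmonic computation lends itself to a compact sketch. The following is a hedged illustration on synthetic frames, not radionuclide data: it extracts an amplitude image and a phase image from the first temporal harmonic of a gated image sequence, as the abstract describes.

```python
# Hedged sketch: per-pixel amplitude and phase of the first temporal
# harmonic, as in functional imaging of a gated cine sequence.
# The frames here are synthetic, not radionuclide images.
import numpy as np

frames = 16
t = np.arange(frames)
# synthetic 8x8 "ventricle": each pixel beats once per cycle with its
# own amplitude and a small regional phase delay
amp_true = np.random.default_rng(3).uniform(0.5, 1.0, (8, 8))
phase_true = np.linspace(0, 0.6, 64).reshape(8, 8)
cine = amp_true[..., None] * np.cos(2*np.pi*t/frames - phase_true[..., None])

F = np.fft.fft(cine, axis=-1)        # FFT along the time axis
first = F[..., 1]                    # first harmonic: one cycle per sequence
amplitude = 2 * np.abs(first) / frames
phase = np.angle(first)

print(amplitude.round(2))            # contraction amplitude image
print(phase.round(2))                # timing (phase) image
```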

  1. [Determination of aluminum in sediments by atomic absorption spectrophotometer without FIA spectrophotometric analysis].

    PubMed

    Zhao, Zhen-yi; Han, Guang-xi; Song, Xi-ming; Luo, Zhi-xiong

    2008-06-01

    To develop a new determination method, we built a new flow injection analyzer coupled to an atomic absorption spectrophotometer, using the latter without a flame in place of a visible spectrophotometer, and studied the appropriate conditions for the determination of aluminum in sediments, thereby establishing a new analytical test technique. Three peak and two valley absorption values (A1, A2, A3, A4 and A5) can be obtained continuously and simultaneously, and all of them can be used for quantitative analysis; the underlying theory and experimental technique are discussed. Based on the additivity of absorbance (A = A1 + A2 + A3 + A4 + A5), the sensitivity of FIA is enhanced, and its precision and linearity are also good, raising the efficiency of AAS. This simple method has been applied to determining Al in sediments, and the results are satisfactory.

  2. Photobiomodulation in the Prevention of Tooth Sensitivity Caused by In-Office Dental Bleaching. A Randomized Placebo Preliminary Study.

    PubMed

    Calheiros, Andrea Paiva Corsetti; Moreira, Maria Stella; Gonçalves, Flávia; Aranha, Ana Cecília Correa; Cunha, Sandra Ribeiro; Steiner-Oliveira, Carolina; Eduardo, Carlos de Paula; Ramalho, Karen Müller

    2017-08-01

    The aim was to analyze the effect of photobiomodulation in the prevention of tooth sensitivity after in-office dental bleaching. Tooth sensitivity is a common clinical consequence of dental bleaching, and therapies for its prevention have been investigated in the literature. This study was designed as a randomized, blinded, placebo-controlled clinical trial. Fifty patients were selected and randomly divided into five groups (n = 10 per group): (1) control, (2) placebo, (3) laser before bleaching, (4) laser after bleaching, and (5) laser before and after bleaching. Irradiation was performed perpendicularly, in contact, on each tooth for 10 s per point at two points. The first point was positioned in the middle of the tooth crown and the second in the periapical region. Photobiomodulation was applied using the following parameters: 780 nm, 40 mW, 10 J/cm2, 0.4 J per point. Pain was analyzed before, immediately after, and over the seven subsequent days after bleaching. Patients were instructed to report pain using the scale: 0 = no tooth sensitivity, 1 = gentle sensitivity, 2 = moderate sensitivity, 3 = severe sensitivity. There were no statistical differences between groups at any time (p > 0.05). Further studies, with other parameters and different methods of tooth sensitivity analysis, should be performed to complement these results. Within the limitations of the present study, the photobiomodulation parameters tested were not effective in preventing tooth sensitivity after in-office bleaching.

  3. Sensitivity analysis of a sediment dynamics model applied in a Mediterranean river basin: global change and management implications.

    PubMed

    Sánchez-Canales, M; López-Benito, A; Acuña, V; Ziv, G; Hamel, P; Chaplin-Kramer, R; Elorza, F J

    2015-01-01

    Climate change and land-use change are major factors influencing sediment dynamics. Models can be used to better understand sediment production and retention by the landscape, although their interpretation is limited by large uncertainties, including model parameter uncertainties. The uncertainties related to parameter selection may be significant and need to be quantified to improve model interpretation for watershed management. In this study, we performed a sensitivity analysis of the InVEST (Integrated Valuation of Environmental Services and Tradeoffs) sediment retention model in order to determine which model parameters had the greatest influence on model outputs, and therefore require special attention during calibration. The estimation of the sediment loads in this model is based on the Universal Soil Loss Equation (USLE). The sensitivity analysis was performed in the Llobregat basin (NE Iberian Peninsula) for exported and retained sediment, which support two different ecosystem service benefits (avoided reservoir sedimentation and improved water quality). Our analysis identified the model parameters related to the natural environment as the most influential for sediment export and retention. Accordingly, small changes in variables such as the magnitude and frequency of extreme rainfall events could cause major changes in sediment dynamics, demonstrating the sensitivity of these dynamics to climate change in Mediterranean basins. Parameters directly related to human activities and decisions (such as the cover management factor, C) were also influential, especially for exported sediment. The importance of these human-related parameters in the sediment export process suggests that mitigation measures have the potential to at least partially ameliorate climate-change-driven changes in sediment export. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Sensitivity analysis of Monju using ERANOS with JENDL-4.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamagno, P.; Van Rooijen, W. F. G.; Takeda, T.

    2012-07-01

    This paper deals with sensitivity analysis using JENDL-4.0 nuclear data applied to the Monju reactor. In 2010 the Japan Atomic Energy Agency (JAEA) released a new set of nuclear data: JENDL-4.0. This new evaluation is expected to contain improved data on actinides and covariance matrices. Covariance matrices are a key point in the quantification of uncertainties due to basic nuclear data. For sensitivity analysis, the well-established ERANOS [1] code was chosen because of its integrated modules that allow users to perform a sensitivity analysis of complex reactor geometries. A JENDL-4.0 cross-section library is not available for ERANOS. Therefore a cross-section library had to be made from the original nuclear data set, available as ENDF formatted files. This is achieved by using the following codes: NJOY, CALENDF, MERGE and GECCO, in order to create a library for the ECCO cell code (part of ERANOS). In order to verify the accuracy of the new ECCO library, two benchmark experiments have been analyzed: the MZA and MZB cores of the MOZART program measured at the ZEBRA facility in the UK. These were chosen due to their similarity to the Monju core. Using the JENDL-4.0 ECCO library we have analyzed the criticality of Monju during the restart in 2010. We have obtained good agreement with the measured criticality. Perturbation calculations have been performed between JENDL-3.3 and JENDL-4.0 based models. The isotopes {sup 239}Pu, {sup 238}U, {sup 241}Am and {sup 241}Pu account for a major part of the observed differences. (authors)

  5. Skeletal mechanism generation for surrogate fuels using directed relation graph with error propagation and sensitivity analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niemeyer, Kyle E.; Sung, Chih-Jen; Raju, Mandhapati P.

    2010-09-15

    A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with examples for three hydrocarbon components, n-heptane, iso-octane, and n-decane, relevant to surrogate fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species prior to the sensitivity analysis, which then removes further unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal. Skeletal mechanisms for n-heptane and iso-octane generated using the DRGEP, DRGASA, and DRGEPSA methods are presented and compared to illustrate the improvement of DRGEPSA. From a detailed reaction mechanism for n-alkanes covering n-octane to n-hexadecane with 2115 species and 8157 reactions, two skeletal mechanisms for n-decane generated using DRGEPSA, one covering a comprehensive range of temperature, pressure, and equivalence ratio conditions for autoignition and the other limited to high temperatures, are presented and validated. The comprehensive skeletal mechanism consists of 202 species and 846 reactions and the high-temperature skeletal mechanism consists of 51 species and 256 reactions. Both mechanisms are further demonstrated to reproduce well the results of the detailed mechanism in perfectly-stirred reactor and laminar flame simulations over a wide range of conditions. The comprehensive and high-temperature n-decane skeletal mechanisms are included as supplementary material with this article. (author)
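
    The graph step of DRGEP can be sketched compactly. Below is a hedged illustration, not the authors' code: the overall importance of a species with respect to a target is taken as the maximum over graph paths of the product of direct interaction coefficients along the path, computed here with a max-product variant of Dijkstra's algorithm on an invented toy graph.

```python
# Hedged sketch of the DRGEP graph step on a toy reaction network.
import heapq

def drgep_importance(direct, target):
    """direct[a][b] = direct interaction coefficient r_ab in [0, 1].
    Returns R_target(b) via a max-product Dijkstra-style search."""
    R = {target: 1.0}
    heap = [(-1.0, target)]
    while heap:
        neg_r, a = heapq.heappop(heap)
        if -neg_r < R.get(a, 0.0):
            continue                       # stale heap entry
        for b, r_ab in direct.get(a, {}).items():
            cand = -neg_r * r_ab           # error propagates multiplicatively
            if cand > R.get(b, 0.0):
                R[b] = cand
                heapq.heappush(heap, (-cand, b))
    return R

# invented coefficients, purely for illustration
toy = {"fuel": {"O2": 0.9, "radical": 0.6},
       "radical": {"minor": 0.05, "O2": 0.3},
       "O2": {"minor": 0.1}}
R = drgep_importance(toy, "fuel")
keep = {s for s, r in R.items() if r >= 0.1}   # prune below the error limit
print(R, keep)
```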

  6. A sensitive and innovative detection method for rapid C-reactive proteins analysis based on a micro-fluxgate sensor system

    PubMed Central

    Yang, Zhen; Zhi, Shaotao; Feng, Zhu; Lei, Chong; Zhou, Yong

    2018-01-01

    A sensitive and innovative assay system based on a micro-MEMS-fluxgate sensor and immunomagnetic bead labels was developed for the rapid analysis of C-reactive protein (CRP). The fluxgate sensor presented in this study was fabricated through standard micro-electro-mechanical system technology. A multi-loop magnetic core made of Fe-based amorphous ribbon was employed as the sensing element, and 3-D solenoid copper coils were used to control the sensing core. Antibody-conjugated immunomagnetic microbeads were strategically utilized as signal tags to label the CRP via the specific conjugation of CRP to polyclonal CRP antibodies. Separate Au film substrates were applied as immunoplatforms to immobilize CRP-bead labels through classical sandwich assays. Detection and quantification of the CRP at different concentrations were implemented by detecting the stray field of CRP-labeled magnetic beads using the newly developed micro-fluxgate sensor. The resulting system exhibited the required sensitivity, stability, reproducibility, and selectivity. A detection limit as low as 0.002 μg/mL CRP with a linearity range from 0.002 μg/mL to 10 μg/mL was achieved, suggesting that the proposed biosystem possesses high sensitivity. In addition to the extremely low detection limit, the proposed method can be easily manipulated and possesses a quick response time. The response time of our sensor was less than 5 s, and the entire detection period for CRP analysis can be completed in less than 30 min using the current method. Given the detection performance and other advantages such as miniaturization, excellent stability and specificity, the proposed biosensor can be considered as a potential candidate for the rapid analysis of CRP, especially for point-of-care platforms. PMID:29601593

  7. Sewer deterioration modeling with condition data lacking historical records.

    PubMed

    Egger, C; Scheidegger, A; Reichert, P; Maurer, M

    2013-11-01

    Accurate predictions of future conditions of sewer systems are needed for efficient rehabilitation planning. For this purpose, a range of sewer deterioration models has been proposed which can be improved by calibration with observed sewer condition data. However, if datasets lack historical records, calibration requires a combination of deterioration and sewer rehabilitation models, as the current state of the sewer network reflects the combined effect of both processes. Otherwise, physical sewer lifespans are overestimated as pipes in poor condition that were rehabilitated are no longer represented in the dataset. We therefore propose the combination of a sewer deterioration model with a simple rehabilitation model which can be calibrated with datasets lacking historical information. We use Bayesian inference for parameter estimation due to the limited information content of the data and limited identifiability of the model parameters. A sensitivity analysis gives an insight into the model's robustness against the uncertainty of the prior. The analysis reveals that the model results are principally sensitive to the means of the priors of specific model parameters, which should therefore be elicited with care. The importance sampling technique applied for the sensitivity analysis permitted efficient implementation for regional sensitivity analysis with reasonable computational outlay. Application of the combined model with both simulated and real data shows that it effectively compensates for the bias induced by a lack of historical data. Thus, the novel approach makes it possible to calibrate sewer pipe deterioration models even when historical condition records are lacking. Since at least some prior knowledge of the model parameters is available, the strength of Bayesian inference is particularly evident in the case of small datasets. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Lack of manifestations of diazoxide/5-hydroxydecanoate-sensitive KATP channel in rat brain nonsynaptosomal mitochondria.

    PubMed

    Brustovetsky, Tatiana; Shalbuyeva, Natalia; Brustovetsky, Nickolay

    2005-10-01

    Pharmacological modulation of the mitochondrial ATP-sensitive K+ channel (mitoKATP) sensitive to diazoxide and 5-hydroxydecanoate (5-HD) represents an attractive strategy to protect cells against ischaemia/reperfusion- and stroke-related injury. To re-evaluate a functional role for the mitoKATP in brain, we used Percoll-gradient-purified brain nonsynaptosomal mitochondria in a light absorbance assay, in radioisotope measurements of matrix volume, and in measurements of respiration, membrane potential (DeltaPsi) and depolarization-induced K+ efflux. The changes in mitochondrial morphology were evaluated by transmission electron microscopy (TEM). Polyclonal antibodies raised against certain fragments of known sulphonylurea receptor subunits, SUR1 and SUR2, and against different epitopes of K+ inward rectifier subunits Kir 6.1 and Kir 6.2 of the ATP-sensitive K+ channel of the plasma membrane (cellKATP), were employed to detect similar subunits in brain mitochondria. A variety of plausible blockers (ATP, 5-hydroxydecanoate, glibenclamide, tetraphenylphosphonium cation) and openers (diazoxide, pinacidil, cromakalim, minoxidil, testosterone) of the putative mitoKATP were applied to show the role of the channel in regulating matrix volume, respiration, and DeltaPsi and K+ fluxes across the inner mitochondrial membrane. None of the pharmacological agents applied to brain mitochondria in the various assays pinpointed processes that could be unequivocally associated with mitoKATP activity. In addition, immunoblotting analysis did not provide explicit evidence for the presence of the mitoKATP, similar to the cellKATP, in brain mitochondria. On the other hand, the depolarization-evoked release of K+ suppressed by ATP could be re-activated by carboxyatractyloside, an inhibitor of the adenine nucleotide translocase (ANT). Moreover, bongkrekic acid, another inhibitor of the ANT, inhibited K+ efflux similarly to ATP. These observations implicate the ANT in ATP-sensitive K+ transport in brain mitochondria.

  9. A sensitivity analysis for a thermomechanical model of the Antarctic ice sheet and ice shelves

    NASA Astrophysics Data System (ADS)

    Baratelli, F.; Castellani, G.; Vassena, C.; Giudici, M.

    2012-04-01

    The outcomes of an ice sheet model depend on a number of parameters and physical quantities which are often estimated with large uncertainty, because of a lack of sufficient experimental measurements in such remote environments. Therefore, efforts to improve the accuracy of the predictions of ice sheet models by including more physical processes and interactions with atmosphere, hydrosphere and lithosphere can be undermined by the inaccuracy of the fundamental input data. A sensitivity analysis can help to understand which input data most affect the different predictions of the model. In this context, a finite difference thermomechanical ice sheet model based on the Shallow-Ice Approximation (SIA) and on the Shallow-Shelf Approximation (SSA) has been developed and applied to the simulation of the evolution of the Antarctic ice sheet and ice shelves over the last 200 000 years. The sensitivity analysis of the model outcomes (e.g., the volume of the ice sheet and of the ice shelves, the basal melt rate of the ice sheet, the mean velocity of the Ross and Ronne-Filchner ice shelves, the wet area at the base of the ice sheet) with respect to the model parameters (e.g., the basal sliding coefficient, the geothermal heat flux, the present-day surface accumulation and temperature, the mean ice shelf viscosity, the melt rate at the base of the ice shelves) has been performed by computing three synthetic numerical indices: two local sensitivity indices and a global sensitivity index. Local sensitivity indices imply a linearization of the model and neglect both non-linear and joint effects of the parameters. The global variance-based sensitivity index, instead, takes into account the complete variability of the input parameters but is usually computed with a Monte Carlo approach, which is computationally very demanding for non-linear complex models. Therefore, the global sensitivity index has been computed using an expansion of the model outputs in a neighborhood of the reference parameter values with a second-order approximation. The comparison of the three sensitivity indices proved that the approximation of the non-linear model with a second-order expansion is sufficient to show some differences between the local and the global indices. As a general result, the sensitivity analysis showed that most of the model outcomes are mainly sensitive to the present-day surface temperature and accumulation, which, in principle, can be measured more easily (e.g., with remote sensing techniques) than the other input parameters considered. On the other hand, the parameters to which the model is least sensitive are the basal sliding coefficient and the mean ice shelf viscosity.
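
    A local sensitivity index of the kind compared in this record can be written in a few lines. The sketch below uses central finite differences on a stand-in scalar model, with the normalized index S_i = (p_i/Y) dY/dp_i; the function and parameter values are placeholders, not the SIA/SSA model.

```python
# Hedged sketch: normalized local sensitivity indices by central finite
# differences. The "model" is an invented stand-in, not the ice sheet code.
import numpy as np

def model(p):
    slide, geo_flux, acc = p             # illustrative parameters
    return acc**1.2 / (0.5 + slide) + 0.1 * geo_flux

p0 = np.array([1.0, 0.05, 0.3])          # reference parameter values
Y0 = model(p0)
S = np.empty_like(p0)
for i in range(len(p0)):
    dp = 0.01 * p0[i]                    # 1 % perturbation
    up, dn = p0.copy(), p0.copy()
    up[i] += dp
    dn[i] -= dp
    S[i] = p0[i] * (model(up) - model(dn)) / (2 * dp * Y0)

for name, s in zip(["sliding", "geothermal flux", "accumulation"], S):
    print(f"{name}: S = {s:+.3f}")
```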

  10. Assessment of cardiac fibrosis: a morphometric method comparison for collagen quantification.

    PubMed

    Schipke, Julia; Brandenberger, Christina; Rajces, Alexandra; Manninger, Martin; Alogna, Alessio; Post, Heiner; Mühlfeld, Christian

    2017-04-01

    Fibrotic remodeling of the heart is a frequent condition linked to various diseases and cardiac dysfunction. Collagen quantification is an important objective in cardiac fibrosis research; however, a variety of different histological methods are currently used that may differ in accuracy. Here, frequently applied collagen quantification techniques were compared. A porcine model of early stage heart failure with preserved ejection fraction was used as an example. Semiautomated threshold analyses were imprecise, mainly due to inclusion of noncollagen structures or failure to detect certain collagen deposits. In contrast, collagen assessment by automated image analysis and light microscopy (LM)-stereology was more sensitive. Depending on the quantification method, the amount of estimated collagen varied and influenced intergroup comparisons. PicroSirius Red, Masson's trichrome, and Azan staining protocols yielded similar results, whereas the measured collagen area increased with increasing section thickness. Whereas none of the LM-based methods showed significant differences between the groups, electron microscopy (EM)-stereology revealed a significant collagen increase between cardiomyocytes in the experimental group, but not at other localizations. In conclusion, in contrast to the staining protocol, the section thickness and the quantification method being used directly influence the estimated collagen content and thus, possibly, intergroup comparisons. EM in combination with stereology is a precise and sensitive method for collagen quantification if certain prerequisites are considered. For subtle fibrotic alterations, consideration of collagen localization may be necessary. Among LM methods, LM-stereology and automated image analysis are appropriate to quantify fibrotic changes, the latter depending on careful control of the algorithm and comparable section staining. NEW & NOTEWORTHY Direct comparison of frequently applied histological fibrosis assessment techniques revealed a distinct dependence of the measured collagen content on the quantification method used and on section thickness. Besides electron microscopy-stereology, which was precise and sensitive, light microscopy-stereology and automated image analysis proved to be appropriate for collagen quantification. Moreover, consideration of collagen localization might be important in revealing minor fibrotic changes. Copyright © 2017 the American Physiological Society.

  11. Probabilistic risk assessment for a loss of coolant accident in McMaster Nuclear Reactor and application of reliability physics model for modeling human reliability

    NASA Astrophysics Data System (ADS)

    Ha, Taesung

    A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed, including event sequence modeling, system modeling, and quantification. To support the quantification of the accident sequences identified, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors is significantly different from that in power reactors, a time-oriented HRA model (reliability physics model) was applied for the human error probability (HEP) estimation of the core relocation. This model is based on two competing random variables: phenomenological time and performance time. The response surface and direct Monte Carlo simulation with Latin Hypercube sampling were applied for estimating the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The human error probability (HEP) for the core relocation was estimated from these two competing quantities: phenomenological time and operators' performance time. The sensitivity of each probability distribution in human reliability estimation was investigated. In order to quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected due to its capability of incorporating uncertainties in the model itself and in the parameters of that model. The HEP from the current time-oriented model was compared with that from the ASEP approach. Both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability, and its potential usefulness for quantifying model uncertainty through sensitivity analysis in the PRA model.
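
    The reliability-physics idea of two competing random variables reduces to a short Monte Carlo estimate: the HEP is the probability that the performance time exceeds the phenomenological time. The sketch below is illustrative only; the Weibull and lognormal choices and all parameters are invented placeholders, not the distributions fitted in the study.

```python
# Hedged sketch of the reliability-physics HEP estimate:
# HEP = P(performance time > phenomenological time).
# Distributions and parameters are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
pheno = rng.weibull(2.0, n) * 45.0                            # time available (min)
perform = rng.lognormal(mean=np.log(20), sigma=0.5, size=n)   # time needed (min)

hep = np.mean(perform > pheno)
print(f"HEP ~= {hep:.3f}")
```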

  12. How Sensitive Is the Elasticity of Hydroxyapatite-Nanoparticle-Reinforced Chitosan Composite to Changes in Particle Concentration and Crystallization Temperature?

    PubMed

    Wang, Kean; Liao, Kin; Goh, Kheng Lim

    2015-10-10

    Hydroxyapatite (HA) nanoparticle-reinforced chitosan composites are biocompatible and biodegradable structural materials that are used as biomaterials in tissue engineering. However, in order for these materials to function effectively as intended, e.g., to provide adequate structural support for repairing damaged tissues, it is necessary to analyse and optimise the material processing parameters that affect the relevant mechanical properties. Here we are concerned with the strength, stiffness and toughness of wet-spun HA-reinforced chitosan fibres. Unlike previous studies, which have addressed each of these parameters as singly applied treatments, we have carried out an experiment designed using a two-factor analysis of variance to study the main effects of two key material processing parameters, namely HA concentration and crystallization temperature, and their interactions on the respective mechanical properties of the composite fibres. The analysis reveals that significant interaction occurs between the crystallization temperature and HA concentration. Starting at a low HA concentration level, the magnitude of the respective mechanical properties decreases significantly with increasing HA concentration until a critical HA concentration is reached, at around 0.20-0.30 (HA mass fraction), beyond which the magnitude of the mechanical properties increases significantly with HA concentration. The sensitivity of the mechanical properties to crystallization temperature is masked by the interaction between the two parameters; further analysis reveals that the dependence on crystallization temperature is significant for at least some levels of HA concentration. The magnitude of the mechanical properties of the chitosan composite fibre corresponding to 40 °C is higher than that at 100 °C at low HA concentration; the reverse applies at high HA concentration. In conclusion, the elasticity of the HA nanoparticle-reinforced chitosan composite fibre is sensitive to HA concentration and crystallization temperature, and there exists a critical concentration level at which the magnitude of the mechanical property is a minimum.
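
    The two-factor design with interaction maps directly onto a standard two-way ANOVA. The following hedged sketch uses statsmodels on simulated responses (with a built-in temperature-by-concentration reversal echoing the abstract); the data are not the measured fibre properties.

```python
# Hedged sketch: two-way ANOVA with interaction between HA concentration
# and crystallization temperature. Response values are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(5)
rows = []
for ha in [0.1, 0.3, 0.5]:                   # HA mass fraction levels
    for temp in [40, 100]:                   # crystallization temp (deg C)
        for _ in range(5):                   # replicates per cell
            # built-in interaction: the temperature effect reverses with HA
            y = 2.0 + 1.5*abs(ha - 0.25) + 0.004*(temp - 70)*(ha - 0.25)
            rows.append({"ha": ha, "temp": temp,
                         "modulus": y + rng.normal(0, 0.05)})
df = pd.DataFrame(rows)

fit = smf.ols("modulus ~ C(ha) * C(temp)", data=df).fit()
print(anova_lm(fit, typ=2))                  # main effects + interaction
```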

  13. Cost-Risk Trade-off of Solar Radiation Management and Mitigation under Probabilistic Information on Climate Sensitivity

    NASA Astrophysics Data System (ADS)

    Khabbazan, Mohammad Mohammadi; Roshan, Elnaz; Held, Hermann

    2017-04-01

    In principle, solar radiation management (SRM) offers an option to ameliorate anthropogenic temperature rise. However, we cannot expect it to simultaneously compensate for anthropogenic changes in further climate variables in a perfect manner. Here, we ask to what extent a proponent of the 2°C temperature target would apply SRM in conjunction with mitigation in view of global or regional disparities in precipitation changes. We apply cost-risk analysis (CRA), a decision-analytic framework that trades off the expected welfare loss from climate policy costs against the climate risks from transgressing a climate target. In both global-scale and 'Giorgi'-regional-scale analyses, we evaluate the optimal mixture of SRM and mitigation under probabilistic information about climate sensitivity. To do so, we generalize CRA to include not only temperature risk, but also globally aggregated and regionally disaggregated precipitation risks. Social welfare is maximized for the following three valuation scenarios: temperature risk only, precipitation risk only, and both risks equally weighted. For now, the Giorgi regions are weighted equally. We find that for regionally differentiated precipitation targets, the usage of SRM will be comparatively more restricted. In the course of time, a cooling of up to 1.3°C can be attributed to SRM for the latter scenario and for a median climate sensitivity of 3°C (for a global target only, this number reduces by 0.5°C). Our results indicate that although SRM would almost completely substitute for mitigation in the globally aggregated analysis, it only saves 70% to 75% of the welfare loss compared to a purely mitigation-based analysis (from economic costs and climate risks, approximately 4% in terms of BGE) when considering regional precipitation risks in the precipitation-risk-only and both-risks scenarios. It remains to be shown how the inclusion of further risks or different regional weights would change that picture.

  14. Sensitivity analysis of ecosystem service valuation in a Mediterranean watershed.

    PubMed

    Sánchez-Canales, María; López Benito, Alfredo; Passuello, Ana; Terrado, Marta; Ziv, Guy; Acuña, Vicenç; Schuhmacher, Marta; Elorza, F Javier

    2012-12-01

    The services of natural ecosystems are clearly very important to our societies. In recent years, efforts to conserve and value ecosystem services have been promoted. By way of illustration, the Natural Capital Project integrates ecosystem services into everyday decision making around the world. This project has developed InVEST (a system for Integrated Valuation of Ecosystem Services and Tradeoffs). The InVEST model is a spatially integrated modelling tool that allows us to predict changes in ecosystem services, biodiversity conservation and commodity production levels. Here, the InVEST model is applied to a stakeholder-defined scenario of land-use/land-cover change in a Mediterranean region basin (the Llobregat basin, Catalonia, Spain). Of all InVEST modules and sub-modules, only the behaviour of the water provisioning one is investigated in this article. The main novel aspect of this work is the sensitivity analysis (SA) carried out on the InVEST model in order to determine the variability of the model response when the values of three of its main coefficients change: Z (seasonal precipitation distribution), prec (annual precipitation) and eto (annual evapotranspiration). The SA technique used here is a One-At-a-Time (OAT) screening method known as the Morris method, applied over each of the one hundred and fifty-four sub-watersheds into which the Llobregat River basin is divided. As a result, this method provides three sensitivity indices for each of the sub-watersheds under consideration, which are mapped to study how they are spatially distributed. From their analysis, the study shows that, in the case under consideration and within the limits considered for each factor, the effect of the Z coefficient on the model response is negligible, while the other two need to be accurately determined in order to obtain precise output variables. The results of this study will be applicable to the other watersheds assessed in the Consolider Scarce Project. Copyright © 2012 Elsevier B.V. All rights reserved.
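
    A Morris screening of the three coefficients named above can be sketched with the SALib library; the water-yield function and parameter bounds below are hypothetical stand-ins for the InVEST water provisioning sub-module.

    import numpy as np
    from SALib.sample.morris import sample as morris_sample
    from SALib.analyze import morris

    problem = {
        "num_vars": 3,
        "names": ["Z", "prec", "eto"],   # coefficients screened in the study
        "bounds": [[1.0, 10.0], [400.0, 900.0], [500.0, 1100.0]],  # assumed
    }

    def water_yield(x):
        # Hypothetical surrogate for one sub-watershed's water provisioning.
        Z, prec, eto = x
        return prec - eto * prec / (prec + 100.0 * Z)

    X = morris_sample(problem, N=100, num_levels=4)
    Y = np.apply_along_axis(water_yield, 1, X)
    Si = morris.analyze(problem, X, Y, num_levels=4)
    for name, mu_star, sigma in zip(problem["names"], Si["mu_star"], Si["sigma"]):
        print(f"{name}: mu* = {mu_star:.3f}, sigma = {sigma:.3f}")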

  15. Process-based modelling of NH3 exchange with grazed grasslands

    NASA Astrophysics Data System (ADS)

    Móring, Andrea; Vieno, Massimo; Doherty, Ruth M.; Milford, Celia; Nemitz, Eiko; Twigg, Marsailidh M.; Horváth, László; Sutton, Mark A.

    2017-09-01

    In this study the GAG model, a process-based ammonia (NH3) emission model for urine patches, was extended and applied at the field scale. The new model (GAG_field) was tested over two modelling periods, for which micrometeorological NH3 flux data were available. Acknowledging uncertainties in the measurements, the model was able to simulate the main features of the observed fluxes. The temporal evolution of the simulated NH3 exchange flux was found to be dominated by NH3 emission from the urine patches, offset by simultaneous NH3 deposition to areas of the field not affected by urine. The simulations show how NH3 fluxes over a grazed field in a given day can be affected by urine patches deposited several days earlier, linked to the interaction of volatilization processes with soil pH dynamics. Sensitivity analysis showed that GAG_field was more sensitive to soil buffering capacity (β), field capacity (θfc) and permanent wilting point (θpwp) than the patch-scale model. The reason for these different sensitivities is twofold. First, the difference originates from the different scales. Second, the difference can be explained by the different initial soil pH and physical properties, which determine the maximum volume of urine that can be stored in the NH3 source layer. It was found that in the case of urine patches with a higher initial soil pH and higher initial soil water content, the sensitivity of NH3 exchange to β was stronger. Also, in the case of a higher initial soil water content, NH3 exchange was more sensitive to changes in θfc and θpwp. The sensitivity analysis showed that the nitrogen content of urine (cN) is associated with high uncertainty in the simulated fluxes. However, model experiments based on cN values randomized from an estimated statistical distribution indicated that this uncertainty is considerably smaller in practice. Finally, GAG_field was tested with a constant soil pH of 7.5. The variation of NH3 fluxes simulated in this way showed good agreement with those from the simulations with the original approach, accounting for a dynamically changing soil pH. These results suggest a way for model simplification when GAG_field is applied later at the regional scale.

  16. A new open tubular capillary microextraction and sweeping for the analysis of super low concentration of hydrophobic compounds.

    PubMed

    Xia, Zhining; Gan, Tingting; Chen, Hua; Lv, Rui; Wei, Weili; Yang, Fengqing

    2010-10-01

    A sample pre-concentration method based on the in-line coupling of in-tube solid-phase microextraction and electrophoretic sweeping was developed for the analysis of hydrophobic compounds. The sample pre-concentration and electrophoretic separation processes were simply and sequentially carried out with a (35%-phenyl)-methylpolysiloxane-coated capillary. The developed method was validated and applied to enrich and separate several pharmaceuticals, including loratadine, indomethacin, ibuprofen and doxazosin. Several parameters of the microextraction were investigated, such as temperature, pH and eluent, and the influence of microemulsion concentration on separation and microextraction efficiency was also studied. Central composite design was applied to optimize the sampling flow rate and sampling time, which interact with each other in a complex way. The precision, sensitivity and recovery of the method were investigated. Under the optimal conditions, the maximum enrichment factors for loratadine, indomethacin, ibuprofen and doxazosin in aqueous solutions are 1355, 571, 523 and 318, respectively. In addition, the developed method was applied to determine loratadine in a rabbit blood sample.

  17. A Two-Dimensional Variational Analysis Method for NSCAT Ambiguity Removal: Methodology, Sensitivity, and Tuning

    NASA Technical Reports Server (NTRS)

    Hoffman, R. N.; Leidner, S. M.; Henderson, J. M.; Atlas, R.; Ardizzone, J. V.; Bloom, S. C.; Atlas, Robert (Technical Monitor)

    2001-01-01

    In this study, we apply a two-dimensional variational analysis method (2d-VAR) to select a wind solution from NASA Scatterometer (NSCAT) ambiguous winds. 2d-VAR determines a "best" gridded surface wind analysis by minimizing a cost function. The cost function measures the misfit to the observations, the background, and the filtering and dynamical constraints. The ambiguity closest in direction to the minimizing analysis is selected. The 2d-VAR method, its sensitivity, and its numerical behavior are described. 2d-VAR is compared to statistical interpolation (OI) by examining the response of both systems to a single ship observation and to a swath of unique scatterometer winds. 2d-VAR is used with both NSCAT ambiguities and NSCAT backscatter values. Results are roughly comparable. When the background field is poor, 2d-VAR ambiguity removal often selects low-probability ambiguities. To avoid this behavior, an initial 2d-VAR analysis, using only the two most likely ambiguities, provides the first guess for an analysis using all the ambiguities or the backscatter data. 2d-VAR and median-filter selected ambiguities usually agree. Both methods require horizontal consistency, so disagreements occur in clumps or as linear features. In these cases, 2d-VAR ambiguities are often more meteorologically reasonable and more consistent with satellite imagery.
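
    The core of such a variational method is the minimization of a cost function that penalizes departure from a background field and from the observations. The sketch below shows the idea on a toy grid with a single "ship" observation; a real 2d-VAR adds the filtering and dynamical constraint terms, omitted here, that spread the increment spatially.

    import numpy as np
    from scipy.optimize import minimize

    nx, ny = 10, 10
    x_b = np.zeros(nx * ny)            # background wind component on the grid
    obs_idx, obs_val = 55, 3.0         # one observation at grid point 55
    sigma_b2, sigma_o2 = 1.0, 0.25     # background/observation error variances

    def cost(x):
        j_b = np.sum((x - x_b) ** 2) / sigma_b2       # misfit to background
        j_o = (x[obs_idx] - obs_val) ** 2 / sigma_o2  # misfit to observation
        return j_b + j_o

    res = minimize(cost, x_b, method="L-BFGS-B")
    analysis = res.x.reshape(ny, nx)
    print(f"Analysis at the observation point: {analysis.flat[obs_idx]:.2f}")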

  18. Application of fractal and grey level co-occurrence matrix analysis in evaluation of brain corpus callosum and cingulum architecture.

    PubMed

    Pantic, Igor; Dacic, Sanja; Brkic, Predrag; Lavrnja, Irena; Pantic, Senka; Jovanovic, Tomislav; Pekovic, Sanja

    2014-10-01

    The aim of this study was to assess the discriminatory value of fractal and grey level co-occurrence matrix (GLCM) analysis methods in standard microscopy analysis of two histologically similar brain white mass regions that have different nerve fiber orientation. A total of 160 digital micrographs of thionine-stained rat brain white mass were acquired using a Pro-MicroScan DEM-200 instrument. Eighty micrographs from the anterior corpus callosum and eighty from the anterior cingulum areas of the brain were analyzed. The micrographs were evaluated using the National Institutes of Health ImageJ software and its plugins. For each micrograph, seven parameters were calculated: angular second moment, inverse difference moment, GLCM contrast, GLCM correlation, GLCM variance, fractal dimension, and lacunarity. Using receiver operating characteristic (ROC) analysis, the highest discriminatory value was determined for the inverse difference moment (IDM): the area under the ROC curve was 0.925, and for the criterion IDM ≤ 0.610 the sensitivity and specificity were 82.5% and 87.5%, respectively. Most of the other parameters also showed good sensitivity and specificity. The results indicate that GLCM and fractal analysis methods, when applied together in brain histology analysis, are highly capable of discriminating white mass structures that have different axonal orientation.
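
    The GLCM parameters named above can be computed with scikit-image, as sketched below on a random placeholder image; note that scikit-image calls the inverse difference moment "homogeneity", and that versions before 0.19 spell the functions "greycomatrix"/"greycoprops".

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    rng = np.random.default_rng(1)
    img = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)  # placeholder

    # One-pixel offset at 0 degrees, symmetric and normalized.
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)

    asm = graycoprops(glcm, "ASM")[0, 0]          # angular second moment
    idm = graycoprops(glcm, "homogeneity")[0, 0]  # inverse difference moment
    contrast = graycoprops(glcm, "contrast")[0, 0]
    correlation = graycoprops(glcm, "correlation")[0, 0]
    print(f"ASM={asm:.4f}, IDM={idm:.4f}, "
          f"contrast={contrast:.1f}, correlation={correlation:.3f}")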

  19. Electric-field and strain-tunable electronic properties of MoS2/h-BN/graphene vertical heterostructures.

    PubMed

    Zan, Wenyan; Geng, Wei; Liu, Huanxiang; Yao, Xiaojun

    2016-01-28

    Vertical heterostructures of MoS2/h-BN/graphene have been successfully fabricated in recent experiments. Using first-principles analysis, we show that the structural and electronic properties of such vertical heterostructures are sensitive to applied vertical electric fields and strain. The applied electric field not only enhances the interlayer coupling but also linearly controls the charge transfer between graphene and MoS2 layers, leading to a tunable doping in graphene and a controllable Schottky barrier height. Applied biaxial strain could weaken the interlayer coupling and result in a slight shift of graphene's Dirac point with respect to the Fermi level. It is of practical importance that the tunability of the electronic properties by strain and electric fields is immune to the presence of sulfur vacancies, the most common defect in MoS2.

  20. Evaluation of fire weather forecasts using PM2.5 sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Balachandran, Sivaraman; Baumann, Karsten; Pachon, Jorge E.; Mulholland, James A.; Russell, Armistead G.

    2017-01-01

    Fire weather forecasts are used by land and wildlife managers to determine when meteorological and fuel conditions are suitable to conduct prescribed burning. In this work, we investigate the sensitivity of ambient PM2.5 to various fire and meteorological variables in a spatial setting that is typical for the southeastern US, where prescribed fires are the single largest source of fine particulate matter. We use the method of principal components regression to estimate the sensitivity of PM2.5, measured at a monitoring site in Jacksonville, NC (JVL), to fire data and observed and forecast meteorological variables. Fire data were gathered from prescribed fire activity used for ecological management at Marine Corps Base Camp Lejeune, extending 10-50 km south from the PM2.5 monitor. Principal components analysis (PCA) was run on 10 data sets that included acres of prescribed burning activity (PB) along with meteorological forecast data alone or in combination with observations. For each data set, observed PM2.5 (unitless) was regressed against PCA scores from the first seven principal components (explaining at least 80% of total variance). PM2.5 showed significant sensitivity to PB: 3.6 ± 2.2 μg m-3 per 1000 acres burned at the investigated distance scale of ∼10-50 km. Applying this sensitivity to the available activity data revealed a prescribed burning source contribution to measured PM2.5 of up to 25% on a given day. PM2.5 showed a positive sensitivity to relative humidity and temperature, and was also sensitive to wind direction, indicating the capture of more regional aerosol processing and transport effects. As expected, PM2.5 had a negative sensitivity to dispersive variables, but only showed a statistically significant negative sensitivity to ventilation rate, highlighting the importance of this parameter to fire managers. A positive sensitivity to forecast precipitation was found, consistent with the practice of conducting prescribed burning on days when rain can naturally extinguish fires. Perhaps most importantly for land managers, our analysis suggests that instead of relying on the forecasts from a day before, prescribed burning decisions should be based on the forecasts released the morning of the burn when possible, since these data were more stable and yielded more statistically robust results.
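
    Principal components regression as used above can be sketched in a few lines: standardize the predictors, project onto the leading components, regress the response on the component scores, and map the coefficients back to per-predictor sensitivities. All data below are synthetic placeholders.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(7)
    n, p = 200, 12                  # days x predictors (PB, RH, T, winds, ...)
    X = rng.normal(size=(n, p))
    y = 1.5 * X[:, 0] + X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=n)

    Xs = StandardScaler().fit_transform(X)
    pca = PCA(n_components=7)       # components explaining most variance
    scores = pca.fit_transform(Xs)
    reg = LinearRegression().fit(scores, y)

    # Map component coefficients back to per-predictor sensitivities.
    beta = pca.components_.T @ reg.coef_
    print("Sensitivity of PM2.5 to predictor 0:", round(beta[0], 3))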

  1. A meta-analysis of confocal laser endomicroscopy for the detection of neoplasia in patients with Barrett's esophagus.

    PubMed

    Xiong, Yi-Quan; Ma, Shu-Juan; Zhou, Jun-Hua; Zhong, Xue-Shan; Chen, Qing

    2016-06-01

    Barrett's esophagus (BE) is considered the most important risk factor for development of esophageal adenocarcinoma. Confocal laser endomicroscopy (CLE) is a recently developed technique used to diagnose neoplasia in BE. This meta-analysis was performed to assess the accuracy of CLE for diagnosis of neoplasia in BE. We searched EMBASE, PubMed, Cochrane Library, and Web of Science to identify relevant studies published in English up to June 27, 2015. The quality of included studies was assessed using QUADAS-2. Per-patient and per-lesion pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio with 95% confidence intervals (CIs) were calculated. In total, 14 studies were included in the final analysis, covering 789 patients with 4047 lesions. Seven studies were included in the per-patient analysis. Pooled sensitivity and specificity were 89% (95% CI: 0.82-0.94) and 83% (95% CI: 0.78-0.86), respectively. Ten studies were included in the per-lesion analysis. Compared with the per-patient analysis, the corresponding pooled sensitivity declined to 77% (95% CI: 0.73-0.81) and specificity increased to 89% (95% CI: 0.87-0.90). Subgroup analysis showed that probe-based CLE (pCLE) was superior to endoscope-based CLE (eCLE) in pooled specificity [91.4% (95% CI: 89.7-92.9) vs 86.1% (95% CI: 84.3-87.8)] and in the area under the summary ROC curve (0.885 vs 0.762). Confocal laser endomicroscopy is a valid method to accurately differentiate neoplasms from non-neoplasms in BE. It can be applied to BE surveillance and early diagnosis of esophageal adenocarcinoma. © 2015 Journal of Gastroenterology and Hepatology Foundation and John Wiley & Sons Australia, Ltd.
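
    The per-patient pooling step can be illustrated with a minimal inverse-variance calculation on the logit scale; the study counts below are invented, and full meta-analyses of diagnostic accuracy typically use bivariate random-effects models rather than this fixed-effect sketch.

    import numpy as np

    # Hypothetical (true positive, false negative) counts per study.
    tp = np.array([45, 30, 60, 25])
    fn = np.array([5, 6, 9, 2])

    sens = (tp + 0.5) / (tp + fn + 1.0)           # continuity-corrected
    logit = np.log(sens / (1.0 - sens))
    var = 1.0 / (tp + 0.5) + 1.0 / (fn + 0.5)     # var of logit(sensitivity)

    w = 1.0 / var
    pooled_logit = np.sum(w * logit) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    expit = lambda z: 1.0 / (1.0 + np.exp(-z))
    print(f"Pooled sensitivity = {expit(pooled_logit):.3f} "
          f"(95% CI {expit(pooled_logit - 1.96 * se):.3f}"
          f"-{expit(pooled_logit + 1.96 * se):.3f})")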

  2. A general method for handling missing binary outcome data in randomized controlled trials.

    PubMed

    Jackson, Dan; White, Ian R; Mason, Dan; Sutton, Stephen

    2014-12-01

    The analysis of randomized controlled trials with incomplete binary outcome data is challenging. We develop a general method for exploring the impact of missing data in such trials, with a focus on abstinence outcomes. We propose a sensitivity analysis where standard analyses, which could include 'missing = smoking' and 'last observation carried forward', are embedded in a wider class of models. We apply our general method to data from two smoking cessation trials, with a total of 489 and 1758 participants; the abstinence outcomes were obtained using telephone interviews. The estimated intervention effects from both trials depend on the sensitivity parameters used. The findings differ considerably in magnitude and statistical significance under quite extreme assumptions about the missing data, but are reasonably consistent under more moderate assumptions. A new method for undertaking sensitivity analyses when handling missing data in trials with binary outcomes allows a wide range of assumptions about the missing data to be assessed. In two smoking cessation trials the results were insensitive to all but extreme assumptions. © 2014 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.
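
    One common way to embed "missing = smoking" in a wider class of models, in the spirit of the method above, is an informative-missingness odds ratio (IMOR) linking the abstinence odds of missing participants to those of observed ones; sweeping the IMOR traces out the sensitivity analysis. The counts below are hypothetical, and the authors' exact parameterization may differ.

    import numpy as np

    def arm_rate(successes, failures, missing, imor):
        """Expected abstinence rate when the odds of abstinence among
        missing participants equal imor times the odds among observed."""
        p_obs = successes / (successes + failures)
        odds = imor * p_obs / (1.0 - p_obs)
        p_mis = odds / (1.0 + odds)
        n = successes + failures + missing
        return (successes + missing * p_mis) / n

    for imor in [0.0, 0.5, 1.0, 2.0]:   # imor=0 reproduces 'missing = smoking'
        rd = arm_rate(60, 120, 40, imor) - arm_rate(45, 135, 50, imor)
        print(f"IMOR = {imor:.1f}: risk difference = {rd:+.3f}")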

  3. The Relationship of Mean Platelet Volume/Platelet Distribution Width and Duodenal Ulcer Perforation.

    PubMed

    Fan, Zhe; Zhuang, Chengjun

    2017-03-01

    Duodenal ulcer perforation (DUP) is a severe acute abdominal disease. Mean platelet volume (MPV) and platelet distribution width (PDW) are two platelet parameters that participate in many inflammatory processes. This study aims to investigate the relationship between MPV/PDW and DUP. A total of 165 patients were studied retrospectively, including 21 females and 144 males. The study included two groups: 87 normal patients (control group) and 78 duodenal ulcer perforation patients (DUP group). Routine blood parameters were collected for analysis, including white blood cell count (WBC), neutrophil ratio (NR), platelet count (PLT), MPV and PDW. Receiver operating characteristic (ROC) curve analysis was applied to evaluate the parameters' sensitivity. No significant differences were observed between the control group and DUP group in age and gender. WBC, NR and PDW were significantly increased in the DUP group (P < 0.001 for each); PLT and MPV were significantly decreased in the DUP group (P < 0.001 for each) compared to controls. MPV had the highest sensitivity. Our results suggested a potential association between MPV/PDW and disease activity in DUP patients, and a high sensitivity of MPV. © 2017 by the Association of Clinical Scientists, Inc.

  4. Flexible nanopillar-based electrochemical sensors for genetic detection of foodborne pathogens

    NASA Astrophysics Data System (ADS)

    Park, Yoo Min; Lim, Sun Young; Jeong, Soon Woo; Song, Younseong; Bae, Nam Ho; Hong, Seok Bok; Choi, Bong Gill; Lee, Seok Jae; Lee, Kyoung G.

    2018-06-01

    Flexible and highly ordered nanopillar-arrayed electrodes have attracted great interest for many electrochemical applications, especially biosensors, because of their unique mechanical and topological properties. Herein, we report an advanced method to fabricate highly ordered nanopillar electrodes by soft-/photo-lithography and metal evaporation. The highly ordered nanopillar array exhibited superior electrochemical and mechanical properties, offering a wide surface area for reaction with electrolytes and enabling sensitive analysis. As-prepared gold and silver electrodes on nanopillar arrays exhibit excellent and stable electrochemical performance in detecting the amplified gene from the foodborne pathogen Escherichia coli O157:H7. Additionally, the lightweight, flexible, and USB-connectable nanopillar-based electrochemical sensor platform improves connectivity, portability, and sensitivity. Moreover, we successfully confirm the performance of genetic analysis using real food, a specially designed intercalator, and amplified genes from foodborne pathogens, with high reproducibility (6% standard deviation) and sensitivity (10 × 1.01 CFU) within 25 s, based on the square wave voltammetry principle. This study confirmed that the excellent mechanical and chemical characteristics of nanopillar electrodes provide considerable electrochemical activity for application as a genetic biosensor platform in the field of point-of-care testing (POCT).

  5. Extension of the ADjoint Approach to a Laminar Navier-Stokes Solver

    NASA Astrophysics Data System (ADS)

    Paige, Cody

    The use of adjoint methods is common in computational fluid dynamics to reduce the cost of the sensitivity analysis in an optimization cycle. The forward mode ADjoint is a combination of an adjoint sensitivity analysis method with forward mode automatic differentiation (AD), and is a modification of the reverse mode ADjoint method proposed by Mader et al. [1]. A colouring acceleration technique is presented to reduce the computational cost increase associated with forward mode AD. The forward mode AD facilitates the implementation of the laminar Navier-Stokes (NS) equations. The forward mode ADjoint method is applied to a three-dimensional computational fluid dynamics solver. The resulting Euler and viscous ADjoint sensitivities are compared to the reverse mode Euler ADjoint derivatives and a complex-step method to demonstrate the reduced computational cost and accuracy. Both comparisons demonstrate the benefits of the colouring method and the practicality of using forward mode AD. [1] Mader, C.A., Martins, J.R.R.A., Alonso, J.J., and van der Weide, E. (2008) ADjoint: An approach for the rapid development of discrete adjoint solvers. AIAA Journal, 46(4):863-873. doi:10.2514/1.29123.
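
    The complex-step method mentioned above as an accuracy benchmark is easy to sketch: evaluating the function at a tiny imaginary perturbation gives f'(x) ≈ Im f(x + ih)/h with no subtractive cancellation, unlike finite differences. The test function below is an arbitrary smooth stand-in for a solver output.

    import numpy as np

    def f(x):
        # Stand-in for a flow-solver output as a function of a design variable.
        return np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)

    x0, h = 1.5, 1e-200
    d_complex = np.imag(f(x0 + 1j * h)) / h              # complex step
    d_fd = (f(x0 + 1e-6) - f(x0 - 1e-6)) / 2e-6          # central difference

    print(f"complex-step     : {d_complex:.15f}")
    print(f"finite difference: {d_fd:.15f}")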

  6. Meta-epidemiologic study showed frequent time trends in summary estimates from meta-analyses of diagnostic accuracy studies.

    PubMed

    Cohen, Jérémie F; Korevaar, Daniël A; Wang, Junfeng; Leeflang, Mariska M; Bossuyt, Patrick M

    2016-09-01

    To evaluate changes over time in summary estimates from meta-analyses of diagnostic accuracy studies. We included 48 meta-analyses from 35 MEDLINE-indexed systematic reviews published between September 2011 and January 2012 (743 diagnostic accuracy studies; 344,015 participants). Within each meta-analysis, we ranked studies by publication date. We applied random-effects cumulative meta-analysis to follow how summary estimates of sensitivity and specificity evolved over time. Time trends were assessed by fitting a weighted linear regression model of the summary accuracy estimate against rank of publication. The median of the 48 slopes was -0.02 (-0.08 to 0.03) for sensitivity and -0.01 (-0.03 to 0.03) for specificity. Twelve of 96 (12.5%) time trends in sensitivity or specificity were statistically significant. We found a significant time trend in at least one accuracy measure for 11 of the 48 (23%) meta-analyses. Time trends in summary estimates are relatively frequent in meta-analyses of diagnostic accuracy studies. Results from early meta-analyses of diagnostic accuracy studies should be considered with caution. Copyright © 2016 Elsevier Inc. All rights reserved.
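
    The cumulative re-pooling described above can be sketched by adding studies in publication order and recomputing the random-effects summary at each step; the DerSimonian-Laird estimator used below is a standard choice, though not necessarily the authors', and the effect estimates are invented.

    import numpy as np

    def dersimonian_laird(y, v):
        """Random-effects pooled estimate of effects y with variances v."""
        if len(y) == 1:
            return y[0]
        w = 1.0 / v
        ybar = np.sum(w * y) / np.sum(w)
        q = np.sum(w * (y - ybar) ** 2)
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(y) - 1)) / c)   # between-study variance
        w_re = 1.0 / (v + tau2)
        return np.sum(w_re * y) / np.sum(w_re)

    y = np.array([1.8, 1.5, 1.6, 1.2, 1.1])    # logit(sensitivity), by date
    v = np.array([0.10, 0.08, 0.12, 0.05, 0.06])

    for k in range(1, len(y) + 1):
        pooled = dersimonian_laird(y[:k], v[:k])
        print(f"after {k} studies: summary sensitivity = "
              f"{1.0 / (1.0 + np.exp(-pooled)):.3f}")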

  7. Ultra-high resolution, polarization sensitive transversal optical coherence tomography for structural analysis and strain mapping

    NASA Astrophysics Data System (ADS)

    Wiesauer, Karin; Pircher, Michael; Goetzinger, Erich; Hitzenberger, Christoph K.; Engelke, Rainer; Ahrens, Gisela; Pfeiffer, Karl; Ostrzinski, Ute; Gruetzner, Gabi; Oster, Reinhold; Stifter, David

    2006-02-01

    Optical coherence tomography (OCT) is a contactless and non-invasive technique applied nearly exclusively for bio-medical imaging of tissues. In addition to the internal structure, strains within the sample can be mapped when OCT is performed in a polarization-sensitive (PS) way. In this work, we demonstrate the benefits of PS-OCT imaging for non-biological applications. We have developed the OCT technique beyond the state-of-the-art: based on transversal ultra-high resolution (UHR-)OCT, where an axial resolution below 2 μm within materials is obtained using a femtosecond laser as light source, we have modified the setup for polarization-sensitive measurements (transversal UHR-PS-OCT). We perform structural analysis and strain mapping for different types of samples: for a highly strained elastomer specimen we demonstrate the necessity of UHR imaging. Furthermore, we investigate epoxy waveguide structures, photoresist moulds for the fabrication of micro-electromechanical systems (MEMS), and the glass-fibre composite outer shell of helicopter rotor blades in which cracks are present. For these examples, transversal scanning UHR-PS-OCT is shown to provide important information about the structural properties and the strain distribution within the samples.

  8. A Preliminary Detection of Arcminute Scale Cosmic Microwave Background Anisotropy with the BIMA Array

    NASA Technical Reports Server (NTRS)

    Dawson, K. S.; Holzapfel, W. L.; Carlstrom, J. E.; Joy, M.; LaRoque, S. J.; Reese, E. D.; Rose, M. Franklin (Technical Monitor)

    2001-01-01

    We have used the Berkeley-Illinois-Maryland Association (BIMA) array outfitted with sensitive cm-wave receivers to expand our search for arcminute-scale anisotropy of the Cosmic Microwave Background (CMB). The interferometer was placed in a compact configuration to obtain high brightness sensitivity on arcminute scales over its 6.6' FWHM field of view. The sensitivity of this experiment to flat band power peaks at a multipole of ℓ = 5530, which corresponds to an angular scale of ~2'. We present the analysis of a total of 470 hours of on-source integration time on eleven independent fields which were selected based on their low IR contrast and lack of bright radio sources. Applying a Bayesian analysis to the visibility data, we find CMB anisotropy flat band power Q_flat = 6.1 (+2.8/-4.8) microKelvin at 68% confidence. The confidence of a nonzero signal is 76%, and we find an upper limit of Q_flat < 12.4 microKelvin at 95% confidence. We have supplemented our BIMA observations with concurrent observations at 4.8 GHz with the VLA to search for and remove point sources. We find that the point sources make an insignificant contribution to the observed anisotropy.

  9. An alternative respiratory sounds classification system utilizing artificial neural networks.

    PubMed

    Oweis, Rami J; Abdulhay, Enas W; Khayal, Amer; Awad, Areen

    2015-01-01

    Computerized lung sound analysis involves recording lung sound via an electronic device, followed by computer analysis and classification based on specific signal characteristics, such as the non-linearity and non-stationarity caused by air turbulence. An automatic analysis is necessary to avoid dependence on expert skills. This work revolves around exploiting autocorrelation in the feature extraction stage. All process stages were implemented in MATLAB. The classification process was performed comparatively using both artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) toolboxes. The methods were applied to 10 different respiratory sounds for classification. The ANN was superior to the ANFIS system, returning better performance parameters: its accuracy, specificity, and sensitivity were 98.6%, 100%, and 97.8%, respectively. The obtained parameters showed superiority to many recent approaches. The proposed method is an efficient, fast tool for the intended purpose, as manifested in the performance parameters, specifically accuracy, specificity, and sensitivity. Furthermore, utilizing the autocorrelation function in feature extraction for such applications results in enhanced performance and avoids undesired computational complexity compared to other techniques.

  10. Application of the High Resolution Melting analysis for genetic mapping of Sequence Tagged Site markers in narrow-leafed lupin (Lupinus angustifolius L.).

    PubMed

    Kamel, Katarzyna A; Kroc, Magdalena; Święcicki, Wojciech

    2015-01-01

    Sequence tagged site (STS) markers are valuable tools for genetic and physical mapping that can be successfully used in comparative analyses among related species. Current challenges for molecular marker genotyping in plants include the lack of fast, sensitive and inexpensive methods suitable for sequence variant detection. In contrast, high resolution melting (HRM) is a simple and high-throughput assay, which has been widely applied in sequence polymorphism identification as well as in studies of genetic variability and genotyping. The present study is the first attempt to use HRM analysis to genotype STS markers in narrow-leafed lupin (Lupinus angustifolius L.). The sensitivity and utility of this method were confirmed by sequence polymorphism detection based on melting curve profiles in the parental genotypes and progeny of the narrow-leafed lupin mapping population. The application of different approaches, including amplicon size and simulated heterozygote analysis, allowed for the successful genetic mapping of 16 new STS markers in the narrow-leafed lupin genome.

  11. Measuring and explaining eco-efficiencies of wastewater treatment plants in China: An uncertainty analysis perspective.

    PubMed

    Dong, Xin; Zhang, Xinyi; Zeng, Siyu

    2017-04-01

    In the context of sustainable development, there has been an increasing requirement for an eco-efficiency assessment of wastewater treatment plants (WWTPs). Data envelopment analysis (DEA), a technique that is widely applied for relative efficiency assessment, is used in combination with the tolerances approach to handle WWTPs' multiple inputs and outputs as well as their uncertainty. The economic cost, energy consumption, contaminant removal, and global warming effect during the treatment processes are integrated to interpret the eco-efficiency of WWTPs. A total of 736 sample plants from across China are assessed, and large sensitivities to variations in inputs and outputs are observed for most samples, with only three WWTPs identified as being stably efficient. Size of plant, overcapacity, climate type, and influent characteristics are proven to have a significant influence on both the mean efficiency and performance sensitivity of WWTPs, while no clear relationships were found between eco-efficiency and technology under the framework of uncertainty analysis. The incorporation of uncertainty quantification and environmental impact consideration has improved the reliability and applicability of the assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.
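
    The efficiency-scoring step of DEA can be sketched as a small linear program: for each plant, find the smallest uniform input contraction theta achievable by a non-negative combination of all plants' input-output profiles. The sketch below is a basic input-oriented CCR model with toy data, not the tolerances-based formulation used in the study.

    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[2.0, 3.0, 4.0, 5.0],    # input 1 (cost) for 4 plants
                  [1.0, 2.0, 3.0, 2.5]])   # input 2 (energy)
    Y = np.array([[1.0, 2.0, 2.5, 2.0]])   # output (contaminant removal)

    def ccr_efficiency(j0):
        m, n = X.shape
        s = Y.shape[0]
        c = np.r_[1.0, np.zeros(n)]                    # minimize theta
        # inputs: X @ lam <= theta * x0 ; outputs: Y @ lam >= y0
        A_ub = np.block([[-X[:, [j0]], X],
                         [np.zeros((s, 1)), -Y]])
        b_ub = np.r_[np.zeros(m), -Y[:, j0]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] + [(0, None)] * n)
        return res.fun

    for j in range(X.shape[1]):
        print(f"plant {j}: efficiency = {ccr_efficiency(j):.3f}")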

  12. Analysis of Serum Total and Free PSA Using Immunoaffinity Depletion Coupled to SRM: Correlation with Clinical Immunoassay Tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Tao; Hossain, Mahmud; Schepmoes, Athena A.

    2012-08-03

    Sandwich immunoassay is the standard technique used in clinical labs for quantifying protein biomarkers for disease detection, monitoring and therapeutic intervention. Albeit highly sensitive, the development of a specific immunoassay is rather time-consuming and associated with extremely high cost due to the requirement for paired immunoaffinity reagents of high specificity. Recently, mass spectrometry-based methods, specifically selected reaction monitoring mass spectrometry (SRM-MS), have been increasingly applied to measure low abundance biomarker candidates in tissue and biofluids, owing to high sensitivity and specificity, simplicity of assay configuration, and great multiplexing capability. In this study, we report for the first time the development of immunoaffinity depletion-based workflows and SRM-MS assays that enable sensitive and accurate quantification of total and free prostate-specific antigen (PSA) in serum without the requirement for specific PSA antibodies. With stable isotope dilution and external calibration, low ng/mL level detection of both total and free PSA was consistently achieved in both PSA-spiked female serum samples and actual patient serum samples. Moreover, comparison of the results obtained when SRM PSA assays and conventional immunoassays were applied to the same samples showed very good correlation (R2 values ranging from 0.90 to 0.99) in several independent clinical serum sample sets, including a set of 33 samples assayed in a blinded test. These results demonstrate that the workflows and SRM assays developed here provide an attractive alternative for reliably measuring total and free PSA in human blood. Furthermore, simultaneous measurement of free and total PSA and many other biomarkers can be performed in a single analysis using high-resolution liquid chromatographic separation coupled with SRM-MS.

  13. Quantification of Kryptofix 2.2.2 in [18F]fluorine-labelled radiopharmaceuticals by rapid-resolution liquid chromatography.

    PubMed

    Lao, Yexing; Yang, Cuiping; Zou, Wei; Gan, Manquan; Chen, Ping; Su, Weiwei

    2012-05-01

    The cryptand Kryptofix 2.2.2 is used extensively as a phase-transfer reagent in the preparation of [18F]fluoride-labelled radiopharmaceuticals. However, it has considerable acute toxicity. The aim of this study was to develop and validate a method for rapid (within 1 min), specific and sensitive quantification of Kryptofix 2.2.2 at trace levels. Chromatographic separations were carried out by rapid-resolution liquid chromatography (Agilent ZORBAX SB-C18 rapid-resolution column, 2.1 × 30 mm, 3.5 μm). Tandem mass spectra were acquired using a triple quadrupole mass spectrometer equipped with an electrospray ionization interface. Quantitative mass spectrometric analysis was conducted in positive ion mode and multiple reaction monitoring mode for the m/z 377.3 → 114.1 transition for Kryptofix 2.2.2. The external standard method was used for quantification. The method met the precision and efficiency requirements for PET radiopharmaceuticals, providing satisfactory results for specificity, matrix effect, stability, linearity (0.5-100 ng/ml, r² = 0.9975), precision (coefficient of variation < 5%), accuracy (relative error < ± 3%), sensitivity (lower limit of quantification = 0.5 ng) and detection time (<1 min). Fluorodeoxyglucose (n=6) was analysed, and the Kryptofix 2.2.2 content was found to be well below the maximum permissible levels approved by the US Food and Drug Administration. The developed method has a short analysis time (<1 min) and high sensitivity (lower limit of quantification = 0.5 ng/ml) and can be successfully applied to rapid quantification of Kryptofix 2.2.2 at trace levels in fluorodeoxyglucose. This method could also be applied to other [18F]fluorine-labelled radiopharmaceuticals that use Kryptofix 2.2.2 as a phase-transfer reagent.
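
    As an illustration of the external-standard calibration behind the figures above, the sketch below fits a linear calibration curve and derives detection and quantification limits from the residual standard deviation (the common 3.3σ/S and 10σ/S rules); the calibration points are invented, and the study may have used a different sensitivity criterion.

    import numpy as np

    conc = np.array([0.5, 1.0, 5.0, 10.0, 50.0, 100.0])   # ng/mL standards
    area = np.array([52.0, 101.0, 498.0, 1005.0, 4980.0, 9950.0])  # response

    slope, intercept = np.polyfit(conc, area, 1)
    resid = area - (slope * conc + intercept)
    sd = resid.std(ddof=2)            # residual SD with 2 fitted parameters

    lod = 3.3 * sd / slope
    loq = 10.0 * sd / slope
    print(f"slope = {slope:.1f}, LOD = {lod:.2f} ng/mL, LOQ = {loq:.2f} ng/mL")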

  14. Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) for a 3-D Flexible Wing

    NASA Technical Reports Server (NTRS)

    Gumbert, Clyde R.; Hou, Gene J.-W.

    2001-01-01

    The formulation and implementation of an optimization method called Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) are extended from single discipline analysis (aerodynamics only) to multidisciplinary analysis - in this case, static aero-structural analysis - and applied to a simple 3-D wing problem. The method aims to reduce the computational expense incurred in performing shape optimization using state-of-the-art Computational Fluid Dynamics (CFD) flow analysis, Finite Element Method (FEM) structural analysis and sensitivity analysis tools. Results for this small problem show that the method reaches the same local optimum as conventional optimization. However, unlike its application to the wing (single discipline analysis), the method, as implemented here, may not show significant reduction in the computational cost. Similar reductions were seen in the two-design-variable (DV) problem results but not in the 8-DV results given here.

  15. Nonlinear Acoustic and Ultrasonic NDT of Aeronautical Components

    NASA Astrophysics Data System (ADS)

    Van Den Abeele, Koen; Katkowski, Tomasz; Mattei, Christophe

    2006-05-01

    In response to the demand for innovative microdamage inspection systems with high sensitivity and undoubted accuracy, we are currently investigating the use and robustness of several acoustic and ultrasonic NDT techniques based on Nonlinear Elastic Wave Spectroscopy (NEWS) for the characterization of microdamage in aeronautical components. In this report, we illustrate the results of an amplitude-dependent analysis of the resonance behaviour, in both the time (signal reverberation) and frequency (sweep) domains. The technique is applied to intact and damaged samples of Carbon Fiber Reinforced Plastic (CFRP) composites after thermal loading or mechanical fatigue. The method shows a considerable gain in sensitivity and an incontestable interpretation of the results for nonlinear signatures in comparison with the linear characteristics. For highly fatigued samples, slow dynamical effects are observed.

  16. Stacked graphene nanofibers for electrochemical oxidation of DNA bases.

    PubMed

    Ambrosi, Adriano; Pumera, Martin

    2010-08-21

    In this article, we show that stacked graphene nanofibers (SGNFs) demonstrate superior electrochemical performance for oxidation of DNA bases over carbon nanotubes (CNTs). This is due to an exceptionally high number of accessible graphene sheet edges on the surface of the nanofibers when compared to carbon nanotubes, as shown by transmission electron microscopy and Raman spectroscopy. The oxidation signals of adenine, guanine, cytosine, and thymine exhibit two to four times higher currents than on CNT-based electrodes. SGNFs also exhibit higher sensitivity than do edge-plane pyrolytic graphite, glassy carbon, or graphite microparticle-based electrodes. We also demonstrate that influenza A(H1N1)-related strands can be sensitively oxidized on SGNF-based electrodes, which could therefore be applied to label-free DNA analysis.

  17. A technique for using radio jets as extended gravitational lensing probes

    NASA Technical Reports Server (NTRS)

    Kronberg, Philipp P.; Dyer, Charles C.; Burbidge, E. Margaret; Junkkarinen, Vesa T.

    1991-01-01

    A new and potentially powerful method of measuring the mass of a galaxy (or dark matter concentration) which lies close in position to a background polarized radio jet is proposed. Using the fact that the polarization angle is not changed by lensing, an 'alignment-breaking parameter' is defined which is a sensitive indicator of gravitational distortion. The method remains sensitive over a wide redshift range of the gravitational lens. This technique is applied to the analysis of polarimetric observations of the jet of 3C 9 at z = 2.012, combined with a newly discovered 20.3 mag foreground galaxy at z = 0.2538 to 'weigh' the galaxy and obtain an approximate upper limit to the mass-to-light ratio.

  18. A Highly Sensitive Assay Using Synthetic Blood Containing Test Microbes for Evaluation of the Penetration Resistance of Protective Clothing Material under Applied Pressure.

    PubMed

    Shimasaki, Noriko; Hara, Masayuki; Kikuno, Ritsuko; Shinohara, Katsuaki

    2016-01-01

    To prevent nosocomial infections caused by pathogens such as Ebola virus or methicillin-resistant Staphylococcus aureus (MRSA), healthcare workers must wear appropriate protective clothing that can inhibit contact transmission of these pathogens. Therefore, it is necessary to evaluate the performance of protective clothing for penetration resistance against infectious agents. In Japan, some standard methods were established to evaluate the penetration resistance of protective clothing fabric materials under applied pressure. However, these methods only roughly classify the penetration resistance of fabrics, and neither their detection sensitivity nor the penetration amount, with respect to the relationship between blood and pathogen, has been studied in detail. Moreover, no standard method using bacteria for evaluation is known. Here, to evaluate the penetration resistance of protective clothing materials under applied pressure, the detection sensitivity and the leak amount were investigated using synthetic blood containing bacteriophage phi-X174 or S. aureus, and the volume of leaked synthetic blood and the amount of test-microbe penetration were quantified simultaneously. Our results showed that the penetration detection sensitivity achieved using a test microbial culture was higher than that achieved using synthetic blood at pressures producing invisible leaks. This finding suggests that there is a potential risk of pathogen penetration even when no visual leak of contaminated blood through the protective clothing is observed. Moreover, at pressures producing visible leaks, the amount of test-microbe penetration was found to vary at least ten-fold among protective clothing materials classified into the same class of penetration resistance. Analysis of the penetration amount revealed a significant correlation between the volume of penetrated synthetic blood and the amount of test-microbe penetration, indicating that the leaked volume of synthetic blood could serve as a latent indicator of infection risk, in that the amount of exposure to contaminated blood corresponds to the risk of infection. Our study helped us ascertain, with high sensitivity, the differences among fabric materials with respect to their protective performance, which may facilitate effective selection of protective clothing depending on the risk assessment.

  19. The Contribution of Particle Swarm Optimization to Three-Dimensional Slope Stability Analysis

    PubMed Central

    A Rashid, Ahmad Safuan; Ali, Nazri

    2014-01-01

    Over the last few years, particle swarm optimization (PSO) has been extensively applied to various geotechnical engineering problems, including slope stability analysis. However, this contribution was limited to two-dimensional (2D) slope stability analysis. This paper applied PSO to the three-dimensional (3D) slope stability problem to determine the critical slip surface (CSS) of soil slopes. A detailed description of the adopted PSO was presented to provide a good basis for further contributions of this technique to the field of 3D slope stability problems. A general rotating ellipsoid shape was introduced as the specific particle for 3D slope stability analysis. A detailed sensitivity analysis was designed and performed to find the optimum values of the PSO parameters. Example problems were used to evaluate the applicability of PSO in determining the CSS of 3D slopes. The first example presented a comparison between the results of PSO and the PLAXI-3D finite element software, and the second example compared the ability of PSO to determine the CSS of 3D slopes with other optimization methods from the literature. The results demonstrated the efficiency and effectiveness of PSO in determining the CSS of 3D soil slopes. PMID:24991652

  20. The contribution of particle swarm optimization to three-dimensional slope stability analysis.

    PubMed

    Kalatehjari, Roohollah; Rashid, Ahmad Safuan A; Ali, Nazri; Hajihassani, Mohsen

    2014-01-01

    Over the last few years, particle swarm optimization (PSO) has been extensively applied to various geotechnical engineering problems, including slope stability analysis. However, this contribution was limited to two-dimensional (2D) slope stability analysis. This paper applied PSO to the three-dimensional (3D) slope stability problem to determine the critical slip surface (CSS) of soil slopes. A detailed description of the adopted PSO was presented to provide a good basis for further contributions of this technique to the field of 3D slope stability problems. A general rotating ellipsoid shape was introduced as the specific particle for 3D slope stability analysis. A detailed sensitivity analysis was designed and performed to find the optimum values of the PSO parameters. Example problems were used to evaluate the applicability of PSO in determining the CSS of 3D slopes. The first example presented a comparison between the results of PSO and the PLAXI-3D finite element software, and the second example compared the ability of PSO to determine the CSS of 3D slopes with other optimization methods from the literature. The results demonstrated the efficiency and effectiveness of PSO in determining the CSS of 3D soil slopes.
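
    A minimal PSO loop of the kind applied in these papers is sketched below; the objective is a simple stand-in (in the slope stability setting it would be the factor of safety of a candidate ellipsoidal slip surface), and the inertia and acceleration coefficients are common textbook defaults rather than the papers' tuned values.

    import numpy as np

    rng = np.random.default_rng(3)
    n_particles, n_dims, iters = 30, 2, 200
    w, c1, c2 = 0.7, 1.5, 1.5            # inertia, cognitive, social weights

    def objective(x):                    # placeholder for the factor of safety
        return np.sum((x - 1.0) ** 2, axis=1)

    pos = rng.uniform(-5.0, 5.0, (n_particles, n_dims))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), objective(pos)
    gbest = pbest[np.argmin(pbest_val)]

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, n_dims))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        val = objective(pos)
        better = val < pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]
        gbest = pbest[np.argmin(pbest_val)]

    print("best solution:", gbest.round(3), "objective:", pbest_val.min())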

  1. Fast Detection of Copper Content in Rice by Laser-Induced Breakdown Spectroscopy with Uni- and Multivariate Analysis.

    PubMed

    Liu, Fei; Ye, Lanhan; Peng, Jiyu; Song, Kunlin; Shen, Tingting; Zhang, Chu; He, Yong

    2018-02-27

    Fast detection of heavy metals is very important for ensuring the quality and safety of crops. Laser-induced breakdown spectroscopy (LIBS), coupled with uni- and multivariate analysis, was applied for quantitative analysis of copper in three kinds of rice (Jiangsu rice, regular rice, and Simiao rice). For univariate analysis, three pre-processing methods were applied to reduce fluctuations, including background normalization, the internal standard method, and the standard normal variate (SNV). Linear regression models showed a strong correlation between spectral intensity and Cu content, with an R² of more than 0.97. The limit of detection (LOD) was around 5 ppm, lower than the tolerance limit of copper in foods. For multivariate analysis, partial least squares regression (PLSR) showed its advantage in extracting effective information for prediction, and its sensitivity reached 1.95 ppm, while support vector machine regression (SVMR) performed better in both calibration and prediction sets, where Rc² and Rp² reached 0.9979 and 0.9879, respectively. This study showed that LIBS could be considered as a constructive tool for the quantification of copper contamination in rice.

  2. Fast Detection of Copper Content in Rice by Laser-Induced Breakdown Spectroscopy with Uni- and Multivariate Analysis

    PubMed Central

    Ye, Lanhan; Song, Kunlin; Shen, Tingting

    2018-01-01

    Fast detection of heavy metals is very important for ensuring the quality and safety of crops. Laser-induced breakdown spectroscopy (LIBS), coupled with uni- and multivariate analysis, was applied for quantitative analysis of copper in three kinds of rice (Jiangsu rice, regular rice, and Simiao rice). For univariate analysis, three pre-processing methods were applied to reduce fluctuations, including background normalization, the internal standard method, and the standard normal variate (SNV). Linear regression models showed a strong correlation between spectral intensity and Cu content, with an R² of more than 0.97. The limit of detection (LOD) was around 5 ppm, lower than the tolerance limit of copper in foods. For multivariate analysis, partial least squares regression (PLSR) showed its advantage in extracting effective information for prediction, and its sensitivity reached 1.95 ppm, while support vector machine regression (SVMR) performed better in both calibration and prediction sets, where Rc² and Rp² reached 0.9979 and 0.9879, respectively. This study showed that LIBS could be considered as a constructive tool for the quantification of copper contamination in rice. PMID:29495445
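
    The multivariate calibration step can be sketched with scikit-learn's PLS implementation; the "spectra" below are synthetic, with a single Gaussian emission line whose height scales with copper concentration.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(5)
    n_samples, n_channels = 120, 500
    conc = rng.uniform(0.0, 50.0, n_samples)             # ppm Cu
    line = np.exp(-0.5 * ((np.arange(n_channels) - 250) / 4.0) ** 2)
    spectra = conc[:, None] * line + rng.normal(0.0, 0.3,
                                                (n_samples, n_channels))

    X_tr, X_te, y_tr, y_te = train_test_split(spectra, conc, random_state=0)
    pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
    print(f"R2 (calibration) = {pls.score(X_tr, y_tr):.4f}")
    print(f"R2 (prediction)  = {pls.score(X_te, y_te):.4f}")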

  3. Phase-locked and non-phase-locked EEG responses to pinprick stimulation before and after experimentally-induced secondary hyperalgesia.

    PubMed

    van den Broeke, Emanuel N; de Vries, Bart; Lambert, Julien; Torta, Diana M; Mouraux, André

    2017-08-01

    Pinprick-evoked brain potentials (PEPs) have been proposed as a technique to investigate secondary hyperalgesia and central sensitization in humans. However, the signal-to-noise ratio (SNR) of PEPs is low. Here, using time-frequency analysis, we characterize the phase-locked and non-phase-locked EEG responses to pinprick stimulation, before and after secondary hyperalgesia. Secondary hyperalgesia was induced using high-frequency electrical stimulation (HFS) of the left/right forearm skin in 16 volunteers. EEG responses to 64 and 96 mN pinprick stimuli were elicited from both arms, before and 20 min after HFS. Pinprick stimulation applied to normal skin elicited a phase-locked low-frequency (<5 Hz) response followed by a reduction of alpha-band oscillations (7-10 Hz). The low-frequency response was significantly increased when pinprick stimuli were delivered to the area of secondary hyperalgesia. There was no change in the reduction of alpha-band oscillations. Whereas the low-frequency response was enhanced for both the 64 and 96 mN intensities, PEPs analyzed in the time domain were only significantly enhanced for the 64 mN intensity. Time-frequency analysis may be more sensitive than conventional time-domain analysis in revealing EEG changes associated with secondary hyperalgesia. Time-frequency analysis of PEPs can be used to investigate central sensitization in humans. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.

  4. Uncertainty Reduction using Bayesian Inference and Sensitivity Analysis: A Sequential Approach to the NASA Langley Uncertainty Quantification Challenge

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar

    2016-01-01

    This paper presents a computational framework for uncertainty characterization and propagation, and sensitivity analysis under the presence of aleatory and epistemic uncertainty, and develops a rigorous methodology for efficient refinement of epistemic uncertainty by identifying important epistemic variables that significantly affect the overall performance of an engineering system. The proposed methodology is illustrated using the NASA Langley Uncertainty Quantification Challenge (NASA-LUQC) problem that deals with uncertainty analysis of a generic transport model (GTM). First, Bayesian inference is used to infer subsystem-level epistemic quantities using the subsystem-level model and corresponding data. Second, tools of variance-based global sensitivity analysis are used to identify four important epistemic variables (this limitation specified in the NASA-LUQC is reflective of practical engineering situations where not all epistemic variables can be refined due to time/budget constraints) that significantly affect system-level performance. The most significant contribution of this paper is the development of the sequential refinement methodology, where epistemic variables for refinement are not identified all-at-once. Instead, only one variable is first identified, and then, Bayesian inference and global sensitivity calculations are repeated to identify the next important variable. This procedure is continued until all 4 variables are identified and the refinement in the system-level performance is computed. The advantages of the proposed sequential refinement methodology over the all-at-once uncertainty refinement approach are explained, and then applied to the NASA Langley Uncertainty Quantification Challenge problem.
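
    The variance-based global sensitivity step can be sketched with SALib's Sobol routines; the performance function and epistemic-variable bounds below are toy stand-ins for the GTM system-level model.

    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {
        "num_vars": 4,
        "names": ["e1", "e2", "e3", "e4"],   # epistemic variables (assumed)
        "bounds": [[-1.0, 1.0]] * 4,
    }

    def performance(x):
        # Toy system-level performance with unequal variable importance.
        return x[:, 0] ** 2 + 2.0 * x[:, 1] + 0.5 * x[:, 2] * x[:, 3]

    X = saltelli.sample(problem, 1024)
    Y = performance(X)
    Si = sobol.analyze(problem, Y)
    for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
        print(f"{name}: first-order = {s1:.3f}, total = {st:.3f}")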

  5. The use of laser-induced fluorescence or ultraviolet detectors for sensitive and selective analysis of tobramycin or erythropoietin in complex samples

    NASA Astrophysics Data System (ADS)

    Ahmed, Hytham M.; Ebeid, Wael B.

    2015-05-01

    The analysis of complex samples is a challenge in pharmaceutical and biopharmaceutical analysis. In this work, tobramycin (TOB) analysis in human urine samples and recombinant human erythropoietin (rhEPO) analysis in the presence of a similar protein were selected as representative examples of such analyses. Assays of TOB in urine samples are difficult because of poor detectability. Therefore, a laser-induced fluorescence (LIF) detector was combined with a separation technique, micellar electrokinetic chromatography (MEKC), to determine TOB through derivatization with fluorescein isothiocyanate (FITC). Borate was used as the background electrolyte (BGE) with negatively charged mixed micelles as an additive. The method was successfully applied to urine samples. The LOD and LOQ for tobramycin in urine were 90 and 200 ng/ml, respectively, and recovery was >98% (n = 5). All urine samples were analyzed by direct injection without sample pre-treatment. Another hyphenated analytical technique, capillary zone electrophoresis (CZE) with ultraviolet (UV) detection, was also used for sensitive analysis of rhEPO at low levels (2000 IU) in the presence of a large amount of human serum albumin (HSA). Analysis of rhEPO was achieved by the use of electrokinetic injection (EI) with discontinuous buffers. Phosphate buffer was used as the BGE with metal ions as an additive. The proposed method can be used for the estimation of a large number of quality control rhEPO samples in a short period.

  6. A systems approach for the development of a sustainable community--the application of the sensitivity model (SM).

    PubMed

    Chan, Shih-Liang; Huang, Shu-Li

    2004-09-01

    Corresponding to the concept of 'Think globally, act locally and plan regionally' of sustainable development, this paper discusses the approach of planning a sustainable community in terms of systems thinking. We apply a systems tool, the sensitivity model (SM), to build a model of the development of the community of Ping-Ding, located adjacent to the Yang-Ming-Shan National Park, Taiwan. The major issue in the development of Ping-Ding is the conflict between environmental conservation and the development of a local tourism industry. With the involvement of local residents, planners, and interest groups, a system model of 26 variables was defined to identify characteristics of Ping-Ding through pattern recognition. Two scenarios concerning the sustainable development of Ping-Ding are simulated with interlinked feedbacks from variables. The results of the analysis indicate that the development of Ping-Ding would be better served by the planning of agriculture and the tourism industry. The advantages and shortfalls of applying SM in the current planning environment of Taiwan are also discussed to conclude this paper.

  7. An Integrated Framework for Multipollutant Air Quality Management and Its Application in Georgia

    NASA Astrophysics Data System (ADS)

    Cohan, Daniel S.; Boylan, James W.; Marmur, Amit; Khan, Maudood N.

    2007-10-01

    Air protection agencies in the United States increasingly confront non-attainment of air quality standards for multiple pollutants sharing interrelated emission origins. Traditional approaches to attainment planning face important limitations that are magnified in the multipollutant context. Recognizing those limitations, the Georgia Environmental Protection Division has adopted an integrated framework to address ozone, fine particulate matter, and regional haze in the state. Rather than applying atmospheric modeling merely as a final check of an overall strategy, photochemical sensitivity analysis is conducted upfront to compare the effectiveness of controlling various precursor emission species and source regions. Emerging software enables the modeling of health benefits and associated economic valuations resulting from air pollution control. Photochemical sensitivity and health benefits analyses, applied together with traditional cost and feasibility assessments, provide a more comprehensive characterization of the implications of various control options. The fuller characterization both informs the selection of control options and facilitates the communication of impacts to affected stakeholders and the public. Although the integrated framework represents a clear improvement over previous attainment-planning efforts, key remaining shortcomings are also discussed.

  8. Nonlinear Dynamical Analysis of Fibrillation

    NASA Astrophysics Data System (ADS)

    Kerin, John A.; Sporrer, Justin M.; Egolf, David A.

    2013-03-01

    The development of spatiotemporal chaotic behavior in heart tissue, termed fibrillation, is a devastating, life-threatening condition. The chaotic behavior of electrochemical signals, in the form of spiral waves, causes the muscles of the heart to contract in an incoherent manner, hindering the heart's ability to pump blood. We have applied the mathematical tools of nonlinear dynamics to large-scale simulations of a model of fibrillating heart tissue to uncover the dynamical modes driving this chaos. By studying the evolution of Lyapunov vectors and exponents over short times, we have found that the fibrillating tissue is sensitive to electrical perturbations only in narrow regions immediately in front of the leading edges of spiral waves, especially when these waves collide, break apart, or hit the edges of the tissue sample. Using this knowledge, we have applied small stimuli to areas of varying sensitivity. By studying the evolution of the effects of these perturbations, we have made progress toward controlling the electrochemical patterns associated with heart fibrillation. This work was supported by the U.S. National Science Foundation (DMR-0094178) and Research Corporation.
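
    The Lyapunov machinery invoked above can be illustrated on a much simpler system. The Python sketch below estimates the largest Lyapunov exponent of the logistic map by averaging log|f'(x)| along an orbit; a positive value diagnoses sensitive dependence on initial conditions. This is a generic illustration of the concept, not the authors' cardiac-tissue computation.

      import numpy as np

      # Largest Lyapunov exponent of the logistic map x -> r*x*(1-x).
      # For r = 4 the exact value is ln 2 ~ 0.693.
      r, x = 4.0, 0.3
      for _ in range(1000):                 # discard the transient
          x = r * x * (1 - x)

      n, acc = 100_000, 0.0
      for _ in range(n):
          x = r * x * (1 - x)
          acc += np.log(abs(r * (1 - 2 * x)))   # log |f'(x)| along the orbit
      print(f"largest Lyapunov exponent ~ {acc / n:.3f}")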

  9. HPLC and chemometrics-assisted UV-spectroscopy methods for the simultaneous determination of ambroxol and doxycycline in capsule

    NASA Astrophysics Data System (ADS)

    Hadad, Ghada M.; El-Gindy, Alaa; Mahmoud, Waleed M. M.

    2008-08-01

    High-performance liquid chromatography (HPLC) and multivariate spectrophotometric methods are described for the simultaneous determination of ambroxol hydrochloride (AM) and doxycycline (DX) in combined pharmaceutical capsules. The chromatographic separation was achieved on a reversed-phase C18 analytical column with a mobile phase consisting of a mixture of 20 mM potassium dihydrogen phosphate, pH 6, and acetonitrile in a ratio of 1:1 (v/v), with UV detection at 245 nm. The resolution was also accomplished by numerical spectrophotometric methods such as classical least squares (CLS), principal component regression (PCR) and partial least squares (PLS-1) applied to the UV spectra of the mixture, and by a graphical spectrophotometric method, the first derivative of the ratio spectra (1DD). Analytical figures of merit (FOM), such as sensitivity, selectivity, analytical sensitivity, limit of quantitation and limit of detection, were determined for the CLS, PLS-1 and PCR methods. The proposed methods were validated and successfully applied to the analysis of the pharmaceutical formulation and laboratory-prepared mixtures containing the two-component combination.

  10. HPLC and chemometrics-assisted UV-spectroscopy methods for the simultaneous determination of ambroxol and doxycycline in capsule.

    PubMed

    Hadad, Ghada M; El-Gindy, Alaa; Mahmoud, Waleed M M

    2008-08-01

    High-performance liquid chromatography (HPLC) and multivariate spectrophotometric methods are described for the simultaneous determination of ambroxol hydrochloride (AM) and doxycycline (DX) in combined pharmaceutical capsules. The chromatographic separation was achieved on a reversed-phase C18 analytical column with a mobile phase consisting of a mixture of 20 mM potassium dihydrogen phosphate, pH 6, and acetonitrile in a ratio of 1:1 (v/v), with UV detection at 245 nm. The resolution was also accomplished by numerical spectrophotometric methods such as classical least squares (CLS), principal component regression (PCR) and partial least squares (PLS-1) applied to the UV spectra of the mixture, and by a graphical spectrophotometric method, the first derivative of the ratio spectra (1DD). Analytical figures of merit (FOM), such as sensitivity, selectivity, analytical sensitivity, limit of quantitation and limit of detection, were determined for the CLS, PLS-1 and PCR methods. The proposed methods were validated and successfully applied to the analysis of the pharmaceutical formulation and laboratory-prepared mixtures containing the two-component combination.
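
    As a rough illustration of the chemometric step (not the paper's data or calibration design), the Python sketch below resolves a synthetic two-component mixture from overlapping UV "spectra" with partial least squares via scikit-learn; the Gaussian band shapes, concentrations and noise level are all invented.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(0)
      wl = np.linspace(220, 320, 101)                       # wavelengths, nm
      band = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)
      s_am, s_dx = band(245, 12), band(270, 18)             # pure "spectra"

      # Calibration set: random concentration pairs, Beer's-law mixing + noise
      C = rng.uniform(0.1, 1.0, size=(30, 2))
      X = C @ np.vstack([s_am, s_dx]) + rng.normal(0, 0.002, (30, wl.size))

      pls = PLSRegression(n_components=2).fit(X, C)

      # Predict an unknown mixture containing 0.6 / 0.3 units of each analyte
      x_new = 0.6 * s_am + 0.3 * s_dx + rng.normal(0, 0.002, wl.size)
      print(pls.predict(x_new.reshape(1, -1)))   # ~ [[0.6, 0.3]]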

  11. Soil Moisture Content Estimation Based on Sentinel-1 and Auxiliary Earth Observation Products. A Hydrological Approach

    PubMed Central

    Alexakis, Dimitrios D.; Mexis, Filippos-Dimitrios K.; Vozinaki, Anthi-Eirini K.; Daliakopoulos, Ioannis N.; Tsanis, Ioannis K.

    2017-01-01

    A methodology for processing multi-temporal Sentinel-1 and Landsat 8 satellite images to estimate topsoil Soil Moisture Content (SMC) in support of hydrological simulation studies is proposed. After pre-processing the remote sensing data, the backscattering coefficient, Normalized Difference Vegetation Index (NDVI), thermal infrared temperature and incidence angle parameters are assessed for their potential to infer ground measurements of SMC, collected at the top 5 cm. A non-linear approach using Artificial Neural Networks (ANNs) is tested. The methodology is applied in Western Crete, Greece, where a SMC gauge network was deployed during 2015. The performance of the proposed algorithm is evaluated using leave-one-out cross validation and sensitivity analysis. ANNs prove the most efficient for SMC estimation, yielding R2 values between 0.7 and 0.9. The proposed methodology is used to support a hydrological simulation with the HEC-HMS model, applied at the Keramianos basin, which is ungauged for SMC. The results and model sensitivity highlight the contribution of combining Sentinel-1 SAR and Landsat 8 images for improving SMC estimates and supporting hydrological studies. PMID:28635625

  12. A flowsheet model of a well-mixed fluidized bed dryer: Applications in controllability assessment and optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langrish, T.A.G.; Harvey, A.C.

    2000-01-01

    A model of a well-mixed fluidized-bed dryer within a process flowsheeting package (SPEEDUP™) has been developed and applied to a parameter sensitivity study, a steady-state controllability analysis and an optimization study. This approach is more general, and would be more easily applied to a complex flowsheet, than one that relies on stand-alone dryer modeling packages. The simulation has shown that industrial data may be fitted to the model outputs with sensible values of the unknown parameters. For this case study, the parameter sensitivity study found that the heat loss from the dryer and the critical moisture content of the material have the greatest impact on dryer operation at the current operating point. An optimization study demonstrated the dominant effect of the heat loss from the dryer on the current operating cost and operating conditions; substantial cost savings (around 50%) could be achieved with a well-insulated and airtight dryer for the specific case studied here.
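
    A one-at-a-time parameter sensitivity study of the kind described can be expressed compactly as normalized sensitivity coefficients, S_i = (dY/Y)/(dp_i/p_i). The Python sketch below applies the idea to a toy operating-cost function; the cost expression, parameter names and values are hypothetical stand-ins, not the SPEEDUP dryer model.

      # One-at-a-time normalized sensitivity coefficients
      # S_i = (dY/Y) / (dp_i/p_i) on a toy operating-cost function.
      def operating_cost(p):
          return (120.0 * p["heat_loss"]
                  + 45.0 * p["critical_moisture"] ** 1.5
                  + 8.0 * p["air_flow"])

      base = {"heat_loss": 5.0, "critical_moisture": 0.25, "air_flow": 12.0}
      y0 = operating_cost(base)

      for name in base:
          perturbed = dict(base, **{name: base[name] * 1.01})   # +1% change
          s = ((operating_cost(perturbed) - y0) / y0) / 0.01
          print(f"S({name}) = {s:+.3f}")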

  13. Soil Moisture Content Estimation Based on Sentinel-1 and Auxiliary Earth Observation Products. A Hydrological Approach.

    PubMed

    Alexakis, Dimitrios D; Mexis, Filippos-Dimitrios K; Vozinaki, Anthi-Eirini K; Daliakopoulos, Ioannis N; Tsanis, Ioannis K

    2017-06-21

    A methodology for processing multi-temporal Sentinel-1 and Landsat 8 satellite images to estimate topsoil Soil Moisture Content (SMC) in support of hydrological simulation studies is proposed. After pre-processing the remote sensing data, the backscattering coefficient, Normalized Difference Vegetation Index (NDVI), thermal infrared temperature and incidence angle parameters are assessed for their potential to infer ground measurements of SMC, collected at the top 5 cm. A non-linear approach using Artificial Neural Networks (ANNs) is tested. The methodology is applied in Western Crete, Greece, where a SMC gauge network was deployed during 2015. The performance of the proposed algorithm is evaluated using leave-one-out cross validation and sensitivity analysis. ANNs prove the most efficient for SMC estimation, yielding R² values between 0.7 and 0.9. The proposed methodology is used to support a hydrological simulation with the HEC-HMS model, applied at the Keramianos basin, which is ungauged for SMC. The results and model sensitivity highlight the contribution of combining Sentinel-1 SAR and Landsat 8 images for improving SMC estimates and supporting hydrological studies.
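
    The estimation-and-validation loop described here (an ANN regressor scored by leave-one-out cross validation) has a compact generic form. The scikit-learn sketch below is a minimal illustration on synthetic stand-ins for backscatter, NDVI, surface temperature and incidence angle; none of the values or the network architecture come from the paper.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import LeaveOneOut
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.metrics import r2_score

      rng = np.random.default_rng(1)
      n = 60
      # Columns are invented stand-ins for backscatter (dB), NDVI,
      # surface temperature (K) and incidence angle (deg).
      X = np.column_stack([rng.normal(-10, 3, n), rng.uniform(0.1, 0.8, n),
                           rng.normal(300, 8, n), rng.uniform(30, 45, n)])
      y = (0.25 + 0.012 * X[:, 0] + 0.2 * X[:, 1] ** 2
           - 0.001 * (X[:, 2] - 300) + rng.normal(0, 0.02, n))   # "true" SMC

      model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                                         max_iter=2000, random_state=0))
      pred = np.empty(n)
      for train, test in LeaveOneOut().split(X):   # leave-one-out CV
          model.fit(X[train], y[train])
          pred[test] = model.predict(X[test])
      print("LOO R^2:", round(r2_score(y, pred), 2))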

  14. Quartz crystal microbalance (QCM) affinity biosensor for genetically modified organisms (GMOs) detection.

    PubMed

    Mannelli, Ilaria; Minunni, Maria; Tombelli, Sara; Mascini, Marco

    2003-03-01

    A DNA piezoelectric sensor has been developed for the detection of genetically modified organisms (GMOs). Single-stranded DNA (ssDNA) probes were immobilised on the sensor surface of a quartz crystal microbalance (QCM) device, and the hybridisation between the immobilised probe and the target complementary sequence in solution was monitored. The probe sequences were internal to the sequences of the 35S promoter (P) and Nos terminator (T), which are inserted into the genome of GMOs to regulate transgene expression. Two different probe immobilisation procedures were applied: (a) a thiol-dextran procedure and (b) a thiol-derivatised probe and blocking thiol procedure. The system was optimised using synthetic oligonucleotides and then applied to samples of plasmidic and genomic DNA isolated from the pBI121 plasmid, certified reference materials (CRMs), and real samples amplified by the polymerase chain reaction (PCR). The analytical parameters of the sensor (sensitivity, reproducibility, lifetime, etc.) have been investigated. The results showed that both immobilisation procedures enabled sensitive and specific detection of GMOs, providing a useful tool for screening analysis of food samples.

  15. Biodynamic imaging of therapeutic efficacy for canine B-cell lymphoma: preclinical trial results

    NASA Astrophysics Data System (ADS)

    Choi, H.; Turek, J.; Li, Z.; Childress, M.; Nolte, D.

    2018-02-01

    Biodynamic imaging uses coherence-gated dynamic light scattering to create three-dimensional maps of intracellular dynamics in living tissue biopsies. The technique is sensitive to changes in intracellular dynamics that depend on the mechanism of action (MoA) of therapeutics applied in vitro to the living samples. A preclinical trial using biodynamic imaging to assess the chemotherapeutic response of dogs with B-cell lymphoma to the doxorubicin-based therapy CHOP has been completed. The trial enrolled 19 canine patients presenting with non-Hodgkin's B-cell lymphoma. Biopsies were acquired through surgery or through needle cores. The time-varying power spectra of light scattered after drugs are applied ex vivo to the biopsies provide biodynamic biomarkers that are used in machine-learning algorithms to predict each patient's clinical outcome. Two distinct phenotypes emerged from the analysis that correlate with patient drug resistance or sensitivity. In cross validation, the algorithms predicted which dogs would respond to treatment with an accuracy of 90%. Biodynamic imaging has the potential to help select chemotherapy for personalized cancer care.

  16. An integrated framework for multipollutant air quality management and its application in Georgia.

    PubMed

    Cohan, Daniel S; Boylan, James W; Marmur, Amit; Khan, Maudood N

    2007-10-01

    Air protection agencies in the United States increasingly confront non-attainment of air quality standards for multiple pollutants sharing interrelated emission origins. Traditional approaches to attainment planning face important limitations that are magnified in the multipollutant context. Recognizing those limitations, the Georgia Environmental Protection Division has adopted an integrated framework to address ozone, fine particulate matter, and regional haze in the state. Rather than applying atmospheric modeling merely as a final check of an overall strategy, photochemical sensitivity analysis is conducted upfront to compare the effectiveness of controlling various precursor emission species and source regions. Emerging software enables the modeling of health benefits and associated economic valuations resulting from air pollution control. Photochemical sensitivity and health benefits analyses, applied together with traditional cost and feasibility assessments, provide a more comprehensive characterization of the implications of various control options. The fuller characterization both informs the selection of control options and facilitates the communication of impacts to affected stakeholders and the public. Although the integrated framework represents a clear improvement over previous attainment-planning efforts, key remaining shortcomings are also discussed.

  17. Exhaled Aerosol Pattern Discloses Lung Structural Abnormality: A Sensitivity Study Using Computational Modeling and Fractal Analysis

    PubMed Central

    Xi, Jinxiang; Si, Xiuhua A.; Kim, JongWon; Mckee, Edward; Lin, En-Bing

    2014-01-01

    Background: Exhaled aerosol patterns, also called aerosol fingerprints, provide clues to the health of the lung and can be used to detect disease-modified airway structures. The key is how to decode the exhaled aerosol fingerprints and retrieve the lung structural information for a non-invasive identification of respiratory diseases. Objective and Methods: In this study, a CFD-fractal analysis method was developed to quantify exhaled aerosol fingerprints and applied to one benign and three malign conditions: a tracheal carina tumor, a bronchial tumor, and asthma. Respirations of tracer aerosols of 1 µm at a flow rate of 30 L/min were simulated, with exhaled distributions recorded at the mouth. Large eddy simulations and a Lagrangian tracking approach were used to simulate respiratory airflows and aerosol dynamics. Aerosol morphometric measures such as concentration disparity, spatial distributions, and fractal analysis were applied to distinguish the various exhaled aerosol patterns. Findings: Utilizing physiology-based modeling, we demonstrated substantial differences in exhaled aerosol distributions among normal and pathological airways, which were suggestive of the disease location and extent. With fractal analysis, we also demonstrated that exhaled aerosol patterns exhibited fractal behavior in both the entire image and selected regions of interest. Each exhaled aerosol fingerprint exhibited distinct pattern parameters such as spatial probability, fractal dimension, lacunarity, and multifractal spectrum. Furthermore, a correlation between the diseased location and the exhaled aerosol spatial distribution was established for asthma. Conclusion: Aerosol-fingerprint-based breath tests disclose clues about the site and severity of lung diseases and appear to be sensitive enough to be a practical tool for diagnosis and prognosis of respiratory diseases with structural abnormalities. PMID:25105680
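
    The fractal dimension used to characterize such fingerprints is commonly estimated by box counting: cover the binary deposition image with boxes of shrinking side k and regress log N(k) on log(1/k). The Python sketch below illustrates the estimator on a synthetic Sierpinski-triangle mask (true dimension log 3 / log 2, about 1.585); it is a generic illustration, not the study's multifractal pipeline.

      import numpy as np

      def box_counting_dimension(img):
          """Fractal dimension of a square binary image (power-of-two side)."""
          size = img.shape[0]
          ks, counts = [], []
          k = size
          while k >= 2:
              # occupied boxes of side k
              boxes = img.reshape(size // k, k, size // k, k).any(axis=(1, 3))
              ks.append(k)
              counts.append(boxes.sum())
              k //= 2
          slope, _ = np.polyfit(np.log(1.0 / np.array(ks)), np.log(counts), 1)
          return slope

      # Synthetic stand-in for a binarized deposition image: a Sierpinski
      # triangle, whose true dimension is log 3 / log 2 ~ 1.585.
      n = 256
      yy, xx = np.indices((n, n))
      mask = (yy & xx) == 0
      print(round(box_counting_dimension(mask), 3))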

  18. Genome-wide comparison of paired fresh frozen and formalin-fixed paraffin-embedded gliomas by custom BAC and oligonucleotide array comparative genomic hybridization: facilitating analysis of archival gliomas

    PubMed Central

    Mohapatra, Gayatry; Engler, David A.; Starbuck, Kristen D.; Kim, James C.; Bernay, Derek C.; Scangas, George A.; Rousseau, Audrey; Batchelor, Tracy T.; Betensky, Rebecca A.; Louis, David N.

    2010-01-01

    Molecular genetic analysis of cancer is rapidly evolving as a result of improvement in genomic technologies and the growing applicability of such analyses to clinical oncology. Array based comparative genomic hybridization (aCGH) is a powerful tool for detecting DNA copy number alterations (CNA), particularly in solid tumors, and has been applied to the study of malignant gliomas. In the clinical setting, however, gliomas are often sampled by small biopsies and thus formalin-fixed paraffin-embedded (FFPE) blocks are often the only tissue available for genetic analysis, especially for rare types of gliomas. Moreover, the biological basis for the marked intratumoral heterogeneity in gliomas is most readily addressed in FFPE material. Therefore, for gliomas, the ability to use DNA from FFPE tissue is essential for both clinical and research applications. In this study, we have constructed a custom bacterial artificial chromosome (BAC) array and show excellent sensitivity and specificity for detecting CNAs in a panel of paired frozen and FFPE glioma samples. Our study demonstrates a high concordance rate between CNAs detected in FFPE compared to frozen DNA. We have also developed a method of labeling DNA from FFPE tissue that allows efficient hybridization to oligonucleotide arrays. This labeling technique was applied to a panel of biphasic anaplastic oligoastrocytomas (AOA) to identify genetic changes unique to each component. Together, results from these studies suggest that BAC and oligonucleotide aCGH are sensitive tools for detecting CNAs in FFPE DNA, and can enable genome-wide analysis of rare, small and/or histologically heterogeneous gliomas. PMID:21080181

  19. Analysis of group-velocity dispersion of high-frequency Rayleigh waves for near-surface applications

    USGS Publications Warehouse

    Luo, Y.; Xia, J.; Xu, Y.; Zeng, C.

    2011-01-01

    The Multichannel Analysis of Surface Waves (MASW) method is an efficient tool for obtaining the vertical shear (S)-wave velocity profile from the dispersive characteristics of Rayleigh waves. Most MASW researchers apply Rayleigh-wave phase-velocity dispersion for S-wave velocity estimation, with a few exceptions applying Rayleigh-wave group-velocity dispersion. Herein, we first compare the sensitivities of fundamental-mode surface-wave phase velocities and group velocities using three four-layer models that include a low-velocity layer or a high-velocity layer. Synthetic data are then simulated by a finite-difference method, and images of the group-velocity dispersive energy of the synthetic data are generated using the Multiple Filter Analysis (MFA) method. Finally, we invert a high-frequency surface-wave group-velocity dispersion curve from a real-world example. The results demonstrate that (1) the sensitivities of group velocities are higher than those of phase velocities, and their usable frequency ranges are wider, which is very helpful in improving inversion stability, because in a stable inversion system small changes in phase velocities do not produce large fluctuations in the inverted S-wave velocities; (2) group-velocity dispersive energy can be measured from single-trace data if Rayleigh-wave fundamental-mode energy is dominant, which suggests that the number of shots required in data acquisition can be dramatically reduced and the horizontal resolution greatly improved using analysis of group-velocity dispersion; and (3) the suspension-logging results of the real-world example demonstrate that inversion of group velocities generated by the MFA method can successfully estimate near-surface S-wave velocities. © 2011 Elsevier B.V.
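
    Multiple Filter Analysis itself is simple to sketch: band-pass the trace around a set of center frequencies, take the Hilbert envelope, and read the group velocity at each frequency from the envelope peak time over a known offset. The Python sketch below applies this to a synthetic dispersive arrival with an invented linear dispersion law; it illustrates the MFA idea, not the paper's processing.

      import numpy as np
      from scipy.signal import hilbert

      fs, dist = 1000.0, 100.0                  # sample rate (Hz), offset (m)
      t = np.arange(0, 2.0, 1 / fs)

      # Synthetic dispersive arrival with an invented group-velocity law U(f)
      U = lambda f: 120.0 + 2.0 * f             # m/s
      trace = sum(np.cos(2 * np.pi * f * (t - dist / U(f)))
                  * np.exp(-((t - dist / U(f)) / 0.15) ** 2)
                  for f in np.arange(10.0, 60.0, 2.0))

      # MFA: narrowband Gaussian filter, Hilbert envelope, peak = group arrival
      spec = np.fft.rfft(trace)
      freqs = np.fft.rfftfreq(t.size, 1 / fs)
      for fc in (15.0, 25.0, 35.0, 45.0):
          g = np.exp(-((freqs - fc) / (0.1 * fc)) ** 2)
          env = np.abs(hilbert(np.fft.irfft(spec * g, t.size)))
          print(f"fc={fc:4.0f} Hz  U={dist / t[np.argmax(env)]:6.1f} m/s"
                f"  (true {U(fc):5.1f})")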

  20. Neural network modeling and prediction of resistivity structures using VES Schlumberger data over a geothermal area

    NASA Astrophysics Data System (ADS)

    Singh, Upendra K.; Tiwari, R. K.; Singh, S. B.

    2013-03-01

    This paper presents the effects of several parameters on artificial neural network (ANN) inversion of vertical electrical sounding (VES) data. The sensitivity of the ANN parameters was examined through the performance of adaptive backpropagation (ABP) and the Levenberg-Marquardt algorithm (LMA), to test their robustness to noisy synthetic as well as field geophysical data and their capability to resolve subsurface resistivity layers. We trained, tested and validated the ANN using synthetic VES data as network input and the layer parameters of the models as network output. The ANN learning parameters were varied and the corresponding observations recorded. The sensitivity analysis of synthetic data and a real model demonstrates that ANN algorithms applied to VES data inversion should be assessed not only in terms of accuracy but also in terms of computational effort. The analysis also suggests that the ANN model and its various controlling parameters are largely data dependent, and hence no unique architecture can be designed for VES data analysis. The ANN-based methods were also applied to actual VES field data from the tectonically active geothermal areas of Jammu and Kashmir, India. The analysis suggests that both ABP and LMA are suitable for 1-D VES modeling, but LMA provides a greater degree of robustness than ABP for 2-D VES modeling. The inversion results correlate well with the known lithology and also reveal an additional significant feature, a reconsolidated breccia about 7.0 m thick beneath the overburden, in some cases, such as at sounding point RDC-5. We therefore conclude that ANN-based methods are significantly faster and more efficient for detecting complex layered resistivity structures with a relatively high degree of precision and resolution.

  1. Rapid and sensitive analysis of 27 underivatized free amino acids, dipeptides, and tripeptides in fruits of Siraitia grosvenorii Swingle using HILIC-UHPLC-QTRAP®/MS² combined with chemometrics methods.

    PubMed

    Zhou, Guisheng; Wang, Mengyue; Li, Yang; Peng, Ying; Li, Xiaobo

    2015-08-01

    In the present study, a new strategy based on chemical analysis and chemometrics methods was proposed for the comprehensive analysis and profiling of underivatized free amino acids (FAAs) and small peptides among various Luo-Han-Guo (LHG) samples. Firstly, the ultrasound-assisted extraction (UAE) parameters were optimized using Plackett-Burman (PB) screening and Box-Behnken designs (BBD), and the following optimal UAE conditions were obtained: ultrasound power of 280 W, extraction time of 43 min, and a solid-liquid ratio of 302 mL/g. Secondly, a rapid and sensitive analytical method was developed for the simultaneous quantification of 24 FAAs and 3 active small peptides in LHG at trace levels using hydrophilic interaction ultra-performance liquid chromatography coupled with triple-quadrupole linear ion-trap tandem mass spectrometry (HILIC-UHPLC-QTRAP®/MS²). The analytical method was validated in terms of matrix effects, linearity, LODs, LOQs, precision, repeatability, stability, and recovery. Thirdly, the proposed optimal UAE conditions and analytical method were applied to the measurement of LHG samples, which showed that LHG is rich in essential amino acids, beneficial nutrients for human health. Finally, based on the contents of the 27 analytes, the chemometrics methods of unsupervised principal component analysis (PCA) and supervised counter-propagation artificial neural network (CP-ANN) were applied to differentiate and classify 40 batches of LHG samples from different cultivated forms, regions, and varieties. The samples mainly clustered into three groups, illustrating the cultivation disparity among them. In summary, the presented strategy has potential for the investigation of edible plants and agricultural products containing FAAs and small peptides.

  2. Sensitivity and Uncertainty Analysis for Streamflow Prediction Using Different Objective Functions and Optimization Algorithms: San Joaquin California

    NASA Astrophysics Data System (ADS)

    Paul, M.; Negahban-Azar, M.

    2017-12-01

    Hydrologic models usually need to be calibrated against observed streamflow at the outlet of a particular drainage area through careful model calibration. However, a large number of parameters must be fitted in the model because field measurements of them are unavailable, so it is difficult to calibrate the model over so many potentially uncertain parameters. This becomes even more challenging if the model covers a large watershed with multiple land uses and varied geophysical characteristics. Sensitivity analysis (SA) can be used as a tool to identify the most sensitive model parameters, which affect calibrated model performance. Many different calibration and uncertainty analysis algorithms can be run with different objective functions. By incorporating sensitive parameters in streamflow simulation, the effect of a suitable algorithm on model performance can be demonstrated with Soil and Water Assessment Tool (SWAT) modeling. In this study, SWAT was applied to calibrate daily streamflow in the San Joaquin Watershed in California, covering 19,704 km2. Recently, severe water stress has been escalating in this watershed due to intensified climate variability, prolonged drought, and groundwater depletion for agricultural irrigation. Given the uncertainties inherent in hydrologic modeling, it is therefore important to perform a proper uncertainty analysis when predicting the spatial and temporal variation of hydrologic processes and evaluating the impacts of different hydrologic variables. The purpose of this study was to evaluate the sensitivity and uncertainty of the calibrated parameters for predicting streamflow. To evaluate the sensitivity of the calibrated parameters, three different optimization algorithms (Sequential Uncertainty Fitting, SUFI-2; Generalized Likelihood Uncertainty Estimation, GLUE; and Parameter Solution, ParaSol) were used with four different objective functions (coefficient of determination, r2; Nash-Sutcliffe efficiency, NSE; percent bias, PBIAS; and Kling-Gupta efficiency, KGE). Preliminary results showed that the SUFI-2 algorithm with the NSE and KGE objective functions significantly improved the calibration (e.g., r2 and NSE of 0.52 and 0.47, respectively, for daily streamflow calibration).
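
    The four objective functions named here have short closed forms, shown in the Python sketch below on synthetic daily flows. Sign conventions vary between tools; the PBIAS form below follows the common SWAT convention, and the KGE is the 2009 formulation.

      import numpy as np

      def nse(obs, sim):       # Nash-Sutcliffe efficiency
          return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def pbias(obs, sim):     # percent bias (SWAT sign convention)
          return 100 * np.sum(obs - sim) / np.sum(obs)

      def kge(obs, sim):       # Kling-Gupta efficiency, 2009 formulation
          r = np.corrcoef(obs, sim)[0, 1]
          alpha = sim.std() / obs.std()
          beta = sim.mean() / obs.mean()
          return 1 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

      rng = np.random.default_rng(2)
      obs = rng.gamma(2.0, 10.0, 365)                  # synthetic daily flow
      sim = obs * 0.9 + rng.normal(0, 2.0, 365)        # a biased, noisy "model"

      r2 = np.corrcoef(obs, sim)[0, 1] ** 2
      print(f"r2={r2:.2f} NSE={nse(obs, sim):.2f} "
            f"PBIAS={pbias(obs, sim):.1f}% KGE={kge(obs, sim):.2f}")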

  3. Lung nodule detection by microdose CT versus chest radiography (standard and dual-energy subtracted).

    PubMed

    Ebner, Lukas; Bütikofer, Yanik; Ott, Daniel; Huber, Adrian; Landau, Julia; Roos, Justus E; Heverhagen, Johannes T; Christe, Andreas

    2015-04-01

    The purpose of this study was to investigate the feasibility of microdose CT, at a dose comparable to conventional chest radiographs in two planes including dual-energy subtraction, for lung nodule assessment. We investigated 65 chest phantoms with 141 lung nodules, using an anthropomorphic chest phantom with artificial lung nodules. Microdose CT parameters were 80 kV and 6 mAs, with a pitch of 2.2. Iterative reconstruction algorithms and an integrated circuit detector system (Stellar, Siemens Healthcare) were applied for maximum dose reduction. Maximum intensity projections (MIPs) were reconstructed. Chest radiographs were acquired in two projections with bone suppression. Four blinded radiologists interpreted the images in random order. A soft-tissue CT kernel (I30f) delivered better sensitivities in a pilot study than a hard kernel (I70f), with respective mean (SD) sensitivities of 91.1%±2.2% versus 85.6%±5.6% (p=0.041). Nodule size was measured accurately for all kernels. Mean clustered nodule sensitivity with chest radiography was 45.7%±8.1% (with bone suppression, 46.1%±8%; p=0.94); for microdose CT, nodule sensitivity was 83.6%±9% without MIP (with additional MIP, 92.5%±6%; p < 10^-3). Individual sensitivities of microdose CT for readers 1, 2, 3, and 4 were 84.3%, 90.7%, 68.6%, and 45.0%, respectively. Sensitivities with chest radiography for readers 1, 2, 3, and 4 were 42.9%, 58.6%, 36.4%, and 90.7%, respectively. In the per-phantom analysis, the respective sensitivities of microdose CT versus chest radiography were 96.2% and 75% (p < 10^-6). The effective dose for chest radiography including dual-energy subtraction was 0.242 mSv; for microdose CT, the applied dose was 0.1323 mSv. Microdose CT is better than the combination of chest radiography and dual-energy subtraction for the detection of solid nodules between 5 and 12 mm at a lower dose level of 0.13 mSv. Soft-tissue kernels allow better sensitivities. These preliminary results indicate that microdose CT has the potential to replace conventional chest radiography for lung nodule detection.

  4. Highly sensitive catalytic spectrophotometric determination of ruthenium

    NASA Astrophysics Data System (ADS)

    Naik, Radhey M.; Srivastava, Abhishek; Prasad, Surendra

    2008-01-01

    A new and highly sensitive catalytic kinetic method (CKM) for the determination of ruthenium(III) has been established, based on its catalytic effect on the oxidation of L-phenylalanine (L-Pheala) by KMnO4 in highly alkaline medium. The reaction was followed spectrophotometrically by measuring the decrease in absorbance at 526 nm. The proposed CKM is based on a fixed-time procedure under optimum reaction conditions, and relies on the linear relationship between the change in absorbance (ΔAt) and the added Ru(III) amount in the range 0.101-2.526 ng/mL. Under the optimum conditions, the sensitivity of the proposed method, i.e. the limit of detection corresponding to 5 min, is 0.08 ng/mL, and it decreases with increasing time of analysis. The method features good accuracy and reproducibility for ruthenium(III) determination. Ruthenium(III) has also been determined in the presence of several interfering and non-interfering cations, anions and polyaminocarboxylates; no foreign ion interfered in the determination of ruthenium(III) up to a 20-fold excess. In addition to the analysis of standard solutions, the method was successfully applied to the quantitative determination of ruthenium(III) in drinking water samples. The method is highly sensitive, selective and very stable. A review of recently published catalytic spectrophotometric methods for the determination of ruthenium(III) is also presented for comparison.
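
    A fixed-time method of this kind reduces to a linear calibration of ΔA against added analyte, from which an IUPAC-style detection limit (3.3·s/slope) follows. The Python sketch below runs that arithmetic on invented calibration points; the numbers are illustrative, not the paper's data.

      import numpy as np

      # Invented fixed-time calibration: absorbance change after 5 min
      # versus added Ru(III) in ng/mL (not the paper's data).
      conc = np.array([0.1, 0.5, 1.0, 1.5, 2.0, 2.5])
      dA = np.array([0.021, 0.098, 0.195, 0.297, 0.391, 0.502])

      slope, intercept = np.polyfit(conc, dA, 1)
      resid = dA - (slope * conc + intercept)
      s_y = np.sqrt(np.sum(resid ** 2) / (conc.size - 2))  # residual std. error

      lod = 3.3 * s_y / slope     # IUPAC-style limits
      loq = 10.0 * s_y / slope
      print(f"LOD = {lod:.3f} ng/mL, LOQ = {loq:.3f} ng/mL")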

  5. Determination of protection zones for Dutch groundwater wells against virus contamination--uncertainty and sensitivity analysis.

    PubMed

    Schijven, J F; Mülschlegel, J H C; Hassanizadeh, S M; Teunis, P F M; de Roda Husman, A M

    2006-09-01

    Protection zones of shallow unconfined aquifers in The Netherlands were calculated to protect against virus contamination to the level that an infection risk of 10^-4 per person per year is not exceeded with 95% certainty. An uncertainty and a sensitivity analysis of the calculated protection zones were included. It was concluded that protection zones of 1 to 2 years' travel time (206-418 m) are needed, 6 to 12 times the currently applied travel time of 60 days. This will lead to enlargement of protection zones, encompassing 110 unconfined groundwater well systems that produce 3 × 10^8 m3/y of drinking water (38% of total Dutch production from groundwater). A smaller protection zone is possible if it can be shown that an aquifer has properties that lead to greater reduction of virus contamination, such as more attachment. Deeper aquifers beneath aquitards of at least 2 years of vertical travel time are adequately protected, because vertical flow in the aquitards is only 0.7 m per year. The most sensitive parameters are virus attachment and inactivation. The next most sensitive parameters are the grain size of the sand, the abstraction rate of groundwater, virus concentrations in raw sewage, and consumption of unboiled drinking water. Research is recommended on additional protection by attachment and under unsaturated conditions.

  6. First status report on regional ground-water flow modeling for the Paradox Basin, Utah

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, R.W.

    1984-05-01

    Regional ground-water flow within the principal hydrogeologic units of the Paradox Basin is evaluated by developing a conceptual model of the flow regime in the shallow aquifers and the deep-basin brine aquifers and testing these models using a three-dimensional, finite-difference flow code. A semiquantitative sensitivity analysis (a limited parametric study) is conducted to define the system response to changes in hydrologic properties or boundary conditions. A direct method for sensitivity analysis using an adjoint form of the flow equation is applied to the conceptualized flow regime in the Leadville limestone aquifer. All steps leading to the final results and conclusions are incorporated in this report, and the available data utilized in this study are summarized. The specific conceptual models, defining the areal and vertical averaging of lithologic units, aquifer properties, fluid properties, and hydrologic boundary conditions, are described in detail. Two models were evaluated in this study: a regional model encompassing the hydrogeologic units above and below the Paradox Formation/Hermosa Group, and a refined-scale model incorporating only the post-Paradox strata. The results are delineated by the simulated potentiometric surfaces and by tables summarizing areal and vertical boundary fluxes, Darcy velocities at specific points, and ground-water travel paths. Results from the adjoint sensitivity analysis include importance functions and sensitivity coefficients, using heads or average Darcy velocities to represent the system response. The reported work is the first stage of an ongoing evaluation of the Gibson Dome area within the Paradox Basin as a potential repository for high-level radioactive wastes.

  7. A sensitive and rapid assay for 4-aminophenol in paracetamol drug and tablet formulation, by flow injection analysis with spectrophotometric detection.

    PubMed

    Bloomfield, M S

    2002-12-06

    4-Aminophenol (4AP) is the primary degradation product of paracetamol, which is limited to a low level (50 ppm or 0.005% w/w) in the drug substance by the European, United States, British and German Pharmacopoeias, employing a manual colourimetric limit test. The 4AP limit is widened to 1000 ppm or 0.1% w/w in the tablet product monographs, which quote the use of a less sensitive automated HPLC method. The lower drug-substance specification limit is applied to our products (50 ppm, equivalent to 25 µg 4AP in a tablet containing 500 mg paracetamol), and the pharmacopoeial HPLC assay was not suitable at this low level due to matrix interference. For routine analysis a rapid, automated assay was required. This paper presents a highly sensitive, precise and automated method employing the technique of Flow Injection (FI) analysis to quantitatively assay low levels of this degradant. A solution of the drug substance, or an extract of the tablets, containing 4AP and paracetamol is injected into a solvent carrier stream and merged on-line with alkaline sodium nitroprusside reagent, to form a specific blue derivative which is detected spectrophotometrically at 710 nm. Standard HPLC equipment is used throughout. The procedure is fully quantitative and has been optimised for sensitivity and robustness using a multivariate experimental design (multi-level 'Central Composite' response surface) model. The method has been fully validated and is linear down to 0.01 µg/mL. The approach should be applicable to a range of paracetamol products.

  8. A sensitivity analysis of process design parameters, commodity prices and robustness on the economics of odour abatement technologies.

    PubMed

    Estrada, José M; Kraakman, N J R Bart; Lebrero, Raquel; Muñoz, Raúl

    2012-01-01

    The sensitivity of the economics of the five most commonly applied odour abatement technologies (biofiltration, biotrickling filtration, activated carbon adsorption, chemical scrubbing and a hybrid technology consisting of a biotrickling filter coupled with carbon adsorption) towards design parameters and commodity prices was evaluated. In addition, the influence of geographical location on the Net Present Value calculated over a 20-year lifespan (NPV20) of each technology, and its robustness towards typical process fluctuations and operational upsets, were also assessed. This comparative analysis showed that biological techniques present lower operating costs (up to 6 times lower) and lower sensitivity than their physical/chemical counterparts, with the packing material being the key parameter affecting their operating costs (40-50% of total operating costs). The use of recycled or partially treated water (e.g. secondary effluent in wastewater treatment plants) offers an opportunity to significantly reduce costs in biological techniques. Physical/chemical technologies present a high sensitivity towards H2S concentration, which is an important drawback given the fluctuating nature of malodorous emissions. The geographical analysis evidenced large NPV20 variations around the world for all the technologies evaluated, but despite the differences in wage and price levels, biofiltration and biotrickling filtration are always the most cost-efficient alternatives (NPV20). When robustness is as relevant as overall cost (NPV20) in an economic evaluation, the hybrid technology moves up alongside biotrickling filtration as the most preferred technology. Copyright © 2012 Elsevier Inc. All rights reserved.
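
    The NPV20 criterion used throughout is the standard discounted-cost sum. The Python sketch below computes it for two hypothetical options; all capital costs, operating costs and the discount rate are invented for illustration, not the paper's estimates.

      # 20-year net present value of total cost for two hypothetical options.
      def npv20_cost(capex, annual_opex, rate=0.05, years=20):
          return capex + sum(annual_opex / (1 + rate) ** t
                             for t in range(1, years + 1))

      options = {
          "biotrickling filter": npv20_cost(capex=250_000, annual_opex=20_000),
          "chemical scrubber": npv20_cost(capex=180_000, annual_opex=55_000),
      }
      for name, cost in options.items():
          print(f"{name:20s} NPV20 = {cost:,.0f}")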

  9. Diagnosis of human malignancies using laser-induced breakdown spectroscopy in combination with chemometric methods

    NASA Astrophysics Data System (ADS)

    Chen, Xue; Li, Xiaohui; Yu, Xin; Chen, Deying; Liu, Aichun

    2018-01-01

    Diagnosis of malignancies is a challenging clinical issue. In this work, we present quick and robust diagnosis and discrimination of lymphoma and multiple myeloma (MM) using laser-induced breakdown spectroscopy (LIBS) conducted on human serum samples, in combination with chemometric methods. The serum samples collected from lymphoma and MM cancer patients and healthy controls were deposited on filter papers and ablated with a pulsed 1064 nm Nd:YAG laser. 24 atomic lines of Ca, Na, K, H, O, and N were selected for malignancy diagnosis. Principal component analysis (PCA), linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), and k nearest neighbors (kNN) classification were applied to build the malignancy diagnosis and discrimination models. The performances of the models were evaluated using 10-fold cross validation. The discrimination accuracy, confusion matrix and receiver operating characteristic (ROC) curves were obtained. The values of area under the ROC curve (AUC), sensitivity and specificity at the cut-points were determined. The kNN model exhibits the best performances with overall discrimination accuracy of 96.0%. Distinct discrimination between malignancies and healthy controls has been achieved with AUC, sensitivity and specificity for healthy controls all approaching 1. For lymphoma, the best discrimination performance values are AUC = 0.990, sensitivity = 0.970 and specificity = 0.956. For MM, the corresponding values are AUC = 0.986, sensitivity = 0.892 and specificity = 0.994. The results show that the serum-LIBS technique can serve as a quick, less invasive and robust method for diagnosis and discrimination of human malignancies.
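
    The evaluation pipeline described (standardized features, a kNN classifier, 10-fold cross validation, ROC/AUC) has a compact generic form. The scikit-learn sketch below runs it on invented stand-ins for the 24 LIBS line intensities; none of the data or tuning comes from the study.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.model_selection import StratifiedKFold, cross_val_predict
      from sklearn.metrics import accuracy_score, roc_auc_score

      rng = np.random.default_rng(3)
      # Invented stand-ins for 24 LIBS line intensities, two classes
      X = np.vstack([rng.normal(0.0, 1.0, (80, 24)),
                     rng.normal(0.6, 1.0, (80, 24))])
      y = np.array([0] * 80 + [1] * 80)    # 0 = control, 1 = malignancy

      clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
      cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
      proba = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]

      print("accuracy:", round(accuracy_score(y, proba > 0.5), 3))
      print("AUC:", round(roc_auc_score(y, proba), 3))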

  10. Diagnosis of tuberculosis pleurisy with adenosine deaminase (ADA): a systematic review and meta-analysis.

    PubMed

    Gui, Xuwei; Xiao, Heping

    2014-01-01

    This systematic review and meta-analysis was performed to determine the accuracy and usefulness of adenosine deaminase (ADA) in the diagnosis of tuberculosis pleurisy. The Medline, Google Scholar and Web of Science databases were searched to identify related studies up to 2014. Two reviewers independently assessed the quality of the included studies according to the standard Quality Assessment of Diagnostic Accuracy Studies (QUADAS) criteria. The sensitivity, specificity, diagnostic odds ratio and other parameters of ADA in the diagnosis of tuberculosis pleurisy were analyzed with Meta-DiSc 1.4 software and pooled using the random-effects model. Twelve studies, including 865 tuberculosis pleurisy patients and 1379 non-tuberculosis pleurisy subjects, were identified from 110 studies for this meta-analysis. The pooled sensitivity, specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR) and diagnostic odds ratio (DOR) of ADA in the diagnosis of tuberculosis pleurisy were 0.86 (95% CI 0.84-0.88), 0.88 (95% CI 0.86-0.90), 6.32 (95% CI 4.83-8.26), 0.15 (95% CI 0.11-0.22) and 45.25 (95% CI 27.63-74.08), respectively. The area under the summary receiver operating characteristic (SROC) curve was 0.9340. Our results demonstrate that the sensitivity and specificity of ADA are high in the diagnosis of tuberculosis pleurisy, especially when ADA ≥ 50 U/L; thus, ADA is a relatively sensitive and specific marker for tuberculosis pleurisy diagnosis. However, these results should be applied with caution owing to the heterogeneity in the designs of the included studies. Further studies are required to confirm the optimal cut-off value of ADA.
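
    The pooled indices above are built from per-study 2×2 tables. The Python sketch below computes sensitivity, specificity, PLR, NLR and DOR for one hypothetical table (counts invented); a meta-analysis then pools such values across studies under a random-effects model.

      # Diagnostic indices from one hypothetical 2x2 table.
      TP, FP, FN, TN = 86, 17, 14, 121

      sens = TP / (TP + FN)
      spec = TN / (TN + FP)
      plr = sens / (1 - spec)       # positive likelihood ratio
      nlr = (1 - sens) / spec       # negative likelihood ratio
      dor = plr / nlr               # diagnostic odds ratio = (TP*TN)/(FP*FN)

      print(f"sens={sens:.2f} spec={spec:.2f} "
            f"PLR={plr:.2f} NLR={nlr:.2f} DOR={dor:.1f}")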

  11. Rapid and sensitive analysis of phthalate metabolites, bisphenol A, and endogenous steroid hormones in human urine by mixed-mode solid-phase extraction, dansylation, and ultra-performance liquid chromatography coupled with triple quadrupole mass spectrometry.

    PubMed

    Wang, He-xing; Wang, Bin; Zhou, Ying; Jiang, Qing-wu

    2013-05-01

    Steroid hormone levels in human urine are convenient and sensitive indicators of the impact of phthalate and/or bisphenol A (BPA) exposure on the human steroid hormone endocrine system. In this study, a rapid and sensitive method for the determination of 14 phthalate metabolites, BPA, and ten endogenous steroid hormones in urine was developed and validated on the basis of ultra-performance liquid chromatography coupled with electrospray ionization triple quadrupole mass spectrometry. The optimized mixed-mode solid-phase extraction separated the weakly acidic or neutral BPA and steroid hormones from the acidic phthalate metabolites in urine: the former were determined in positive ion mode with a methanol/water mobile phase containing 10 mM ammonium formate; the latter were determined in negative ion mode with an acetonitrile/water mobile phase containing 0.1% acetic acid, which significantly alleviated matrix effects in the analysis of BPA and steroid hormones. Dansylation of the estrogens and BPA enabled simultaneous and sensitive analysis of the endogenous steroid hormones and BPA in a single chromatographic run. The limits of detection were less than 0.84 ng/mL for the phthalate metabolites and less than 0.22 ng/mL for the endogenous steroid hormones and BPA. The proposed method had satisfactory precision and accuracy, and was successfully applied to the analysis of human urine samples. This method could be valuable when investigating the associations among endocrine-disrupting chemicals, endogenous steroid hormones, and relevant adverse outcomes in epidemiological studies.

  12. Sensitivity of emergent sociohydrologic dynamics to internal system properties and external sociopolitical factors: Implications for water management

    NASA Astrophysics Data System (ADS)

    Elshafei, Y.; Tonts, M.; Sivapalan, M.; Hipsey, M. R.

    2016-06-01

    It is increasingly acknowledged that effective management of water resources requires a holistic understanding of the coevolving dynamics inherent in the coupled human-hydrology system. One of the fundamental information gaps concerns the sensitivity of coupled system feedbacks to various endogenous system properties and exogenous societal contexts. This paper takes a previously calibrated sociohydrology model and applies an idealized implementation, in order to: (i) explore the sensitivity of emergent dynamics resulting from bidirectional feedbacks to assumptions regarding (a) internal system properties that control the internal dynamics of the coupled system and (b) the external sociopolitical context; and (ii) interpret the results within the context of water resource management decision making. The analysis investigates feedback behavior in three ways, (a) via a global sensitivity analysis on key parameters and assessment of relevant model outputs, (b) through a comparative analysis based on hypothetical placement of the catchment along various points on the international sociopolitical gradient, and (c) by assessing the effects of various direct management intervention scenarios. Results indicate the presence of optimum windows that might offer the greatest positive impact per unit of management effort. Results further advocate management tools that encourage an adaptive learning, community-based approach with respect to water management, which are found to enhance centralized policy measures. This paper demonstrates that it is possible to use a place-based sociohydrology model to make abstractions as to the dynamics of bidirectional feedback behavior, and provide insights as to the efficacy of water management tools under different circumstances.

  13. New sensitive high-performance liquid chromatography-tandem mass spectrometry method for the detection of horse and pork in halal beef.

    PubMed

    von Bargen, Christoph; Dojahn, Jörg; Waidelich, Dietmar; Humpf, Hans-Ulrich; Brockmeyer, Jens

    2013-12-11

    The accidental or fraudulent blending of meat from different species is a highly relevant aspect of food product quality control, especially for consumers with ethical concerns about particular species, such as horse or pork. In this study, we present a sensitive mass spectrometric approach for the detection of trace contamination with horse meat and pork, and demonstrate the specificity of the identified biomarker peptides against chicken, lamb, and beef. Biomarker peptides were identified by a shotgun proteomic approach using tryptic digests of protein extracts and were verified by the analysis of 21 different meat samples from the 5 species included in this study. For the most sensitive peptides, a multiple reaction monitoring (MRM) method was developed that allows for the detection of 0.55% horse or pork in a beef matrix. To enhance sensitivity, we applied MRM³ experiments and were able to detect down to 0.13% pork contamination in beef. To the best of our knowledge, this is the first rapid and sensitive mass spectrometric method for the detection of horse and pork by use of MRM and MRM³.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Stacy; English, Shawn; Briggs, Timothy

    Fiber-reinforced composite materials offer light-weight solutions to many structural challenges. In the development of high-performance composite structures, a thorough understanding is required of the composite materials themselves as well as of methods for the analysis and failure prediction of the relevant composite structures. However, the mechanical properties required for the complete constitutive definition of a composite material can be difficult to determine through experimentation. Therefore, efficient methods are necessary for determining which properties are relevant to the analysis of a specific structure and for establishing a structure's response to a material parameter that can only be defined through estimation. This paper demonstrates the potential value of sensitivity and uncertainty quantification techniques in the failure analysis of loaded composite structures; the proposed methods are applied to the simulation of the four-point flexural characterization of a carbon fiber composite material. Utilizing a recently implemented, phenomenological orthotropic material model capable of predicting progressive composite damage and failure, a sensitivity analysis is completed to establish which material parameters are truly relevant to a simulation's outcome. A parameter study is then completed to determine the effect of the relevant material properties' expected variations on the simulated four-point flexural behavior, as well as to determine the value of an unknown material property. This process demonstrates the ability to formulate accurate predictions in the absence of a rigorous material characterization effort. Finally, the presented results indicate that a sensitivity analysis and parameter study can be used to streamline the material definition process, as the described flexural characterization was used for model validation.

  15. Parameter sensitivity analysis of a 1-D cold region lake model for land-surface schemes

    NASA Astrophysics Data System (ADS)

    Guerrero, José-Luis; Pernica, Patricia; Wheater, Howard; Mackay, Murray; Spence, Chris

    2017-12-01

    Lakes might be sentinels of climate change, but the uncertainty in their main feedback to the atmosphere - heat-exchange fluxes - is often not considered within climate models. Additionally, these fluxes are seldom measured, hindering critical evaluation of model output. Analysis of the Canadian Small Lake Model (CSLM), a one-dimensional integral lake model, was performed to assess its ability to reproduce diurnal and seasonal variations in heat fluxes and the sensitivity of simulated fluxes to changes in model parameters, i.e., turbulent transport parameters and the light extinction coefficient (Kd). A C++ open-source software package, Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), was used to perform sensitivity analysis (SA) and identify the parameters that dominate model behavior. The generalized likelihood uncertainty estimation (GLUE) was applied to quantify the fluxes' uncertainty, comparing daily-averaged eddy-covariance observations to the output of CSLM. Seven qualitative and two quantitative SA methods were tested, and the posterior likelihoods of the modeled parameters, obtained from the GLUE analysis, were used to determine the dominant parameters and the uncertainty in the modeled fluxes. Despite the ubiquity of the equifinality issue - different parameter-value combinations yielding equivalent results - the answer to the question was unequivocal: Kd, a measure of how much light penetrates the lake, dominates sensible and latent heat fluxes, and the uncertainty in their estimates is strongly related to the accuracy with which Kd is determined. This is important since accurate and continuous measurements of Kd could reduce modeling uncertainty.
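
    The GLUE step used here has a compact generic form: sample parameter sets Monte-Carlo-style, score each against observations with a likelihood measure (an NSE-based one below), discard non-behavioral sets, and summarize the behavioral simulations as predictive bounds. The Python sketch below runs this on an invented exponential-decay toy model, not the CSLM; a full GLUE analysis would also weight the quantiles by likelihood.

      import numpy as np

      rng = np.random.default_rng(4)
      t = np.linspace(0, 10, 50)
      model = lambda k: 10.0 * np.exp(-k * t)          # toy decay model
      obs = model(0.35) + rng.normal(0, 0.3, t.size)   # synthetic observations

      # GLUE: Monte Carlo sampling, NSE-based likelihood, behavioral cut
      ks = rng.uniform(0.05, 1.0, 5000)
      nse = np.array([1 - np.sum((obs - model(k)) ** 2)
                      / np.sum((obs - obs.mean()) ** 2) for k in ks])
      behavioral = ks[nse > 0.7]        # subjective behavioral threshold

      sims = np.array([model(k) for k in behavioral])
      lo, hi = np.percentile(sims, [5, 95], axis=0)    # predictive band
      print(f"{behavioral.size} behavioral sets; "
            f"band at t=0: [{lo[0]:.2f}, {hi[0]:.2f}]")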

  16. Segmentation of epidermal tissue with histopathological damage in images of haematoxylin and eosin stained human skin

    PubMed Central

    2014-01-01

    Background: Digital image analysis has the potential to address issues surrounding traditional histological techniques, including a lack of objectivity and high variability, through the application of quantitative analysis. A key initial step in image analysis is the identification of regions of interest. A widely applied methodology is that of segmentation. This paper proposes the application of image analysis techniques to segment skin tissue with varying degrees of histopathological damage. The segmentation of human tissue is challenging as a consequence of the complexity of the tissue structures and inconsistencies in tissue preparation, hence there is a need for a new robust method with the capability to handle the additional challenges materialising from histopathological damage. Methods: A new algorithm has been developed which combines enhanced colour information, created following a transformation to the L*a*b* colourspace, with general image intensity information. A colour normalisation step is included to enhance the algorithm's robustness to variations in the lighting and staining of the input images. The resulting optimised image is subjected to thresholding and the segmentation is fine-tuned using a combination of morphological processing and object classification rules. The segmentation algorithm was tested on 40 digital images of haematoxylin & eosin (H&E) stained skin biopsies. Accuracy, sensitivity and specificity of the algorithmic procedure were assessed through the comparison of the proposed methodology against manual methods. Results: Experimental results show the proposed fully automated methodology segments the epidermis with a mean specificity of 97.7%, a mean sensitivity of 89.4% and a mean accuracy of 96.5%. When a simple user interaction step is included, the specificity increases to 98.0%, the sensitivity to 91.0% and the accuracy to 96.8%. The algorithm segments effectively for different severities of tissue damage. Conclusions: Epidermal segmentation is a crucial first step in a range of applications including melanoma detection and the assessment of histopathological damage in skin. The proposed methodology is able to segment the epidermis with different levels of histological damage. The basic method framework could be applied to segmentation of other epithelial tissues. PMID:24521154
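
    The colourspace-plus-intensity pipeline described (L*a*b* transform, combined feature, thresholding, morphological clean-up) can be sketched with scikit-image. The Python sketch below runs on a synthetic H&E-like image; the colours, weighting of the feature, and structuring-element sizes are invented and are not the authors' tuned algorithm.

      import numpy as np
      from skimage.color import rgb2lab
      from skimage.filters import threshold_otsu
      from skimage.morphology import binary_closing, disk, remove_small_objects

      # Synthetic H&E-like image: pink field with a darker purple band
      rng = np.random.default_rng(5)
      img = np.ones((128, 128, 3)) * [0.95, 0.75, 0.85]   # eosin-pink field
      img[40:80, :, :] = [0.55, 0.35, 0.65]               # haematoxylin band
      img = np.clip(img + rng.normal(0, 0.02, img.shape), 0, 1)

      lab = rgb2lab(img)
      # Combine colour (a*) with inverted lightness, then threshold
      feature = lab[..., 1] + 0.5 * (100 - lab[..., 0])
      mask = feature > threshold_otsu(feature)
      mask = binary_closing(mask, disk(3))                # fill small gaps
      mask = remove_small_objects(mask, min_size=200)     # drop specks
      print("segmented pixels:", int(mask.sum()))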

  17. A Grazing Incidence Spectrograph as Applied to Vacuum Ultraviolet, Soft X-Ray, Pulsed Plasma Sources.

    DTIC Science & Technology

    A 2.2-meter variable-angle-of-incidence grazing incidence spectrograph is described for photographic recording of spectra down to 10 Å. A method for determining the absolute total fluence from a pulsed plasma source, given the absolute sensitivity of the instrument, is also described. Spectra are presented from a low-inductance sliding spark gap and a 20-kJ dense plasma focus. A program for spectrum analysis is included. (Modified author abstract)

  18. A microanalysis approach to investigate problems encountered in mycology.

    PubMed Central

    Thibaut, M.; Ansel, M.; de Azevedo Carneiro, J.

    1978-01-01

    X-ray microanalysis has been applied to the study of pathogenic fungi for the acquisition of chemical information. The technique of combined scanning electron microscopy and wavelength dispersive spectrometry is described. The chemical analysis depends on the characteristic x-ray spectrum excited by the electrons passing through the sample. This spectrum is analyzed by x-ray wavelength dispersion using crystal spectrometers. All the elements of the periodic system above beryllium can be detected with good sensitivity. PMID:619693

  19. Robust Sensitivity Analysis for the Joint Improvised Explosive Device Defeat Organization (JIEDDO) Proposal Selection Model

    DTIC Science & Technology

    2009-03-01

    According to Clemen, before we can begin to apply any methodology to a specific decision problem, the analyst must work with the decision makers to determine the values and objectives that relate to the decision in question (Clemen, 2001, 21). Once the value hierarchy is constructed, Clemen and Reilly suggest that trade-offs are made between competing objectives, introducing weights to determine their relative importance in the decision-making process (Skinner, 2001, 9).

  20. Prediction of zeolite-cement-sand unconfined compressive strength using polynomial neural network

    NASA Astrophysics Data System (ADS)

    MolaAbasi, H.; Shooshpasha, I.

    2016-04-01

    The improvement of local soils with cement and zeolite can provide great benefits, including strengthening slopes in slope-stability problems, stabilizing problematic soils and preventing soil liquefaction. Recently, dosage methodologies have been developed for improved soils based on a rational criterion, as exists in concrete technology. Numerous earlier studies have shown that the Unconfined Compressive Strength (UCS) of cemented sand (CS) can be related to its parameters (the voids/cement ratio) through power-function fits. Because the existing equations are incapable of estimating the UCS of zeolite-cemented sand mixtures (ZCS) well, artificial intelligence methods are used to forecast it. A polynomial-type neural network is applied to estimate the UCS from more simply determined index properties such as zeolite and cement content, porosity and curing time. In order to assess the merits of the proposed approach, a total of 216 unconfined compression tests were carried out. A comparison is made between the experimentally measured UCS and the predictions in order to evaluate the performance of the method. The results demonstrate that the generalized polynomial-type neural network has a great ability to predict the UCS. Finally, a sensitivity analysis of the polynomial model is applied to study the influence of the input parameters on the model output; it reveals that cement and zeolite content have a significant influence on the predicted UCS.
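
    The power-function relation referenced for plain cemented sands is easy to fit directly. The Python sketch below fits UCS = a·(voids/cement)^(-b) with SciPy on invented data points; the coefficients and values are illustrative, not the study's measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      vc = np.array([5.0, 8.0, 12.0, 17.0, 25.0, 40.0])   # voids/cement ratio
      rng = np.random.default_rng(6)
      ucs = 9000.0 * vc ** -1.2 * (1 + rng.normal(0, 0.05, vc.size))  # kPa

      power = lambda x, a, b: a * x ** (-b)
      (a, b), _ = curve_fit(power, vc, ucs, p0=(5000.0, 1.0))
      print(f"UCS ~ {a:.0f} * (V/C)^-{b:.2f} kPa")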
